As a side question, is there some contractual agreement that will not allow AMD to sell these large 20 CU APU designs on the regular PC market? Does Sony have exclusive rights to the chip and the techniques used to make such a large iGPU? Or is it die size and cost that scares AMD away from making the chip for the PC market, as there would be a much higher price compared to current APUs? I'm sure 4 Excavator cores can't be much bigger than 8 Jaguar, so if it's doable with 8 Jaguar it should be doable with 4 Excavator, especially if they put it on the 16/14nm FinFET node.
I'm sure Sony would only be bothered if AMD couldn't fulfill their orders. A PC built to offer exactly the same as the PS4 would generally cost more anyway.
They can't very well go from an eight FPU design to one with two/four depending on how you look at it, even if the clocks are much higher. I think you'd need to wait for the next generation of consoles.
I really hope the developers put this to good use. I am also particularly excited about multicore scaling, since single threaded performance has stagnated (yes, even in the Intel camp).
I think this shows that AMD has got a big boost from being the main partner with Microsoft on the Xbox. It's meant that AMD got a major seat at the top DX12 table from day one for a change. I hope to see some really interesting results now that it appears AMD hardware has finally been given some optimisation love, not just Intel's.
>>> Finally with 2 cores many of our configurations are CPU limited. The baseline changes a bit – DX11MT ceases to be effective since 1 core must be reserved for the display driver – and the fastest cards have lost quite a bit of performance here. None the less, the AMD cards can still hit 10M+ draw calls per second with just 2 cores, and the GTX 980/680 are close behind at 9.4M draw calls per second. Which is again a minimum 6.7x increase in draw call throughput versus DirectX 11, showing that even on relatively low performance CPUs the draw call gains from DirectX 12 are substantial. <<<
Can you please explain how that can be? I thought the main advantage of the new APIs is spreading the workload across all CPU cores (instead of one in DX11). If so, shouldn't the performance double in 2-core mode? Why is there a 6.7x increase in draw calls instead of 2x?
Just to make it clear: I know the new APIs like Mantle and DX12 also offer more direct access to the GPU with less CPU involvement. But this test is about draw calls issued from the CPU to the GPU. How can we boost the number of draw calls other than by using additional CPU cores?
1) Much, much less CPU overhead in submitting draw calls
2) Better scaling out with core count
Even though we can't take advantage of #2, we take advantage of #1. DX11ST means you have 1 relatively inefficient thread going, whereas DX12 means you have 2 (or 4 depending on HT) highly efficient threads going.
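To put some shape on those two points (a hedged illustration, not anything from the article): with the quoted 9.4M calls/s on 2 cores and a "minimum 6.7x" gain, the DX11 baseline works out to roughly 9.4M / 6.7, or about 1.4M calls/s, so even on 2 cores most of the win is the cheaper per-call path (#1), with the extra efficient thread (#2) stacked on top. Below is a minimal, hypothetical C++ sketch of the "record on many threads, submit once" pattern that DX12-class APIs expose; CommandList, Draw and the submission loop are stand-ins, not real D3D12 types or calls.

```cpp
// Hypothetical sketch of the DX12-style "record in parallel, submit once"
// model. CommandList, Draw and the submit step are illustrative stand-ins,
// not real Direct3D 12 types or calls.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Draw { uint32_t mesh; uint32_t material; };   // one recorded draw call
using CommandList = std::vector<Draw>;               // per-thread command list

// Each worker records its slice of the frame into its own list. Recording
// touches no shared state, so it scales with core count (point #2 above).
static void recordRange(CommandList& cl, uint32_t first, uint32_t count) {
    cl.reserve(count);
    for (uint32_t i = 0; i < count; ++i)
        cl.push_back(Draw{first + i, (first + i) % 64});
}

int main() {
    const uint32_t totalDraws = 100000;               // draws in one frame
    const unsigned workers =
        std::max(2u, std::thread::hardware_concurrency());
    const uint32_t perThread = totalDraws / workers;  // remainder ignored here

    std::vector<CommandList> lists(workers);
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t)            // parallel recording
        pool.emplace_back(recordRange, std::ref(lists[t]),
                          t * perThread, perThread);
    for (auto& th : pool) th.join();

    // A single cheap submission pass, analogous to handing the pre-built
    // command lists to the GPU queue in API order (point #1 above is that
    // each recorded call also costs far less CPU than a DX11 draw).
    uint64_t submitted = 0;
    for (const auto& cl : lists) submitted += cl.size();
    std::printf("submitted %llu draws from %u threads\n",
                (unsigned long long)submitted, workers);
    return 0;
}
```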
Does this mean we could see games developed to similar levels of graphical fidelity as current ones, but performance significantly higher?
In which case, current graphics hardware could, in theory, run a game in a 4k resolution at much higher framerates today, all other things being equal? Or run at a lower resolution at much higher sustained framerates (making a 120hz display suddenly a useful thing to have)?
Or, put another way: does the reduced CPU overhead, which allows for significantly more draw calls, mean that developers will only see a benefit with more detail/objects on the screen, or could someone, for instance, take a current game with a D3D11 renderer, add a D3D12 renderer to it, and get huge performance increases? I don't think we've seen that with Mantle, so I'm assuming it isn't the case?
You probably won't get 4K out of the middle to low-end cards of today, as it is also a memory size and bandwidth issue, but frame rates could improve I think.
Why do AMD and Nvidia fanboys continue to bitch at each other? Take a moment to realise we are all going to be getting great-looking games, but one thing holds us back: consoles. So direct your hate towards them, as they are holding the PC back.
Maybe because we are entering a new age when cards are not worth measuring on FPS alone in most cases, and that's going to take a lot of fun out of the fanboy wars.
To be honest, unless you are running multi-monitor/ultra-high-res, just save up $200 and choose the card that looks best in your case.
The article fails to address for the layman how exactly this will impact gameplay. Will games simply look better? Will AI get better? Will maps be larger and more complex? All of the above? And how much?
It's up to the developers. Ultimately DX12 frees up resources and removes bottlenecks; it's up to the developers to decide how they want to spend that performance. They could do relatively low draw calls and get some more CPU performance for AI, or they could try to do more expansive environments, etc.
Yeah, seems to me that DX12 isn't so much about adding new eye candy as it is a long-overdue back-end refresh to get rid of the old DX crap and bring it up to speed with modern hardware.
I would love to see what effect DirectX 12 has on the CPU side. All the articles so far have been about CPU scaling with different GPUs. It would be nice to see how AMD compares to Intel with better use of their higher core counts.
An AMD masterpiece. Does this superiority have something to do with AMD's Asynchronous Shaders? I know that Nvidia's Kepler and Maxwell asynchronous pipeline engines are not as powerful as the one in the GCN architecture.
On page 4: "Intel does end up seeing the smallest gains here, but again even in this sort of worst case scenario of a powerful CPU paired with a weak CPU, DX12 still improved draw call performance by over 3.2x."
Anand says in the article that GCN 1.0 is now working, and they test a HD 7970 to prove it. I have a 7950, the latest drivers and Win10, and it says "API not supported". Can Anand or anyone here explain why this might be?
nrexz - Friday, March 27, 2015 - link
How much can they do with it really? Games will still be developed to the limits of the consoles, not PCs.
Also, I'm not sure if I should be impressed or sad that Forbes published this yesterday.
nathanddrews - Friday, March 27, 2015 - link
That might be an oversimplification. If anything, this could result in console ports NOT running like crap. What's the biggest complaint about ports? That the game is tailored "to the metal" of a console, making porting to such a variety of PCs more difficult.
Think about it - when designing games around the Xbone/PS4, they tailor the games for eight cores and are not restricted by DX11 call limits or RAM, only by GPU power. But when they port to PC, those optimized engines have to slog through the DX11 pipeline before tapping into the GPU. With that restrictive pipeline removed (and GPUs shipping with more RAM), those game engines can operate more efficiently on multicore PCs.
It's not a cure-all (low-res textures), but I think this could be the start of a revolution in which ports stop sucking.
Flunk - Friday, March 27, 2015 - link
The current consoles both have 8GB of RAM, all of which is GPU-addressable, so texture resolution shouldn't really be a problem.
Also, the Xbox One is built around DX11, so this will be helpful for that. Frankly, DirectX 12 will be helpful because it will make Xbox One -> PC ports easier, so hopefully we'll either see more of them or see better ports.
It's not really a big worry; this is quite likely the last console generation anyway.
happycamperjack - Friday, March 27, 2015 - link
Only about 5 to 5.5GB of RAM in the consoles is usable. The rest is reserved by the OS.
Laststop311 - Saturday, March 28, 2015 - link
I think it's actually 3-3.5GB reserved for the system, so 4.5-5GB available to the GPU.
nathanddrews - Friday, March 27, 2015 - link
My comment about low-res textures has to do with the fact that Xbone/PS4 don't always use the same high-res texture packs available to PC users and that DX12 won't help with that in either scenario.
The API Xbone runs is far removed from its PC counterpart. It's a heavily modified "Direct3D 11.X" that is built exclusively for Xbone, which removes the overhead that Windows DX11 has to deal with. In PC terms, it's effectively a superset of DX11.2 features running with DX12 efficiency.
"Microsoft, though, claims that the Direct3D 11.X for Xbox One features significant enhancements in the area of runtime overhead, which results in a very streamlined, 'close to metal' level of runtime performance."
DERSS - Saturday, March 28, 2015 - link
"Close to metal"?Maybe they meant "Close to silicon"? Or they meant to compare with Apple's Metal for iOS?
deruberhanyok - Saturday, March 28, 2015 - link
It's just an expression.
Way back before Apple had "Metal", ATI had "Close to Metal" (http://en.wikipedia.org/wiki/Close_to_Metal), and even earlier than that, S3 had their own API, also called Metal.
It just means being able to code with as little overhead as possible. The idea is to have very little between the application and the hardware running it, to get as close to the maximum possible performance as you can.
Navvie - Tuesday, March 31, 2015 - link
The term goes back to the C64 and Amiga demo scenes. Programming in assembler without an API and literally "hitting the metal".
Silicon is a metalloid element, so "hitting the metal" doesn't really need correcting.
Kidster3001 - Wednesday, April 1, 2015 - link
The 'metal' comes from an old saying: 'bare metal'. It's still used in the compute industry when referring to special testing that bypasses OS and driver layers, talking to the silicon directly.
jtrdfw - Monday, April 6, 2015 - link
Actually it comes from Terminator 2, when Arnold is dropped into the molten metal.
Zepid - Friday, March 27, 2015 - link
Not all 8GB is addressable. Only 3.5GB of memory on the PS4 is, with an extra 500MB of virtual memory waiting in the wings. The Xbox One has 4GB addressable by the GPU, but a much weaker GPU.
PlayStationBSD consumes a considerable amount of overhead, more than the Hyper-V Windows OS on the Xbox One.
Source: Console development at EA.
Laststop311 - Saturday, March 28, 2015 - link
Yeah, you're right - 4GB max for the Xbox One and 4GB max for the PS4, but only if they tap into the extra 500MB offered.
Samus - Saturday, March 28, 2015 - link
DirectX is an API, i.e., software. There is nothing stopping Microsoft and Sony from enabling DX12 in their consoles and updating the devkits.
DERSS - Saturday, March 28, 2015 - link
Sony obviously cannot have anything to do with DX12, but nor do they have to in the first place -- AMD's Mantle transitions into OpenGL's Vulkan project, so all games that are OpenGL-based will be able to use it. Before that, AMD can help with using pure Mantle on the PS4 as a standalone thing, as it is their GPU in there anyway.
Gigaplex - Monday, March 30, 2015 - link
Sony could, if they managed to licence the API. There is no technical limitation preventing them from porting the DirectX API.
akamateau - Thursday, April 30, 2015 - link
Actually, Sony would have some difficulty, as they are not using a Windows kernel as Microsoft is doing.
The PS4 uses Orbis OS, derived from FreeBSD. DX12 obviously will not run on that, though I am sure that Sony has that issue solved.
akamateau - Thursday, April 30, 2015 - link
Nah... DX12 is going into XBOX probably by October.
http://www.vrworld.com/2015/04/29/directx-12-on-xb...
Current XBOX games will see a performance boost but the greatest boost will happen when XBOX game developers start writing DX12 games.
imaheadcase - Friday, March 27, 2015 - link
You also deal with the terrible UI in games for console users. Many small things in games add up to major complaints for PC users.
Frenetic Pony - Friday, March 27, 2015 - link
This is somewhat accurate. With DX12/Vulkan, games should actually be easier to port, as explicit memory control, tight thread controls, and cheap draw calls are all assumed to be there when writing for consoles; that code is then beaten repeatedly like an abused stepchild to get it to play nice with DX11, in large part because it's not known what exactly the API and card are actually doing together.
The end result should be that minimum requirements to play games - which crept up a lot over the last generation as the consoles were better understood but the PC version had to be brute-forced into getting the API to behave - should creep up a lot less. But considering both the CPU and GPU of a high-end PC are far beyond both consoles already, I'm not sure how much benefit end users will directly see. Maybe there will be a setting to stop culling out small objects with distance; that would be an easy abuse of all those extra draw calls. But otherwise I can see the low to mid end benefitting a lot more than someone running CrossFire 390s or something.
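To illustrate the "explicit memory control" part of that: console and DX12/Vulkan-class APIs let the engine own a large block of memory and recycle it itself, gated only by a per-frame fence, instead of the driver renaming buffers behind a DX11-style map/discard. The sketch below is hypothetical C++ under that assumption - the "fence" is just a counter standing in for GPU progress, and none of the types correspond to a real API.

```cpp
// Hypothetical sketch of an explicit per-frame ring allocator, the kind of
// memory management console/DX12-style APIs push onto the engine.
// GPU progress is simulated with a plain counter; no real graphics API is used.
#include <array>
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kFramesInFlight = 2;                 // CPU may run 2 frames ahead

struct FrameBlock {
    std::vector<uint8_t> storage;                  // engine-owned memory
    size_t used = 0;
    uint64_t fenceValue = 0;                       // GPU progress needed to reuse
};

struct FrameRing {
    std::array<FrameBlock, kFramesInFlight> blocks;
    uint64_t completedFence = 0;                   // simulated GPU progress

    explicit FrameRing(size_t bytesPerFrame) {
        for (auto& b : blocks) b.storage.resize(bytesPerFrame);
    }
    // Start of frame N: recycle this block only once the "GPU" has consumed
    // the commands that referenced it (a real engine would wait on a fence).
    FrameBlock& beginFrame(uint64_t frameIndex) {
        FrameBlock& b = blocks[frameIndex % kFramesInFlight];
        assert(completedFence >= b.fenceValue && "would stall on the GPU here");
        b.used = 0;
        return b;
    }
    // Sub-allocate constants/vertices for this frame; no driver involvement.
    void* alloc(FrameBlock& b, size_t bytes) {
        assert(b.used + bytes <= b.storage.size());
        void* p = b.storage.data() + b.used;
        b.used += bytes;
        return p;
    }
    void endFrame(FrameBlock& b, uint64_t frameIndex) {
        b.fenceValue = frameIndex + 1;             // GPU must reach this value
        completedFence = frameIndex + 1;           // pretend the GPU kept up
    }
};

int main() {
    FrameRing ring(1 << 20);                       // 1 MiB per frame in flight
    for (uint64_t frame = 0; frame < 4; ++frame) {
        FrameBlock& b = ring.beginFrame(frame);
        void* constants = ring.alloc(b, 256);      // e.g. per-draw constants
        (void)constants;
        ring.endFrame(b, frame);
        std::printf("frame %llu used %zu bytes\n",
                    (unsigned long long)frame, b.used);
    }
}
```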
Laststop311 - Saturday, March 28, 2015 - link
The benefit will come mainly for people using chips with many cores but poorer single-threaded performance. That's basically quad-core AMD APU users, 8-core FX chips and the 6-core Phenom II X6, since users of those CPUs are the people most likely to be CPU-bottlenecked due to DX11 only caring about single-thread performance. Since Intel chips have top-tier single-threaded performance they were not as restricted in DX11, and the GPU was usually the bottleneck to begin with, so not much changes there - the GPU will still be shader-bound.
silverblue - Saturday, March 28, 2015 - link
I'm glad somebody mentioned the Phenom II X6. I'd be very interested to see how it copes, particularly against the 8350 and 6350.
akamateau - Thursday, April 30, 2015 - link
The AMD A6 APU has 4.4 million draw calls per second running DX12. An Intel i7 4560 and GTX 980 only manage 2.2 MILLION draw calls running DX11!!!!
DX12 allows a $100 AMD APU by itself to outperform a $1500 Intel/nVidia gaming system running DX11.
That is with 4 CORES. Single core performance is not relevant any more.
All things being equal, DX12 will give AMD APU and Radeon dGPU a staggering performance advantage over Intel/nVidia.
FlushedBubblyJock - Tuesday, March 31, 2015 - link
What's the mystery? It's Mantle for everyone - that's what DX12 essentially is.
So just look at what Mantle did.
Close enough.
StevoLincolnite - Friday, March 27, 2015 - link
The consoles are limited to 6 cores for gaming, not 8, and those 6 cores are roughly equivalent to a Haswell Core i3 in terms of total performance (or a high-clocked Pentium Anniversary!).
Remember, AMD's fastest high-end chips struggle to beat Intel's 4-year-old mid-range... Take AMD's low-end, low-powered chips and it's a laughable situation.
But that's to be expected; consoles cannot afford to have high-end components, they are cost-sensitive low-end devices.
Let's just hope that Microsoft and Sony do not beat this horse for all it's worth and we get a successor out within the next 4ish years.
The Xbox One also uses a modified version of DirectX 11 for its high-level API.
The Xbox One also has a low-level API which developers can target to extract more performance.
Basically, once DirectX 12 is finalized for the PC it will be modified and ported to the Xbox One, giving developers who do not buy a ready-made game engine like Unreal, CryEngine etc. a performance boost without blowing out development time significantly by being forced to target the low-level API.
The same thing is also occurring on the PlayStation: the high-level API is getting an overhaul thanks to Vulkan, and it still has its low-level API for developers to target, of course.
RAM is still a bit of an issue too; 5-5.5GB of RAM for the game and graphics is pretty tiny, and it may become a real limiter in the coming years, slightly offset by hard drive asset streaming.
To compare it to a PC, the Xbox One is like a 3GHz Core i3, 4GB of RAM, and a Radeon 7770 1.5GB graphics card.
Change the GPU to a Radeon 7850 for the PS4 and that's what we have for the next half decade or more.
Laststop311 - Saturday, March 28, 2015 - link
Correct me if I'm wrong, but I believe the PS4 is built with a downclocked 7870 (20 CU), except the PS4's iGPU has 2 CUs disabled as well as the downclock. The 7850 is a 16 CU part, but I guess the 2 extra CUs combined with the downclock would make the PS4 behave like a 7850. The Radeon 7770 is only 10 CUs, and the Xbone has 12 CUs but a lower clock. So are you basically saying that for the PS4 and Xbone the extra 2 CUs plus the lower clock speed make them equal to those desktop cards? Because they really aren't exactly those cards. In some situations the higher clock speed matters more and in some the extra CUs matter more. In some situations the PS4 may behave more like a 7870 than a 7850, and the Xbone may be more like a 7790 than a 7770.
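For a rough sanity check of those equivalences, the usual back-of-the-envelope number is peak FP32 throughput: each GCN CU has 64 shaders, and peak FLOPS = shaders x 2 x clock. The sketch below uses the commonly cited CU counts and reference clocks (treat them as assumptions, and peak FLOPS as only a loose proxy for real game performance).

```cpp
// Back-of-the-envelope GCN peak-FP32 comparison. CU counts and clocks are the
// commonly cited reference figures; peak FLOPS is only a rough proxy for
// real game performance (bandwidth, ROPs and clocks under load all matter).
#include <cstdio>

struct Gpu { const char* name; int computeUnits; double clockGHz; };

static double teraflops(const Gpu& g) {
    const int shadersPerCU = 64;                    // GCN: 64 lanes per CU
    return g.computeUnits * shadersPerCU * 2.0 * g.clockGHz / 1000.0;
}

int main() {
    const Gpu gpus[] = {
        {"PS4 (18 CU)",      18, 0.800},
        {"Xbox One (12 CU)", 12, 0.853},
        {"Radeon 7870",      20, 1.000},
        {"Radeon 7850",      16, 0.860},
        {"Radeon 7790",      14, 1.000},
        {"Radeon 7770",      10, 1.000},
    };
    for (const Gpu& g : gpus)
        std::printf("%-18s ~%.2f TFLOPS\n", g.name, teraflops(g));
    // On this crude metric the PS4 lands between a 7850 and a 7870, and the
    // Xbox One lands around a 7770/7790, which is the equivalence above.
}
```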
Gigaplex - Monday, March 30, 2015 - link
The console CPUs are actually significantly slower than a Haswell i3. The Pentium chips are a closer comparison due to the lack of Hyper-Threading.
mr_tawan - Monday, March 30, 2015 - link
'PC is not meant to be played' (TM)
(Just kidding though)
If the developers do their jobs right, a high-spec PC still gains a big advantage over the consoles (especially in the frame rate area). However, PCs themselves can also be a drag (remember those Atom/Pentium-equipped PCs).
JonnyDough - Tuesday, March 31, 2015 - link
Half the time it's just that they don't even bother updating menus and controls. Skyrim is a prime example.
Veritex - Friday, March 27, 2015 - link
All the next-generation consoles are based on an AMD eight-core CPU and the GCN architecture (with Nintendo possibly opting for an ARM CPU paired with GCN), so developers will just have to optimize once for the consoles and have an easier time porting to PCs.
It is interesting to see the AMD R9-285 Tonga consistently outperform Nvidia's high-end GTX 980, and it makes you wonder how incredibly fast the next-generation R9-390X Fiji and 380X could be.
Barilla - Friday, March 27, 2015 - link
Yeah, the 285 might outperform the 980, but keep in mind this is a very specific test focusing on only one aspect of rendering a frame. I mean, a man can accelerate faster than an F1 car over a very short distance of a few meters, but that doesn't really mean much in the real world.
Keeping my fingers crossed though, since I've always been an AMD fan and I hope they can regain some market share.
AndrewJacksonZA - Friday, March 27, 2015 - link
What @Barilla said.
akamateau - Thursday, April 30, 2015 - link
ALL Radeon cards will outperform nVidia if the Radeon dGPU is fed by AMD silicon. Intel degrades AMD Radeon silicon.
lowlymarine - Friday, March 27, 2015 - link
The Wii U is based on PowerPC 7xx/G3 and RV770, not ARM or GCN. Unless you're referring to the recently-announced "NX", which for all we know may not even be a traditional home console.
eanazag - Friday, March 27, 2015 - link
I did some math on what available information there is for the 390 versus the Titan and it seems to go toe-to-toe. If it has a lead, it won't be huge. I compared some leaked slides with the numbers Anandtech had for the Titan review. I suspect it will use a lot more electricity though and create more heat.
We can likely expect it to have much more compute built-in.
Refuge - Friday, March 27, 2015 - link
It doesn't really say anything about the performance of the 285x or the 980, or any of the others for that matter.
Just because they can make a couple million more draw calls a second doesn't mean you will ever see anything.
It just means the video card is really good at screaming for more work, not doing it. Hell, these draw calls are all way beyond anything realistic anyhow; you will NEVER have one of these GPUs make half as many draw calls as shown in this test in any real-world usage scenario.
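Some quick arithmetic supports that: dividing the throughput figures quoted in this thread by a 60 fps target gives per-frame draw-call budgets far beyond the low-thousands-to-roughly-10k calls per frame that DX11 titles are commonly said to issue. A small sketch of that division (the "typical game" line is a commonly cited ballpark, not a measurement):

```cpp
// Draw calls per frame implied by the throughput figures quoted in this
// thread, at 60 fps. The "typical DX11 game" line is a commonly cited
// ballpark, included only for scale.
#include <cstdio>

int main() {
    struct Case { const char* label; double callsPerSecond; };
    const Case cases[] = {
        {"DX11 single-thread (figure quoted above)", 2.2e6},
        {"DX12, 2 cores (GTX 980/680, article)",     9.4e6},
        {"DX12, 2 cores (AMD cards, article)",       10.0e6},
        {"Mantle, FX-8320 + 290X (comment below)",   14.4e6},
    };
    const double fps = 60.0;
    for (const Case& c : cases)
        std::printf("%-42s -> ~%.0fk draw calls per frame\n",
                    c.label, c.callsPerSecond / fps / 1000.0);
    std::printf("%-42s -> ~3k-10k draw calls per frame\n",
                "Typical DX11 game (ballpark)");
}
```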
Vayra - Saturday, March 28, 2015 - link
If anything, I would say that the Nvidia cards are more refined and more balanced, based on these draw call results. Nvidia has optimized more to get the most out of DX11, while AMD shows a lead in actual hardware capacity through greater gains, both relative and absolute, in draw call numbers. It is the very same trend you also see in the amount of hardware both companies use in their top-tier cards to achieve similar performance - AMD uses more, Nvidia uses less and wins on efficiency gains.
Crunchy005 - Monday, March 30, 2015 - link
Well, AMD does win at double precision, even over the Titan X. Nvidia pulled a lot of the double-precision hardware to save on power, one of the ways Maxwell is more efficient. This isn't a bad thing for the gaming community, but it ruins the Titan X for a lot of compute scenarios. So Nvidia really did lose out a lot in one area to beat AMD at efficiency.
http://anandtech.com/show/9059/the-nvidia-geforce-...
akamateau - Thursday, April 30, 2015 - link
If Anandtech benched Radeon silicon being fed by an AMD FX or A10 then NO INTEL/nVidia silicon would even come close to AMD's GCN-enabled Asynchronous Shader hardware. Intel and nVidia are now second-rate silicon in a DX12 world.
Why do you think so many folks trashed MANTLE. FUD!!!!
xenol - Friday, March 27, 2015 - link
Even if there were no consoles, games wouldn't be targeted at high-end PCs. They will be targeted at lower-end PCs to increase the amount of market share they can reach. Maybe once in a blue moon, some developer who doesn't care about that will make the next Crysis.
Vayra - Saturday, March 28, 2015 - link
Oh hi Star Citizen, how are you today.
Michael Bay - Sunday, March 29, 2015 - link
Wait until they hit the optimization stage.
Refuge - Friday, March 27, 2015 - link
As mentioned below, this will make ports much more scalable to PCs. So when a game meant to run on 6-year-old hardware meets brand-new hardware, it isn't like taking a Porsche 911 from your city streets to a mud pit like it is now. It will be more like going from the city to the Autobahn.
Ports will actually run better on better computers, not worse. Also, it will speed up the release of ports; in fact, in a few years I wouldn't be surprised if multiplatform games were released on consoles and PCs at the same time as standard policy.
Belgarathian - Friday, March 27, 2015 - link
I'm more interested in being able to display more dynamic environments with more artifacts, more units on screen, etc.
Can someone please remaster Supreme Commander: Forged Alliance for DX12, and fix the bad AI? SC:FA is by far the best RTS; it's a shame that it was released before the technology was there to support it.
DarkXale - Sunday, March 29, 2015 - link
The performance issues with SupCom Forged Alliance Forever (or you're just doing it wrong) are from the sheer quantity of units the game needs to manage, not the number of issued draw calls.
The 'gameplay' simply requires the CPU to do too much - all of which must be done in a single thread - for any machine to reasonably manage in large games. DX12 can't help much with that.
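That is essentially Amdahl's law: DX12 shrinks only the draw-submission slice of the frame, and a serial simulation slice is untouched. A hedged sketch with made-up fractions (illustrative assumptions, not measurements of SupCom):

```cpp
// Amdahl-style estimate: if most of a SupCom-sized frame is a single-threaded
// simulation, cutting draw-call overhead (the DX12 win) barely moves the total.
// The 70/30 split and the 4x submission speedup are illustrative assumptions.
#include <cstdio>

int main() {
    const double frameMs        = 33.3;  // ~30 fps target
    const double simFraction    = 0.70;  // serial game simulation (assumed)
    const double submitFraction = 0.30;  // draw-call submission (assumed)
    const double submitSpeedup  = 4.0;   // assumed DX12 gain on submission only

    const double before = frameMs;
    const double after  = frameMs * simFraction
                        + frameMs * submitFraction / submitSpeedup;

    std::printf("frame time: %.1f ms -> %.1f ms (%.2fx overall)\n",
                before, after, before / after);
    // ~33.3 ms -> ~25.8 ms, about 1.29x overall despite a 4x faster API path,
    // because the serial simulation still dominates.
}
```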
FlushedBubblyJock - Tuesday, March 31, 2015 - link
Q: " How much can they do with it really? "A: "How much did mantle do ?"
akamateau - Thursday, April 30, 2015 - link
Hmmmm... yes, you are right. Partially. Console games will be developed to the limit of the console, and Microsoft just announced that DX12 was going into the XBOX.
The AMD 8-core Jaguar will scale much higher than 4.4 million draw calls on the XBOX.
But you also have to realise that GAMES are about the story and eye candy. Games studios are also highly competitive. It is the nature of business that all things evolve to the lowest limiting factor. Until now DX11 was THE factor that limited the size and complexity of games. DX12 removes those limits.
Expect games to be photorealistic at 4K easily!
So the decision the consumer must make is simple. Great gaming with expensive Intel silicon or better gaming with inexpensive AMD silicon!!!
akamateau - Thursday, April 30, 2015 - link
FINALLY THE TRUTH IS REVEALED!!!
The AMD A6-7400K CRUSHES INTEL i7 IGP by better than 100%!!!
But Anand is also guilty of a WHOPPER of a LIE!
Anand uses an Intel i7-4960X. NOBODY uses RADEON with an Intel i7 CPU. But rather than use either an AMD FX CPU or an AMD A10 CPU, they decided to degrade AMD's scores substantially by using an Intel product which is not optimised to work with Radeon. The Intel i7 also is not GCN or HSA compatible, nor can it take advantage of Asynchronous Shader Pipelines either. Only an IDIOT would feed a Radeon GPU with an Intel CPU.
In short Anand's journalistic integrity is called into question here.
Basically RADEON WOULD HAVE DESTROYED ALL nVIDIA AND INTEL COMBINATIONS if Anand benchmarked Radeon dGPU with AMD silicon. By Itself A6 is staggeringly superior to Intel i3, i5, AND i7.
Ryan Smith & Ian Cutress have lied.
As it stands A10-7700k produces 4.4 MILLION drawcalls per second. At 6 cores the GTX 980 in DX11 only produces 2.2 MILLION draw calls.
DX12 enables a $150 AMD APU to CRUSH a $1500.00 Intel/nVidia gaming setup that runs DX11.
Here is the second lie.
AMD Asynchronous Shader Pipelines allow for 100% multithreaded processing in the CPU feeding the GPU, whether it is an integrated APU or an 8-core FX feeding a GPU. What Anand should also show is 8-core scaling using an AMD FX processor.
Anand will say that they are too poor to use an AMD CPU or APU set up. Somehow I think that they are being disingenuous.
NO INTEL/nVidia combination can compete with AMD using DX12.
RandomUser15 - Friday, March 27, 2015 - link
First and foremost this is the first comment, also great article, very well done!
RandomUser15 - Friday, March 27, 2015 - link
Damn, nooooooooo.
gauravnba - Monday, March 30, 2015 - link
hahaha! You got shunted to page 4 of comments instead.
AndrewJacksonZA - Friday, March 27, 2015 - link
@RandomUser15: Almost... ;-)
http://imgur.com/AAtozki
ntam - Friday, March 27, 2015 - link
Good news, but bad because of who owns the tech. I would like to see, someday, an API implementation from a separate standards organization, decoupled from the operating system, because in this case Microsoft will force all users to upgrade their version of Windows through the cash register: "do you want the new DirectX? just pay for it." Even with the latest hardware, on Windows 7 you can't use DirectX 12 unless you have Windows 10 (in the near future), and the same will happen with Windows 11, 12 and onwards.
AleXopf - Friday, March 27, 2015 - link
Windows 10 will be a free upgrade for Windows 7/8 users.
piroroadkill - Friday, March 27, 2015 - link
If you paid any attention at all, you'd know that's already being done, and it's called Vulkan.
redviper - Friday, March 27, 2015 - link
The expectation is that Windows 10 will be the last major version of Windows. After this it will be kept evergreen through Windows Update.
And it's free (at least if you upgrade within the year), and will be on all manner of hardware.
inighthawki - Friday, March 27, 2015 - link
An open standard does not solve the problem as well as you'd think, or OpenGL would own the majority market share for games. Open standards have their own set of issues.
Michael Bay - Friday, March 27, 2015 - link
Development of those technologies costs a lot of money.
That's why MS offsets this cost to the user by denying DX12 to Win7/8.
Cry all you want, they are a for-profit company that acts as a for-profit company, and OpenGL will be bogged down as long as it is designed by a committee.
lordken - Sunday, March 29, 2015 - link
Then stay with an AMD GPU, where you get Mantle/Vulkan on Win7. There is absolutely no need to rush for Win10 unless you own an nvidia GPU.
I would pretty much guess that new games built on DX12 will pretty much have a Mantle/Vulkan render path as well.
Actually, in the end, if Vulkan pulls it off, it may damage M$'s name/reputation for having "forced" (though you still have the choice not to) gamers to upgrade/move to Win10, as Vulkan should be cross-platform and should run on nvidia, unless they decide to screw over their users.
NikosD - Friday, March 27, 2015 - link
Could you add the Broadwell iGPU too?
With its 48 EUs, more than double Haswell GT2 and with a better internal architecture, we could see much better results - or not.
A useful addition I think.
chizow - Friday, March 27, 2015 - link
Yeah buddy! Bring on DX12, aka Low Level API Done Right.
Also fun to note, all the rumors and speculation about AMD's poor DX11 MT driver support look to be real (virtually no DX11 ST to MT scaling, and both lower than Nvidia DX11), but it is also obvious their efforts with Mantle have given them a nice base for their DX12 driver, at least in synthetic max draw call tests.
Main benefits for DX12 will be for CPU limited games on fast hardware, especially RTS and MMO type games where the CPU tends to be the bottleneck. It will also be interesting to see what impact it has on higher-end set-ups like high-end multi-GPU. Mantle was supposed to show us the benefits in scaling, but due to piecemeal support and the fact multi-GPU needed much more attention with Mantle, CF was often left in a broken state.
Barilla - Friday, March 27, 2015 - link
I really hope DX12 and its increase in draw call throughput will bring us greater scene complexity - I mean more "real" objects that can be interacted with, rather than tricks like textures that make us think there is depth to them while in reality it's just clever artwork. Also objects like leaves, stones, grass etc. I think this would bring much better immersion in games than just trying to constantly up the polygon count on characters and find new ways to animate hair. Maybe I'm the odd one, but I often focus much more on the game world than on the characters.
MobiusPizza - Friday, March 27, 2015 - link
I can see how FutureMark can help make the next gen MineCraft title :P
tipoo - Friday, March 27, 2015 - link
Can Intel do any more on the driver side to see more DX12 gains, or is it all GPU front-end limited at this point?
mczak - Friday, March 27, 2015 - link
I suspect for the chips listed it's about as good as it will get. Note that these are all Haswell GT2 chips - GT3 doubles up on some fixed function blocks in the frontend, though I don't know if it would help (the command streamer is supposedly the same, so if it's limited there it wouldn't help).
The results could be better with Broadwell, though (be it GT2 or GT3).
tipoo - Friday, March 27, 2015 - link
The older article on DX12 showed GT3/3e don't see much more gain past GT2, because while many things are doubled, the front end isn't. Command input limited.
I haven't heard that Broadwell is different there.
eanazag - Friday, March 27, 2015 - link
DX12 is exciting for PC laptop and tablet gaming.
My desktop can heat the room when gaming, and I believe that DX12 and FPS limits could allow me to play cooler next summer. I'd like to see some FPS-limiting options if it can reduce heat. During the winter I don't care. I pretty much stop gaming during the summer; at least with the desktop.
martixy - Friday, March 27, 2015 - link
I like this. Very much. The industry needs a clean reset and this is a perfect opportunity...
Now if only the business side was as easy to overhaul as the technical side. :)
KaarlisK - Friday, March 27, 2015 - link
I can see the 4770R (GT3e) in the system specifications, but I do not see it in any of the charts. What happened?
tipoo - Friday, March 27, 2015 - link
That one I'd definitely be interested in; would the higher bandwidth it has allow any more DX12 gains?
tipoo - Friday, March 27, 2015 - link
4X gains seen here:
http://www.pcworld.com/article/2900814/tested-dire...
Ryan Smith - Friday, March 27, 2015 - link
Sorry, that was an error in that table. We didn't have the 4770R for this article.
geekfool - Saturday, March 28, 2015 - link
Hmm, PCWorld says "All of our tests were performed at 1280x720 resolution at Microsoft's recommendation." If that's the case with your tests too, then it seems the real test today should be 1080p, plus a provisional 4K/UHD-1 run, to get a set of future core numbers regardless of MS's wishes...
Ryan Smith - Sunday, March 29, 2015 - link
720p is the internal rendering resolution, and is used to avoid potential ROP bottlenecks (especially at the early stages). This is supposed to be a directed, synthetic benchmark, and the ability to push pixels is not what is intended to be tested.
That said, the actual performance impact from switching resolutions on most of these GPUs is virtually nil since there's more than enough ROP throughput for all of this.
Winterblade - Friday, March 27, 2015 - link
Very interesting results and a very informative article. The only small caveat I find is that for a proper comparison of 2, 4, and 6 cores (which seems to be one of the focal points of the article), the clock should be the same for all three configurations; it is a bit misleading otherwise. The difference seems to be around 10-15% going from 4 to 6 cores, but there is also a 10% difference in clock rate between them.
chizow - Friday, March 27, 2015 - link
Fair point, it almost looks like they are trying to artificially force some contrast in the results there. The biggest issue I have with that is you are more likely to find higher-clocked 4-core chips in the wild, since they tend to overclock better than the TDP- and size-limited 6-core chips. That's the tradeoff any power user faces: a higher overclock on that 4790K (and soon Broadwell-K) chip, or the larger L3 cache and more cores of a 6-core chip with lower OC potential.
dragonsqrrl - Friday, March 27, 2015 - link
I got 1.7M draw calls per second with an i7-970 and GTX 480 in DX11, and 2.3M in DX11MT. Pretty much identical to every other Nvidia card benchmarked. Interested to see what kind of draw call gains I get with a 480 once Windows 10 and DX12 come out with finalized drivers.
godrilla - Friday, March 27, 2015 - link
Vulkan seems more attractive for devs though. The battle of the APIs incoming.
junky77 - Friday, March 27, 2015 - link
Well, currently the limiting factor is almost always the GPU, even with a powerful GPU, unless we are talking about AMD CPUs, which are TDP limited in many cases, or an i3, and even then the differences are not great. So I think it's mainly a look toward the future, potentially allowing scenes with far more draw calls.
Mat3 - Friday, March 27, 2015 - link
Would be interesting to see how the FX-8350 compares to the i7-4960X in this test.
silverblue - Saturday, March 28, 2015 - link
Well, varying results aside, I've heard of scores in the region of eight million. That would theoretically (if other results are anything to go by) put it around the level of a mildly-overclocked i3 (stock is about 7.5m). It's definitely worth bearing in mind the more-than-six-cores scaling limitation showcased by this test - AMD's own tests show this happening to the 8350, meaning that the Mantle score - which can scale to more cores - should be higher. Incidentally, the DX11 scores seem to be in the low 600,000s, with a slight regression in the MT test. I saw these 8350 figures in some comments somewhere but forgot where, so I do apologise for not being able to back them up; however, the Intel results can be found here: http://www.pcworld.com/article/2900814/tested-dire...
I suppose it's all hearsay until a site actually does a CPU comparison involving both Intel and AMD processors. Draw calls are also just a synthetic; I can't see AMD's gaming performance leaping through the stratosphere overnight, and Intel stands to benefit a lot here as well.
silverblue - Saturday, March 28, 2015 - link
Sorry, stock i3 is about 7.1m.
oneb1t - Saturday, March 28, 2015 - link
My FX-8320 @ 4.7GHz + R9 290X does 14.4 mil :) in Mantle.
Laststop311 - Friday, March 27, 2015 - link
I think AMD APUs are the biggest winner here. Since draw calls help lift CPU bottlenecks, and the APUs have four weaker cores, DX11's inability to really spread draw call submission across multiple cores means the weak single-threaded performance of the APUs could really hold things back. DX12 will be able to shift the bottleneck back to the iGPU of the APUs for a lot of games, and really help make more games playable at 1080p with higher settings, or at least the same settings and smoother.

If only AMD would release an updated version of the PS4's 20 CU design using GCN 1.3 cores plus 16GB of 2nd-generation 3D HBM memory directly on top, usable by either the CPU or GPU. Not only would you have a really fast 1080p-capable gaming chip, you could design radically new motherboards that omit RAM slots entirely. You could have new mini-ITX boards with room for more SATA ports, USB headers, and fan headers, and more room available for VRMs, and cool it with good water cooling like the Thermaltake 3.0 360mm rad AIO and a good TIM like Coollaboratory Liquid Metal Ultra. Or you could take it in the super-compact direction, create a board even smaller than mini-ITX, and turn it into an ultimate HTPC. On top of the reduced size, the whole system would benefit from the massive bandwidth (1.2TB/sec) and reduced latency. The memory pool could respond in real time to add more space for the GPU as necessary, and since APUs are really only for 1080p, that would never be a problem. I know this will probably never happen, but if it did I would 100% build my HTPC with an APU like that.
Laststop311 - Saturday, March 28, 2015 - link
As a side question, is there some contractual agreement that will not allow AMD to sell these large 20 CU APU designs on the regular PC market? Does Sony have exclusive rights to the chip and the techniques used to make such a large iGPU? Or is it die size and cost that scares AMD away from making the chip for the PC market, as there would be a much higher price compared to current APUs? I'm sure four Excavator cores can't be much bigger than eight Jaguar cores, so if it's doable with eight Jaguar it should be doable with four Excavator, especially if they put it on the 16/14nm FinFET node?
silverblue - Saturday, March 28, 2015 - link
I'm sure Sony would only be bothered if AMD couldn't fulfill their orders. A PC built to offer exactly the same as the PS4 would generally cost more anyway. They can't very well go from an eight-FPU design to one with two/four, depending on how you look at it, even if the clocks are much higher. I think you'd need to wait for the next generation of consoles.
FriendlyUser - Saturday, March 28, 2015 - link
I really hope the developers put this to good use. I am also particularly excited about multicore scaling, since single-threaded performance has stagnated (yes, even in the Intel camp).
jabber - Saturday, March 28, 2015 - link
I think this shows that AMD got a big boost from being the main partner with Microsoft on the Xbox. It meant that AMD got a major seat at the top DX12 table from day one for a change. I hope to see some really interesting results now that AMD hardware finally appears to have been given some optimisation love, rather than just Intel.
Tigran - Saturday, March 28, 2015 - link
>>> Finally with 2 cores many of our configurations are CPU limited. The baseline changes a bit – DX11MT ceases to be effective since 1 core must be reserved for the display driver – and the fastest cards have lost quite a bit of performance here. None the less, the AMD cards can still hit 10M+ draw calls per second with just 2 cores, and the GTX 980/680 are close behind at 9.4M draw calls per second. Which is again a minimum 6.7x increase in draw call throughput versus DirectX 11, showing that even on relatively low performance CPUs the draw call gains from DirectX 12 are substantial. <<<
Can you please explain how this can be? I thought the main advantage of the new APIs is spreading the workload across all CPU cores (instead of just one in DX11). If so, shouldn't the performance double in 2-core mode? Why is there a 6.7x increase in draw calls instead of 2x?
Tigran - Saturday, March 28, 2015 - link
Just to make it clear: I know the new APIs like Mantle and DX12 have the advantage of addressing the GPU more directly, with less CPU involvement. But this test is about draw calls, requested from the CPU to the GPU. How can we boost the number of draw calls other than by using additional CPU cores?
Ryan Smith - Sunday, March 29, 2015 - link
DX12 brings two benefits in this context:
1) Much, much less CPU overhead in submitting draw calls
2) Better scaling out with core count
Even though we can't take advantage of #2, we take advantage of #1. DX11ST means you have 1 relatively inefficient thread going, whereas DX12 means you have 2 (or 4 depending on HT) highly efficient threads going.
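To make that answer concrete, here is a rough C++ sketch (my own illustration, not code from the article or from the benchmark) of the DX12 submission pattern being described: each worker thread records draw calls into its own command list, and everything is handed to the GPU queue in a single cheap call. Device, queue, and pipeline state creation, root signature/viewport/render target setup, and fence synchronization are all omitted, and the function and parameter names are made up for the example.

```cpp
// Minimal sketch of multithreaded DX12 command list recording (illustrative only).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12PipelineState* pso,
                     unsigned workerCount, unsigned drawsPerWorker)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread>                       workers;

    // One allocator and one command list per worker thread; the lists start
    // in the recording state with the supplied pipeline state.
    for (unsigned i = 0; i < workerCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), pso,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Each thread records its share of the draw calls independently. In DX11,
    // these submissions would funnel through one much heavier runtime path.
    for (unsigned i = 0; i < workerCount; ++i) {
        workers.emplace_back([&, i] {
            for (unsigned d = 0; d < drawsPerWorker; ++d)
                lists[i]->DrawInstanced(3, 1, 0, 0);   // one tiny triangle per call
            lists[i]->Close();
        });
    }
    for (auto& t : workers)
        t.join();

    // Submitting all the recorded lists to the GPU is itself a single cheap call.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists)
        raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

Even with workerCount fixed at 1 or 2, the per-call overhead on each thread is far lower than going through the DX11 runtime, which is why the 2-core results can still show a 6.7x gain rather than the 2x that core count alone would suggest.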
LoccOtHaN - Saturday, March 28, 2015 - link
Hmm, where are the FX-8350 and FX x4/x6 or Phenom x4 & x6 tests? A lot of people have those CPUs, and I mean a LOT of people ;-)
flabber - Saturday, March 28, 2015 - link
It's too bad that AMD is at the end of the road. They were putting out some good technology, or at least pushing for technology to improve.
Michael Bay - Sunday, March 29, 2015 - link
Intel will never let them die.
deruberhanyok - Saturday, March 28, 2015 - link
Does this mean we could see games developed to similar levels of graphical fidelity as current ones, but with significantly higher performance? In which case, could current graphics hardware, in theory, run a game at 4K resolution at much higher framerates today, all other things being equal? Or run at a lower resolution at much higher sustained framerates (making a 120Hz display suddenly a useful thing to have)?
Or, put another way: does the reduced CPU overhead, which allows for significantly more draw calls, mean that developers will only see a benefit with more detail/objects on the screen, or could someone, for instance, take a current game with a D3D11 renderer, add a D3D12 renderer to it, and get huge performance increases? I don't think we've seen that with Mantle, so I'm assuming it isn't the case?
Michael Bay - Sunday, March 29, 2015 - link
You probably won't get 4K out of middle to low-end cards of today, as it is also a memory size and bandwidth issue, but framerates could improve, I think.
Gigaplex - Monday, March 30, 2015 - link
4K performance is generally ROP limited, not draw call limited. This won't help a whole lot.
Uplink10 - Saturday, March 28, 2015 - link
Too bad publishers won't have developers "remaster" older video games with DX12. Only new games will benefit from this.
lukeiscool10 - Saturday, March 28, 2015 - link
Why do AMD and Nvidia fanboys continue to bitch at each other? Take a moment to realise we are both going to be getting great-looking games, but one thing holds us back: consoles. So direct your hate towards them, as they are holding the PC back.
jabber - Monday, March 30, 2015 - link
Maybe because we are entering a new age when cards are not worth measuring on FPS alone in most cases, and that's going to take a lot of fun out of the fanboy wars. To be honest, unless you are running multi-monitor/ultra-high-res, just save up $200 and choose the card that looks best in your case.
Mannymal - Sunday, March 29, 2015 - link
The article fails to address for the layman how exactly this will impact gameplay. Will games simply look better? Will AI get better? Will maps be larger and more complex? All of the above? And by how much?
Ryan Smith - Sunday, March 29, 2015 - link
It's up to the developers. Ultimately DX12 frees up resources and removes bottlenecks; it's up to the developers to decide how they want to spend that performance. They could do relatively few draw calls and get some more CPU performance for AI, or they could try to do more expansive environments, etc.
jabber - Monday, March 30, 2015 - link
Yeah, seems to me that DX12 isn't so much about adding new eye candy; it's about a long-overdue total back-end refresh to get rid of the old DX cruft and bring it up to speed with modern hardware.
AleXopf - Sunday, March 29, 2015 - link
I would love to see what effect DirectX 12 has on the CPU side. All the articles so far have been about CPU scaling with different GPUs. Would be nice to see how AMD compares to Intel with better use of their higher core count.
Netmsm - Monday, March 30, 2015 - link
AMD is tech's hero ^_^. Always has been.
JonnyDough - Tuesday, March 31, 2015 - link
Great! Now all we need are driver hacks to make our overpriced non-DX12 video cards worth their money!
loguerto - Friday, April 3, 2015 - link
An AMD masterpiece. Does this superiority have something to do with AMD's Asynchronous Shaders? I know that Nvidia's Kepler and Maxwell asynchronous pipeline engines are not as powerful as the one in the GCN architecture.
Clorex - Wednesday, April 22, 2015 - link
On page 4: "Intel does end up seeing the smallest gains here, but again even in this sort of worst case scenario of a powerful CPU paired with a weak CPU, DX12 still improved draw call performance by over 3.2x."
Should be "powerful CPU paired with a weak GPU".
akamateau - Thursday, April 30, 2015 - link
FINALLY THE TRUTH IS REVEALED!!! AMD A6-7400K CRUSHES INTEL i7 IGP by better than 100%!!!
But Anand is also guilty of a WHOPPER of a LIE!
Anand uses an Intel i7-4960X. NOBODY uses Radeon with an Intel i7 CPU. But rather than use either an AMD FX CPU or an AMD A10 CPU, they decided to degrade AMD's scores substantially by using an Intel product which is not optimised to work with Radeon. The Intel i7 also is not GCN or HSA compatible, nor can it take advantage of Asynchronous Shader Pipelines either. Only an IDIOT would feed a Radeon GPU with an Intel CPU.
In short, Anand's journalistic integrity is called into question here.
Basically RADEON WOULD HAVE DESTROYED ALL nVIDIA AND INTEL COMBINATIONS if Anand had benchmarked Radeon dGPUs with AMD silicon. By itself the A6 is staggeringly superior to the Intel i3, i5, AND i7.
Ryan Smith & Ian Cutress have lied.
As it stands, the A10-7700K produces 4.4 MILLION draw calls per second. At 6 cores the GTX 980 in DX11 only produces 2.2 MILLION draw calls.
DX12 enables a $150 AMD APU to CRUSH a $1500.00 Intel/nVidia gaming setup that runs DX11.
Here is the second lie.
AMD Asynchronous Shader Pipelines allow for 100% multithreaded processing in the CPU feeding the GPU, whether it is an integrated APU or an 8-core FX feeding a GPU. What Anand should also show is 8-core scaling using an AMD FX processor.
Anand will say that they are too poor to use an AMD CPU or APU set up. Somehow I think that they are being disingenuous.
NO INTEL/nVidia combination can compete with AMD using DX12.
akamateau - Thursday, April 30, 2015 - link
DX12 renders expensive Intel silicon useless when benched against an AMD APU or FX feeding Radeon. If you are happy with your gameplay running a $1500 Intel/nVidia DX11 gaming rig, then you will be ecstatic with just a $150 A10 APU running DX12.
kyllients - Friday, June 26, 2015 - link
You seem to be confused. High-end silicon like the Nvidia 980 Ti with DX12 is going to run circles around any integrated graphics.
Erebus5683 - Wednesday, May 6, 2015 - link
Anand says in the article that GCN 1.0 is now working, and they test an HD 7970 to prove it. I have a 7950, the latest drivers and Win10, and it says "API not supported". Can Anand or anyone here explain why this might be?