I say kudos to AMD for forcing the issue on low-level APIs. This is something that could, and should, have been done over a decade ago by Microsoft (as the developer of the most widely used API). Pity it took a financially struggling, second-fiddle manufacturer in both the CPU and GPU markets to convince the software maker to do what needed to be done.
AMD has a history of doing this... don't forget AMD64?
Without AMD pushing x86-64, Intel would still be trying to push IA-64, and if 64-bit extensions were ever added to x86, it would have been years after AMD introduced the idea.
Just one of many technologies Intel benefited from through their architecture licensing agreement with AMD, but by no means the chief one.
Intel didn't need a licensing agreement with AMD for 64-bit x86; Intel reverse-engineered it. Quit making this crap up. Stop rewriting history and do your D&D, and you won't look like an AMD-pumping fool tool.
You delusional AMD pump boys are a bunch of fact-less fool tools who can't read to learn and spew BS out of your orifices like a bull with diarrhea. You're too lame-brained to be educated, and instead of reading the facts you blatantly post fiction you pulled out of your AMD-pumping stink hole.
I REPEAT: INTEL REVERSE-ENGINEERED 64-bit x86 and DOES NOT license AMD's version!
READ, LEARN and STFU!
Intel did not license 64-bit x86 from AMD; Intel instead reverse-engineered it. Read Microprocessor Report: the article "AMD and Intel Harmonize on 64" can be found in the March 29th edition of In-Stat/MDR's Microprocessor Report. "Intel's reverse-engineering of AMD64," says Halfhill.
You fact-less AMD pumpers need to learn how to read so that you can read to learn.
Intel did not license 64-bit x86 from AMD; Intel instead reverse-engineered it. Read Microprocessor Report: the article "AMD and Intel Harmonize on 64" can be found in the March 29th edition of In-Stat/MDR's Microprocessor Report. "Intel's reverse-engineering of AMD64," says Halfhill.
AMD owns its 64-bit implementation, but AMD's 64-bit instruction set requires 32-bit backward compatibility. Also, 64-bit x86 covers the integer units only; AMD's ISA is not unique, and the floating-point unit (FPU) and SIMD units are Intel's patents and IP. It's a fact that x86 is so old that Intel has no patents on the x86 ISA; they expired long ago, but the new and evolving instruction-set extensions are still Intel's IP. Anyone today can design and manufacture an x86-ISA 32-bit processor royalty-free using the original ISA, but it's useless without the many newer extensions like MMX, SSE, and SSE2.
You are correct that Intel did reverse-engineer it (in the same manner, I wonder if AMD could "reverse-engineer" 32- and 16-bit x86, now that they have the 64-bit one), but they did so only because they had to catch up with AMD64, especially when their best product was Prescott. Otherwise, we would be living in the "merry world" of VLIW IA-64...
... Actually, with the x86 backward compatibility thrown out, it would have been easier to change to ARM :D.
I wish AMD64 had never happened. We could have thrown all of x86 into the trash bin back in 2003. I purchased an Athlon 64 at that time and thought it was awesome. But now I think translating instructions, like Apple did with Rosetta, is a much better solution. IA-64 itself is a rethought architecture, and it was designed to be better than the 30-year-old x86 (IA-32).
I'm with you on this. Intel wanted to move forward to a RISC-architecture world, but you know who dropped a 64-bit anchor into the muddy instruction-set waters and let ARM set its RISC sail on them. Bad for x86, good for ARM, but x86 is evolving into a RISC-like form, and ARM's efficiency advantage will be lost as more IPC is needed.
AMD64 helped push Windows XP (64-bit); I was a beta tester of the 64-bit operating system, and only because I wanted to play Far Cry's 64-bit version (AMD was a partner of a little-known German development house called Crytek back in the day). Remember Rambus RAM? Thanks to AMD, we don't have to. God only knows how much we would be paying for technology these days if only Intel existed. AMD has forced both Intel and Microsoft to improve, innovate and do things they otherwise would have taken longer to do, if they did them at all.
Between 1999 and 2001 I worked as a low level kernel engineer for Microsoft. I worked on RDRAM based systems from the time they were prototypes through commercial availability. If there was one thing RDRAM was not, it was "awesome". It had severe limitations and drawbacks. A quick list:
- High heat that was not evenly dissipated, due to the serial nature of the chips
- Latency that increased the more memory you had, resulting in incredibly inconsistent performance (boot times for Windows, a basic process that should be repeatable consistently from clean boot to clean boot, would vary by as much as 300% depending on what was loaded where in memory)
- Extremely poor scaling, making it completely unsuitable for large-memory applications
- Very poor random access performance
And on and on and on. RDRAM works very well in applications where you can plan for it and put your data in specific locations, both to balance heat issues and to control for latency problems (i.e., actively manage your memory to put less-used data in higher-latency addresses). It can be fantastic for streaming applications, such as a lot of multimedia production work, where random access is not a common task and raw bandwidth matters (one place where RDRAM had an advantage). But for general-purpose PCs or servers, RDRAM was a terrible choice that did not make sense from day one.
I dislike the historical revisionism I see in some quarters around RDRAM. The decisions Intel made around the technology were a complete fiasco, and the Intel engineers I worked with knew and lamented it.
FIT, RDRAM was a solution looking for a problem. It worked better in low-memory embedded systems because it was able to flush and rewrite fast. Ultimately I agree the high cost killed Rambus's business, but even if it had been price-competitive with DDR it would never have become mainstream in anything but where you find it now: embedded applications like game consoles and network routers. Even the latest Rambus memory (XDR2) runs too hot for use in even large notebooks and graphics cards.
Long story short, RDRAM was a fitting match for the P4; both used very inefficient technology, and NetBurst was in fact designed around the quad data rate of RDRAM (which is why it performed so poorly with SDR and even DDR until the memory controller was optimized). A match made in hell.
RDRAM was expensive and had high latency. I shorted RAMBUSTED from triple digits down to the price of a sub sandwich, and that set me up financially for years. I ripped Intel a new ahole for getting in bed with RAMBUSTED, but I still didn't buy anything AMD. AMD won a few benchmarks and lost a few; it was never a slam dunk for AMD, and I still trusted Intel more. I bought every CPU Intel made from the 286 up, my upgrade timing never pointed to an AMD alternative, and it would have taken an all-out performance win for me to switch camps. That didn't happen, even with Intel riding expensive, high-latency RAMBUS.
Plus, that company tried to sue just about every other competitor in the DDR market in order to have a monopoly on top of their already out-of-this-world pricing.
They never forced the issue; DX12 development is ahead of Mantle - it's actually going to get released and work properly. AMD just saw what was coming and released an AMD-only alpha of DX12 called Mantle, paid off some game companies to use it, and hinted that AMD's console win would make Mantle much better than anything the opposition would do. They talked a lot of rubbish about openness they never had any plans to follow through on. It ends up being empty marketing - if you bought a card for Mantle support then you were conned. Not quite sure why Ryan is being so nice to AMD here - they shouldn't be praised for empty marketing, they should be castigated for it. Praise them for real solutions (e.g. AMD Eyefinity), but burn them for empty marketing campaigns.
BF4 with Mantle is over a year old. Mantle makes DA:I playable on higher settings on lower-end machines, NFS Rivals is another to benefit... as do CryEngine games.
DX12 games won't be about until 2016 at the earliest. Please stop making up rubbish, `Dribble`.
Releasing alpha versions of a library doesn't make it better than unreleased libraries still in development. When DX12 is released it will be finished, with drivers capable of making it run well within a couple of months. And DX12 engines could be in development today; you don't know. Mantle is still not fully usable after a year, and AMD will forget it as soon as a new GCN architecture revision comes into play. It is a dead library. In fact, it was dead at launch.
I think what he meant to say is "AMD: mission accomplished, time to move on"
AMD never wanted to support their own API. I remember in 2003 getting two huge programming books from AMD regarding AMD64 design (one was for the hardware level, one was for the programming level) and just thinking "man this must be costing them a fortune to support."
AMD isn't in a financial position to push Mantle, and they knew it from day 1. BUT they know that optimizing rendering closer to the hardware level will benefit them more than competing architectures, because GCN on average still has 1/3 more physical cores than Maxwell for comparable performance.
It isn't relevant how powerful the cores are (Maxwell's are obviously more powerful) but how busy the cores are and how efficiently they are running. Maxwell's are already pretty efficient; AMD's are not. A better API can even this out in AMD's favor, and it's less expensive to push an API than a whole new architecture, especially when GCN isn't that bad. Nothing AMD has is bad, per se, it just isn't optimized for (especially the CPUs).
The real puzzle is why AMD didn't go after the CUDA market, a market that is actually making Nvidia decent money. The premium of Quadro parts and their performance over FireGL thanks to application optimizations, in addition to the sheer number of sales in the high-performance computing space, are just icing on the cake :\
OpenGL was also ahead of Mantle. Mantle is just like 3dfx Glide, and we all know what happened to 3dfx. Looks like AMD did not learn from history and soon shall repeat it.
The only similarity it has to Glide is that it's proprietary.
The similarities end there. Glide was just that, an IDE, not an API. It could be wrapped and emulated on top of OpenGL with little performance loss.
Mantle is one of those things that developers have been talking about for a while, and for some reason it took AMD to finally listen and put together an API. After the industry saw that the working concept had tangible benefits, everyone is now behind the idea of moving closer to the hardware interface and eliminating as much software overhead as possible.
Could AMD have made Mantle universal... of course. And they probably should have. But they are a publicly traded company and are not in the business of giving away millions of dollars in research. AMD would have kept x86-64 (AMD64) to itself if it hadn't been obligated to share it with Intel via their licensing agreement... and it's important to note Intel benefited from many AMD manufacturing innovations via this agreement, too. AMD was forced to share its IMC (integrated memory controller) and various lithography/material patents with Intel via their contract, which is all BS because AMD doesn't get much back from Intel in this regard.
AMD was NEVER OBLIGATED to share AMD64, AMD was NEVER forced to share MANUFACTURING innovations, "IMC (integrated memory controller) and various lithography/material patents" NEVER.
"AMD noted to Ars that it has a number of patents of its own, including some related to the functionality of integrated memory controllers, the x86-64 instruction set, and x86 multicore configurations. The company also hinted that it may hold patents regarding the creation of an integrated CPU+GPU product on a single die—the so-called "Fusion" parts that now appear on the roadmaps of both companies."
On top of this, the Intel 64-bit version failed miserably. Stop thinking Intel is some brilliant company; they have pushed technologies because of AMD. Integrated GPUs, anyone? That was AMD's patent first. After AMD APUs came out, Intel HD Graphics popped up. Competition is a thing, and it is needed. Also, CPUGPUGURU, you need to calm down and take off the rosy Intel glasses. Intel is a boring company that has just worked on improving its own core architecture with no thought or creativity behind it; if anything, AMD should at least be praised for the risks it has taken and how its thinking outside of the box (Intel thinks inside a very small box) has pushed the industry as a whole.
BSing AMD pump boy, Intel had integrated graphics first, before Sandy Bridge and its ring bus, which is a lot faster than AMD's on-die northbridge: the ring bus design connects the CPU and GPU at 384GB/s, while the link between AMD's northbridge and the GPU is 27GB/s. And that came out before AMD's watt-sucking, IPC-crippled APUs.
Intel's 64-bit x86 DID NOT fail, and it was reverse-engineered. And again, AMD was NEVER OBLIGATED to share AMD64, AMD was NEVER forced to share MANUFACTURING innovations, "IMC (integrated memory controller) and various lithography/material patents" NEVER.
Ya, they weren't forced, they did it because... didn't we all feel sorry for Intel when Itanium fell through? Intel x86-64 is licensed from AMD; find me proof that it was reverse-engineered or STFU. And Itanium was not x86-64, it was a 64-bit-only architecture that couldn't run anything 32-bit, which is why AMD's x86-64 option was better at the time, since 64-bit software wasn't huge yet. I mean really, I gave you articles and googled some before I posted; can you please not yell at me like that? It makes you look like an idiot, and you provide no proof for your side of the argument.
It was reverse-engineered, which is allowed for some reason. Now... if you want to get control, AMD, with their x86-64, might succeed in a "clean-room reverse engineering" of 16- and 32-bit x86 and get free of Intel's patent grasp.
I do not understand the concept of patenting a specification anyway.
I don't think Glide can be blamed solely for 3dfx's demise. That lies with poor management decisions, like buying STB (which annoyed OEMs) and introducing products late!
Yeah, DX12 development is ahead of Mantle. That's why there are so many DX12 games out there. Oh wait, we don't even have a proper DX12 OS.
Sometimes I wonder if people really believe the nonsense they write. You just sound like the typical deluded Nvidia fangirl.
The truth is, Vulkan and DX12 were inspired by Mantle. No Mantle, no Vulkan and no DX12. Vulkan is actually a Mantle fork. And it seems that Mantle and Vulkan are ahead of DX12 in terms of functionality. You should be thankful to AMD for improving the whole industry with open initiatives. Something Nvidia has never done before. You remember their PhysX disaster? And CUDA fades away too. Finally more and more developers use OpenCL and other open standards.
Because you all here speak as if DX12 is miraculously going to deliver more performance at zero cost. Low level means more performance at the cost of development effort. Whatever the API does not provide as high-level functionality, the programmer must build on their own out of simpler "bricks".
It's as if none of you knew that in assembly you can achieve much more speed than with layered C++ function calls. But I guess none of you has ever tried to build anything in asm beyond a simple algorithm to be embedded in an ordinary C/C++ library. Creating something large in asm is out of the question. That's why, at some point, high-level languages were created, and why some of them have also evolved to provide OOP: a much less efficient way of programming from a performance point of view, but one with the immense advantage of simplifying and reusing things a lot.
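To make that abstraction-cost argument concrete, here is a minimal, self-contained C++ sketch; it is purely illustrative, and the names, sizes and the Scale operation are invented for this example rather than taken from any real API. The "layered" loop funnels every element through a virtual call, the kind of indirection a high-level abstraction adds, while the "direct" loop does the same arithmetic by hand; the gap between the two timings is the overhead being described, and the same trade-off, writ large, is what a thinner, lower-level graphics API exposes to the programmer.

```cpp
// Illustrative only: actual numbers depend heavily on compiler, flags,
// and whether the virtual call gets devirtualized or inlined.
#include <chrono>
#include <cstdio>
#include <memory>
#include <vector>

struct Op {                               // "high-level" abstract interface
    virtual ~Op() = default;
    virtual float apply(float x) const = 0;
};

struct Scale : Op {                       // one concrete operation behind it
    float factor;
    explicit Scale(float f) : factor(f) {}
    float apply(float x) const override { return x * factor; }
};

int main() {
    std::vector<float> data(10'000'000, 1.5f);
    std::unique_ptr<Op> op = std::make_unique<Scale>(0.5f);

    using clock = std::chrono::steady_clock;

    // Layered path: one virtual call per element, opaque to the optimizer.
    auto t0 = clock::now();
    float sum1 = 0.0f;
    for (float x : data) sum1 += op->apply(x);
    auto t1 = clock::now();

    // Direct path: the same arithmetic written out "by hand".
    float sum2 = 0.0f;
    for (float x : data) sum2 += x * 0.5f;
    auto t2 = clock::now();

    auto us = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::microseconds>(b - a).count();
    };
    std::printf("layered (virtual call per element): %lld us, sum=%f\n",
                (long long)us(t0, t1), sum1);
    std::printf("direct  (hand-written loop):        %lld us, sum=%f\n",
                (long long)us(t1, t2), sum2);
    return 0;
}
```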
I'm glad they pushed Khronos and MS to release low-level APIs. It benefits everyone in the long run. In the meantime they have helped and continue to help out their own products in some big titles.
Mantle IS OPEN. People obviously deny the fact that Vulkan IS MANTLE. The difference is, AMD doesn't need to make Mantle an industry standard anymore, because Vulkan is just that. Now Khronos can maintain this standard, which saves AMD a lot of resources. Resources they can spend on newer Mantle versions with new features and innovations. AMD achieved their goals. They established low-level graphics APIs. And Mantle is the reason for it.
Letting OpenGL Next and DirectX 12 take the wheel is the most open-source option they could have chosen. It is also in the interest of the consumer. The only way to see Mantle in games in the future is if consumers reject Windows 10 and stay with Windows 7.
It depends on the point of view: is it Nvidia fanboys bashing empty AMD marketing, or AMD fanboys falling into a make-believe state each time AMD publishes a colored slide? It was clear from the beginning that Mantle could not survive DX12. AMD's move was just a marketing move to make the world notice it can do something. Most people who have been in the IT world for a while (or who have at least two interacting neurons) could not really believe anything that was said about the purpose of that library. And in fact only donkeys believed them.
You are so right. It's sickening how many Nvidia (and even Intel, to an extent) stooges are out there, paid or volunteering to bash competitors like AMD, attached to a corporation as if it were their mommy, or as if Nvidia liked them. Look at how many have been sticking up for Nvidia even though they outright, and knowingly, lied.
Some people just have no morals or ethics, and don't even care that they are that way. Very sad.
The sad thing is, Nvidia fangirls still don't realize that Mantle is the reason why Vulkan and DX12 have been developed. Mantle was never intended to be OGL or DX competition. AMD always said it is intended to coexist. And nothing changed with that. Now they achieved their first main goal, making Mantle an open industry standard. It's just not called Mantle but Vulkan. The technology under the hood is the same.
I don't know how much THC was in that one, fella: "Mantle was never intended to be OGL or DX competition. AMD always said it is intended to coexist."
Wow, the blown brain gasket is an incredible AMD-fan skill. Let's see: competition is great, it made Intel and Nvidia jump and work and lower prices for consumers, because AMD kicked their backsides. Competition!
But now: "Mantle was never intended to be OGL or DX competition. AMD always said it is intended to coexist." Now competition is bad, because "Mantle" DIED. But instead of declaring it dead, we (YOU) claim AMD always said it "coexists", not competes! ROFL with boatloads of mayonnaise!
My my... please send a sample to the new government grow labs...
Mantle still works on Windows 7 and Windows 8, something that DirectX 12 won't do. I know game developers really hate Microsoft's tying of the OS version to the API. It means they can't move to the latest graphics without losing half their customer base.
Yes, so a game developer using Mantle now covers just a bit more than 25% of the potential gaming market, a share that will narrow further after Win10; will they still invest money in supporting Mantle? Mantle is used now because AMD invested some money in DICE to get them to develop and adopt it in their engine. As soon as DX12 comes into play, AMD will surely stop feeding this dying horse, and then the horse will die for real. It's just a question of how much money you can bring to the table, and AMD doesn't have enough. And just as a game engine can be modified to adopt the low-level Mantle API, it can be modified to adopt the DX12 API. That move doesn't require any "artificial feeding" by MS or anyone else, as it is a natural evolution of the game market. Engines will be DX11/DX12 capable for a while, until Win10 takes most of the game market. Sooner or later everyone will have a DX12-capable engine. No one will be interested in Mantle anymore, even counting those left on pre-Win10 versions of the OS.
AMD has complete control over the entire software stack due to their ability to craft a CPU and a GPU that can communicate with each other any way they want. They need to stop screwing around and deliver something that offers an order-of-magnitude improvement in performance, otherwise no one will bother to use Mantle. The fact that Microsoft can come along and write something just as good and have it work for Nvidia too is telling.
Nvidia was working with Microsoft on DX12 while debt-laden AMD was wasting resources marketing redundant Mantle and paying developers to use it. AMD needed Mantle because its CPUs were, and still are, IPC-crippled. Alpha Mantle was never open; AMD was playing the open card knowing it would never be open. AMD pumped up the Mantle bandwagon while developers jumped off, knowing DX12 was on the way.
AMD is all talk and no walk; it's been that way with every watt-wasting CPU/APU and rebranded GPU for years now.
No, it's wrong. Nvidia never worked on the DX12 API. AMD did. Nvidia fangirls seem to be really desperate to make up stories considering all the bad Nvidia media lately.
So calling someone a girl is an insult from your point of view? You must be one of the smartest people on the internet. Like AMD, do you? Makes perfect sense, considering. Why not buy a lot of AMD stock while you're at it, lol.
In addition to Nvidia’s new Maxwell GPU having top-of-the-line performance and power efficiency, it has another feature that will probably make a lot more difference in the real world: It’s the first GPU to offer full support for Microsoft’s upcoming DirectX 12 and Direct3D 12 graphics APIs. According to Microsoft, it has worked with Nvidia engineers in a “zero-latency environment” for several months to get DX12 support baked into Maxwell and graphics drivers. Even more importantly, Microsoft then worked with Epic to get DirectX 12 support baked into Unreal Engine 4, and to build a tech demo of Fable Legends that uses DX12.
Back in March, when Microsoft officially unveiled DirectX 12 (and D3D 12), it surprised a lot of people by proclaiming that most modern Nvidia GPUs (Fermi, Kepler, and Maxwell) support DX12. One of the surprise announcements at the show is that Nvidia will support DX12 on every Fermi, Kepler, and Maxwell-class GPU. That means nearly every GTX 400, 500, and 600 series card will be supported.
At GDC 2014, Microsoft and Nvidia (NO AMD Here) have taken the lid off DirectX 12 — the new API that promises to deliver low-level, Mantle-like latencies with vastly improved performance and superior hardware utilization compared to DX11. Even better, DirectX 12 (and D3D 12) are backwards compatible with virtually every single GPU from the GTX 400 to the present day.
Interestingly, AMD isn’t necessarily following suit — the company has indicated that it will support DX12 on all GCN-based hardware.
Wait, Maxwell was the first GPU to support it? Either way, I see GCN 1.0, 1.1, and 1.2 all supported here, granted GCN 1.0 is buggy. Also, that's the last three generations of GCN supported by AMD. Nvidia also has most of theirs working, although it looks like at the time of this article they don't have Fermi support. Where are you getting the info that says Maxwell was the first to support DX12? Also, the R9 290X had top-of-the-line performance until the 980 came out... funny how they leapfrog, but as soon as Nvidia is in the lead, Nvidia fans act like AMD never was and deny that Nvidia is behind when it is.
Fermi does support DX12, but Maxwell has the full DX12 implementation. AMD's hot, watt-wasting R9 290X never beat top-end Kepler (780/Titan), and Maxwell's 980 widened the performance-per-watt gap. Try using a search engine for "Intel reverse engineered x86 64 bit" and use it for Maxwell DX12; maybe you will learn a thing or two. Stop believing AMD's hype, they lost all cred years ago; everything AMD spews is propaganda wrapped in marketing BS. Sandy Bridge came out before any AMD APU, and Intel had integrated graphics before Sandy Bridge. Intel's ring bus runs circles around AMD's northbridge; read: "Intel's ring bus is a lot faster than AMD's on-die northbridge—the ring bus design connects the CPU and GPU at 384GB/s, while the link between AMD's northbridge and the GPU is 27GB/s."
Sorry, but I lost all hope in AMD; each and every CPU/APU and rebranded GPU has been overhyped, IPC-crippled, and sucks watts. I am sick of AMD fool tools rewriting history and smearing a new shade of lipstick on the AMD piggy. I hate BSing fantards who are paid to post hype-pumping propaganda; I lived it, built it, benchmarked it and sold it, so quit apologizing for AMD's shortcomings. It's an ARM vs. Intel world now; there is no need for a watt-wasting, IPC-crippled AMD that offers nothing to the x86 world. So stop living in the past and rewriting history: debt-laden AMD fell and can't get up, and now late, limp and lame AMD lives on propaganda, bogus benchmarks, and hype-pumping the next coming of its CPU/GPU/APU savior. Well, it's not gonna happen. AMD's ARM core is a generic me-too up against deep-pocketed custom-ARMed armies, SeaMicro makes its money selling Intel Inside servers, Skylake is around the corner, and Max Daddy Maxwell is ready and waiting. It's way too late and too lame for debt-laden AMD.
Cry me an Amazon river with the fat lady singing from a canoe. Turn out the lights... the party's over.
http://blogs.nvidia.com/blog/2014/03/20/directx-12... "Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC.
Gosalia demonstrated the new API with a tech demo of the Xbox One racing game Forza running on a PC powered by an NVIDIA GeForce Titan Black."
So exactly a year ago they said they'd been working with them for FOUR years. They even had a demo of an Xbox One game running on PC hardware with DX12 a YEAR ago, at GDC 2014. They were literally working hand in hand to get the demo going for a year. Did AMD have a GDC 2014 demo of DX12? No, because they were wasting time on Mantle instead of DirectX or OpenGL (which, if you're running Linux, is pretty important; where are AMD's killer Linux drivers?).
How exactly do you think you get a demo of a game NOT made for DX12 working with your hardware, drivers, etc. a YEAR ago without being involved in working on it for at least the PREVIOUS year, as he said, hand in hand? I guess you should call Nvidia and tell them they weren't really working on what they were working on... LOL.
http://www.extremetech.com/gaming/198964-dx12-conf... Nvidia vs. AMD DX12 Star Swarm. Unlike the rosy slide AnandTech shows, AMD is getting killed here. While you're at it, take a look at Nvidia's DX11 vs. AMD's. Killed there too. Clearly Mantle wasn't needed to make DX11 rock a bit more; you just needed to put in some driver time on DX11 instead of something like Mantle, correct? OVER 3x faster than the 290X for the 980. Clearly Nvidia was putting in DirectX driver time just as they said (which affects a TON of games, unlike Mantle). This is from just last month.
"The first thing people are going to notice is that the GTX 980 is far faster than the R9 290X in a benchmark that was (rightly) believed to favor AMD as a matter of course when the company released it last year. I’ll reiterate what I said then — Star Swarm is a tech demo, not a final shipping product. While Oxide Games does have plans to build a shipping game around their engine, this particular version is still designed to highlight very specific areas where low-latency APIs can offer huge performance gains."
So it's AMD's benchmark, but Nvidia slaughtered them in their best-case scenario, which is what I call a benchmark made for them that is NOT ever going to be an actual game. They supposedly have plans for a game using the engine, but it won't BE a game itself. I really doubt NV would say they were working hand in hand if it wasn't true; surely MS would have something to say. Who else do you think worked on it, with only two major GPU vendors in the running? Why wasn't AMD used for the GDC 2014 demo if it wasn't NV who helped forge it? It's an Xbox One game running on AMD hardware in the console, but MS chose NV to do the demo? Odd? NO, NV worked on DX12.
The OP was right, AMD never planned to make it open. OPEN = running on Nvidia hardware as MANTLE, not some fork of it. Mantle didn't even work on all of AMD's own hardware, let alone anyone else's (Intel said they were rebuffed multiple times too).
Debt-laden AMD was busy working with both MS & Sony. Don't think MS would cripple the mouth that feeds the Xbox for Nvidia, whose GPU margins alone kept them out of both consoles!
Love the hate here; again, you are very close-minded. Why are you saying they made Mantle to fix their own CPUs? Mantle works with Intel CPUs as well and offered improvement on both. The low-level API moved the bottleneck to the GPU, same as DX12. The fact that they showed the benefits of a low-level API like Mantle is awesome, and the push for low-level APIs has been huge since then. Nothing bad here, and with DX12 coming out there isn't a huge need for it once that happens, because DX12 and Mantle perform about the same. I don't see why there is anger in your post. Intel makes better processors with tech that has benefited from AMD's influence, same with Nvidia. Without the competition we would probably still be on 32-bit if it were left to Intel, with a retarded Itanium 64 architecture on the side that no one wants to use.
And besides, DX12 will take 5 years before it's readily used in games. I mean, it took years for DX11 to come around in games outside of a niche few. We got stuck on DX9 forever... thanks, consoles.
Hate the AMD fool tools rewriting history here. AMD's Mantle was NEVER open; it's STILL a closed, bug-infested, redundant, AMD-ONLY alpha API experiment that is NOW DOA. DX12 was in the works before malware Mantle, BUT AMD was so busy hyping Mantle that it missed the DX12 boat; that's why every single Nvidia GPU from the GTX 400 to the present day works with DX12, and Maxwell is the only full DX12 GPU.
You AMD fool tools must love watt-sucking, IPC-crippled CPUs/APUs that bottleneck high-end GPUs. Bulldozer was an overhyped waste of money that strangles multi-GPU setups; have fun with your GPU-choking Bulcraper and hot, watt-sucking rebranded GPUs that need to be water-cooled.
After years of never walking its overhyped, bogus-benchmark talk, AMD can go to chiphell. AMD lost all cred, and so have you history-rewriting AMD fool tools.
Wow, way to miss the mark entirely; you are throwing out a lot of accusations with no real proof behind them. Also, most of what you wrote has no relevance to what I was saying. And still no proof showing that Maxwell is the only fully DX12-supported GPU, because the AMD one seems to work just fine in the benchmarks shown in the article. If you're going to keep spouting crap like you are, at least provide me with proof; I am open-minded and would like to learn, but you provide nothing to learn from.
"You AMD fool tools must love watt-sucking, IPC-crippled CPUs/APUs that bottleneck high-end GPUs"
Actually the 93xx's do fine with this, but with DX12 the bottleneck is shifting to the GPUs and away from the CPUs, so this isn't really an issue anymore. Yes, the Bulldozer arch was way overhyped and kinda killed AMD's competitive edge in single-threaded benchmarks. Which is why I'm excited to see Zen next year from AMD on 14nm, finally throwing out Bullcraper, as you put it.
Now please, if you respond: 1) read what I wrote and try to be relevant, 2) provide me with some content to back up what you say, because I am interested in both sides, and 3) try not to be so angry over this; I mean, I know you're married to Intel and Nvidia, but what you are doing is a bit excessive.
Maybe this is what AMD means by repurposing Mantle.
AMD announces LiquidVR; see #LiquidVR on Twitter. Engadget has the story too.
At the bottom of the story it states:
"When asked about whether or not AMD is also working with HTC and Valve on the HTC Vive headset, AMD reps hilariously clammed up and asked whether or not they could talk about that yet."
Once again, the "open", "good-hearted", "would never do that!" AMD has stabbed everyone in the back: "however AMD’s broader plans to also release a Mantle SDK to allow full access, particularly allowing it to be implemented on other hardware, has been shelved".
Yes, we know, AMD, you (and your fans) would never keep it to yourselves, you're all about everyone...
Can we just claim AMD is full of it off the bat next time? How many times can they play their holier-than-thou PR game and have every fan fall for it?
waltsmith - Monday, March 2, 2015 - link
I say kudos to AMD for forcing the issue on low-level API's. This is something that could, and should, have been done over a decade ago by microsoft(as the developer of the most widely used API). Pity it took a financially struggling, second fiddle manufacturer in both CPU and GPU markets to convince the software maker to do what needed to be done.Samus - Tuesday, March 3, 2015 - link
AMD has a history of doing this...don't forget AMD64?Without AMD pushing x86-64, Intel would still be trying to push IA64 and if 64-bit extensions were to be added to x86, it would have been years after AMD even introduced the idea.
Just one of many technologies Intel benefited from with their architecture licensing agreement with AMD, but by no means the chief.
CPUGPUGURU - Wednesday, March 4, 2015 - link
Intel didn't need a licensing agreement with AMD for 64 bit X86 Intel reverse engineered it. Quit making this crap up. Stop rewriting history and do your D&D and you won't look like a AMD pumping fool tool.elitewolverine - Wednesday, March 4, 2015 - link
Actually Intel did need to license the tech...gruffi - Wednesday, March 4, 2015 - link
You have no clue what you are talking about. Intel licensed AMD64 and renamed it to Intel64. Reverse engineered ... joke of the day.JonnyDough - Friday, March 6, 2015 - link
Correct. CPUGPUGURU is "some guru". Not!WinterCharm - Thursday, March 5, 2015 - link
For a so called guru, you're surprisingly ignorant.MamiyaOtaru - Friday, March 6, 2015 - link
I'm leaning towards this being a troll. The juxtaposition of username and ignorance of the subject is just to incredible otherwiseJonnyDough - Friday, March 6, 2015 - link
Troll much?CPUGPUGURU - Friday, March 6, 2015 - link
You delusional AMD pump boys are a bunch of fact-less fool tools who can't read to learn and spew BS out of your orifice like bull with diarrhea. You're too lame brained to be educated and instead of reading the facts you blatantly post fiction you pulled out from your AMD pumping stink hole.I REPEAT INTEL REVERSE ENGINEERED X86 64 bit and DOES NOT License AMD's version!
READ LEARN and STFU!
Intel Did Not license the x86 64 bit from AMD, Intel instead Reverse Engineered x86 64 bit, read, "Microprocessor Reports", the article, "AMD and Intel Harmonize on 64" can be found in the March 29th edition of In-Stat/MDR's Microprocessor Report
"Intel's reverse-engineering of AMD64 " says Halfhill. "
Intel Did Not license the x86 64 bit from AMD, Intel instead Reverse Engineered x86 64 bit, read, "Microprocessor Reports", the article, "AMD and Intel Harmonize on 64" can be found in the March 29th edition of In-Stat/MDR's Microprocessor Report
"Intel's reverse-engineering of AMD64 " says Halfhill. "
You're welcome
piiman - Saturday, March 7, 2015 - link
Just because you reverse engineer something doesn't mean you own it.CPUGPUGURU - Saturday, March 7, 2015 - link
In this case yes it does, its Intel's 64 bit version of x86 that Intel does own, so you are wrong Intel owns its version.dooki - Thursday, April 30, 2015 - link
"Intel reverse engineered it"...."quit making this crap up"...
"do your D&D"... ...what?.... did you mean to write R&D? ..and even then ,Why would he need to do development?
The only pumping fool tool here is you...guru...
Irony in your name And post.
CPUGPUGURU - Wednesday, March 4, 2015 - link
You Fact-less AMD pumpers need to learn how to read so that you can read to learn.Intel Did Not license the x86 64 bit from AMD, Intel instead Reverse Engineered x86 64 bit, read, "Microprocessor Reports", the article, "AMD and Intel Harmonize on 64" can be found in the March 29th edition of In-Stat/MDR's Microprocessor Report
"Intel's reverse-engineering of AMD64 " says Halfhill. "
AMD owns its 64 bit implementation but AMD's 64 bit instruction set requires 32 bit backward compatibility. Also 64 bit x86 is integer units only, AMD's ISA is not unique, the (FPU) floating point unit or SIMD units are Intel's patents and IP. It's a Fact that x86 is so old that Intel has no patents to x86 ISA they expired long ago, but the addition of new and evolving instruction set are still Intel's IP. Anyone today can an design and mfg a x86 ISA 32 bit processor license royalty free using original ISA but it's useless without the many new extensions like MMX, SSE, SSE2
Read learn and stop posting nonsense.
ppi - Wednesday, March 4, 2015 - link
x86 is specification. Are you sure it expires?You are correct that Intel did reverse-engineer it (in the same manner, I wonder if AMD could "reverse-engineer" 32- and 16-bit x86, when they have the 64-bit one), but they did so only because they had to catch up with AMD64, especially when their best product was Prescott. Otherwise, we would live in "merry world" of VLIW IA-64...
... Actually, with the x86 backward compatibility thrown out, it would have been easier to change to ARM :D.
CPUGPUGURU - Saturday, March 7, 2015 - link
I believe it has expired and now public domain but pretty much useless without new extensions.mikeztm - Friday, March 6, 2015 - link
I hope AMD64 was never happen. We could have been throw away all the x86 into trash bin back to 2003. I purchased Athlon64 at that time and thought it was awesome. But now I think shift instructions like Apple did with Rosetta is a much better solution. IA64 itself is a revised architecture and it is designed to be better than the 30 years old x86(IA32).CPUGPUGURU - Saturday, March 7, 2015 - link
I'm with you this, Intel wanted to move forward to a RISC architecture world, but you know who dropped a 64bit anchor into the muddy instruction set waters and let ARM set its RISC sail on those waters. Bad for x86 good for ARM but x86 is evolving into a RISC form, ARM's efficiency advantage will be lost as more IPC is needed.joemark - Tuesday, March 3, 2015 - link
AMD64 helped push Windows XP(64bit); I was a beta tester of the 64 bit operating system and was so because I wanted to play Far Cry's 64bit version (AMD was a partner of a little known German development house called Crytek back in the day); Remember rambus ram? Thanks to AMD, we don't have to. God only knows how much we would be paying for technology these days if only Intel existed. AMD has forced both Intel and Microsoft to improve, innovate and do things they otherwise would have taken longer to do if indeed at all.FITCamaro - Tuesday, March 3, 2015 - link
Rambus memory was awesome memory. It was just expensive.Reflex - Tuesday, March 3, 2015 - link
Between 1999 and 2001 I worked as a low level kernel engineer for Microsoft. I worked on RDRAM based systems from the time they were prototypes through commercial availability. If there was one thing RDRAM was not, it was "awesome". It had severe limitations and drawbacks. A quick list:- High heat that was not evenly dissipated due to the serial nature of the chips
- Latency that increased the more memory you had resulting in incredibly inconsistent performance (boot times for Windows, a basic process that should be repeatable consistently clean boot to clean boot would vary by as much as 300% depending on what was loaded where in memory)
- Extremely poor scaling making it completely unsuitable for large memory applications
- Very poor random access performance
And on and on and on. RDRAM works very well in applications where you can plan for it and put your data in specific locations both to balance heat issues and to control for latency problems (ie: actively manage your memory to put less used data in higher latency addresses). It can be fantastic for streaming applications, such as a lot of multimedia production work, where random access is not a common task vs raw bandwidth (one place where RDRAM had an advantage). But for general purpose PC's or servers, RDRAM was a terrible choice that did not make sense from day one.
I dislike the historical revisionism I see in some quarters around RDRAM. The decisions Intel made around the technology were a complete fiasco, and the Intel engineers I worked with knew and lamented it.
Samus - Tuesday, March 3, 2015 - link
FIT, RDRAM was a solution looking for a problem. It worked better in low-memory embedded systems because it was able to flush and rewrite fast. Inevitably I agree, the high cost killed RAMBUS's business, but even if it was price-competitive with DDR it would have never become mainstream in anything but where you find it now; embedded applications like game consoles and network routers. Even the latest RAMBUS (XDR2) runs too hot for use in even large notebooks and graphics cards.Long story short, RAMBUS was fitting to the P4; both used very inefficient technology, and Netburst was in fact designed around the quad datarate of RDRAM (which is why it performed so poorly with SDR and even DDR until the memory controller was optimized.) A match made in hell.
CPUGPUGURU - Tuesday, March 3, 2015 - link
RDRAM was expensive and had high latency, I shorted RAMBUSTED from triple digits to the price a sub sandwich, that set me up financially for years. Ripped Intel a new ahole for getting in bed with RAMBUSTED, but I still didn't buy anything AMD. AMD won a few benchmarks and lost a few, it was never a slam dunk for AMD and I still trusted Intel more. I bought every CPU Intel made from 286 up, my timing upgrade cycle never pointed to a AMD alternative, it would of taken a all out performance win for me to switch camps and that didn't happen even with Intel riding expensive high latency RAMBUS.cwolf78 - Wednesday, March 4, 2015 - link
LOL get a load of this guy. It must be nice being delusional.dooki - Thursday, April 30, 2015 - link
in his words he is a intel pumping fool tool.dooki - Thursday, April 30, 2015 - link
plus that company tried to sue just about every other competitor in the DDR market in order to have a monopoly with their already out of this world pricing.They were the "GMO Monsanto" of their day
Dribble - Tuesday, March 3, 2015 - link
They never forced the issue, DX12 development is ahead of mantle - it's actually going to get released and work properly. AMD just saw what was coming and released and AMD only alpha of DX12 called Mantle, paid off some games companies to use it, hinted that AMD's console win would make Mantle much better then anything the opposition would do. Talked a lot of rubbish about openness they never had any plans to follow through with.It ends up being empty marketing - if you bought a card for Mantle support then you were conned. Not quite sure why Ryan is being so nice to AMD here - they shouldn't be praised for empty marketing, they should be castigated for it. Praise them for real solutions (e.g. AMD eyeinfinity), but burn them for empty marketing campaigns.
HalloweenJack - Tuesday, March 3, 2015 - link
BF4 with mantle is over a year old , DA:I makes the game playable on higher settings with mantle on lower end machines - NFS rivals is another to benefit... as do cryengine games.DX12 games wont be about till 2016 at the earliest. Please stop making up rubbish `Dribble`
CiccioB - Tuesday, March 3, 2015 - link
Releasing alpha versions of a library doesn't make them better than not available libraries still in development.When DX12 will bere released it will be finished and with driver capable of making it run well in a couple of months. And DX12 engine could be in creation today. You don't know.
Mantle is still not fully useable after a year. And it will soon be forgotten by AMD as soon as a new GCN architecture revision comes into play. It is a dead library. In fact it was dead at its launch.
Samus - Tuesday, March 3, 2015 - link
I think what he meant to say is "AMD: mission accomplished, time to move on"AMD never wanted to support their own API. I remember in 2003 getting two huge programming books from AMD regarding AMD64 design (one was for the hardware level, one was for the programming level) and just thinking "man this must be costing them a fortune to support."
AMD isn't in a financial situation to push Mantel and they knew it from day 1. BUT the know by optimizing rendering closer to the hardware level will benefit them more than competing architectures because GCN on average still has 1/3 more physical cores than Maxwell for comparable performance.
It isn't relevant how powerful the cores are (Maxwell's are obviously more powerful) but how busy the cores are and how efficient they are running. Maxwell's are already pretty efficient, AMD's are not. A better API can even this out in AMD's favor and it's less expensive to push an API then a whole new architecture, especially when GCN isn't that bad. Nothing AMD has is bad, per se, it just isn't optimized for (especially CPU's.)
The real boggle is why AMD didn't go after the CUDA market, a market that is actually making nVidia decent money. The premium of Quadro parts and their performance over FireGL because of application optimizations, in addition to the sheer number of sales in the high performance computing space, are just icing on the cake :\
Wreckage - Tuesday, March 3, 2015 - link
OpenGL was also ahead of Mantle. Mantle is just like 3DFX GLIDE and we all know what happened to 3DFX. Looks like AMD did not learn from history and soon shall repeat it.Samus - Tuesday, March 3, 2015 - link
The only similarity is has to GLide is it's proprietary.Similarities end there. GLide was just that, an IDE, not an API. It could be wrapped and emulated with little performance loss to OpenGL.
Mantel is one of those things that developers have been talking about for awhile and for some reason it took AMD to finally listen and put together an API. After the industry saw the working concept had tangible benefits, everyone is behind the idea now of moving closer to the hardware interface and eliminating as much software overhead as possible.
Could AMD have made Mantel universal...of course. And they probably should have. But they are a publicly traded company and are not in the business of giving away millions of dollars in research. AMD would have kept x86-64 (AMD64) to itself if it wasn't obligated to share it with Intel via their licensing agreement...and it's important to note Intel benefited from many AMD manufacturing innovations via this agreement, too. IMC (integrated memory controller) and various lithography/material patents AMD was forced to share with Intel via their contract, which is all BS because AMD doesn't get much back from Intel in this regard.
CPUGPUGURU - Tuesday, March 3, 2015 - link
STOP BSingAMD DID NOT share AMD64, Intel reversed engineered it. AMD was not FORCED to share anything it was IBM that forced Intel to share X86.
CPUGPUGURU - Tuesday, March 3, 2015 - link
Hey AMD pump boy,AMD was NEVER OBLIGATED to share AMD64, AMD was NEVER forced to share MANUFACTURING innovations, "IMC (integrated memory controller) and various lithography/material patents" NEVER.
Prove it or STFU!!! I hate LIARS.
Crunchy005 - Wednesday, March 4, 2015 - link
did some googleing."Intel is currently also a licensee of the AMD64 extensions, which Intel calls EM64T, AFAIK."
http://siliconmadness.blogspot.com/2009/03/amd-bre...
"AMD noted to Ars that it has a number of patents of its own, including some related to the functionality of integrated memory controllers, the x86-64 instruction set, and x86 multicore configurations. The company also hinted that it may hold patents regarding the creation of an integrated CPU+GPU product on a single die—the so-called "Fusion" parts that now appear on the roadmaps of both companies."
http://arstechnica.com/gadgets/2009/03/amd-intel-e...
On top of this the intel 64 bit version failed miserably, stop thinking intel is some brilliant company they have pushed technologies because of AMD. Integrated GPUs anyone? That was AMDs patent first. After AMD APUs came out intel HD graphics popped up. Competition it's a thing and it is needed. Also CPUGPUGURU you need to calm down and stop with the rosey intel glasses. Intel is a boring company that has just worked on improving it's own core architecture with no thought or creativity behind it, if anything AMD should at least be praised for the risks it has taken and how it thinking outside of the box(Intel thinks inside of a very small box) have pushed the industry as a whole.
CPUGPUGURU - Wednesday, March 4, 2015 - link
BSing AMD pump boy Intel had integrated graphics first and that was before Sandy Bridge and its ring bus that is a lot faster than AMD's on-die northbridge—the ring bus design connects the CPU and GPU at 384GB/s, while the link between AMD's northbridge and the GPU is 27GB/ which came out before AMD's Watt Sucking IPC cripple APUs.Intel x86 64 Bit DID NOT Fail and was reverse engineered and Again AMD was NEVER OBLIGATED to share AMD64, AMD was NEVER forced to share MANUFACTURING innovations, "IMC (integrated memory controller) and various lithography/material patents" NEVER.
Crunchy005 - Wednesday, March 4, 2015 - link
Ya they weren't forced they did it because didn't we all feel sorry for Intel when titanium fell through? Intel x86-64 is licensed from AMD, find me proof that it was reverse engineered or STFU. And Itanium was not x86-64 it was a 64bit only architecture it couldn't run anything 32bit which is why AMDs x86-64 option was better at the time since 64bit software wasn't huge yet. I mena really I gave you articles and googled some before I posted can you please not yell at me like that it makes you look like an idiot and you provide no proof to your side of the argument.ppi - Wednesday, March 4, 2015 - link
It was reverse-engineered. Which is allowed for some reason. Now ... if you want to get control AMD, with their x86-64, you might succeed in "clean room reverse engineering" of 16- and 32-bit x86 and get free of Intel's patent grasp.I do not understand patenting specification concept anyway.
0VERL0RD - Wednesday, March 4, 2015 - link
Don't think Glide can be blamed solely for 3DFX's demise. That lies with poor management decisions like buying STB which annoyed OEMS & introducing late products!gruffi - Wednesday, March 4, 2015 - link
Yeah, DX12 development is ahead of Mantle. That's why there are so many DX12 games out there. Oh wait, we don't even have o proper DX12 OS.Sometimes I wonder if people really believe the nonsense they write. You just sound like the typical deluded Nvidia fangirl.
The truth is, Vulkan and DX12 were inspired by Mantle. No Mantle, no Vulkan and no DX12. Vulkan is actually a Mantle fork. And it seems that Mantle and Vulkan are ahead of DX12 in terms of functionality. You should be thankful to AMD for improving the whole industry with open initiatives. Something Nvidia has never done before. You remember their PhysX disaster? And CUDA fades away too. Finally more and more developers use OpenCL and other open standards.
FlushedBubblyJock - Friday, March 27, 2015 - link
Uhh, I believe DX12 was 2 years in dev before amd mantled up the failure, and the forever proprietary, after lying about it, for a long time.TEAMSWITCHER - Tuesday, March 3, 2015 - link
AMD is completely dead to me.Zak - Tuesday, March 3, 2015 - link
Yes, I was wondering why this hasn't been done many years ago if the benefit is so huge?!?CiccioB - Wednesday, March 4, 2015 - link
Because you all here speak as if DX12 are miraculously getting more performance at zero cost.Low level means more performance at development costs. What the API does not provide as a high level functionality, programmer must do on their own using simpler "bricks".
It's like none of you knew that through Assembly you can achieve much more speed than using C++ layered functions calls. But I guess none of you ever tried to build in asm anything more than a simple algorithm to be embedded in a usual C/C++ library. Creating something large in asm is out of question. That's why at some point high level languages have been created. And why some of them have also evolved into providing OOP. A much inefficient way of programming under the point of view of performances, but has the immense advantage to simplify and reuse things a lot.
Alexvrb - Monday, March 2, 2015 - link
I'm glad they pushed Khronos and MS to release low-level APIs. It benefits everyone in the long run. In the meantime they have helped and continue to help out their own products in some big titles.Klimax - Tuesday, March 3, 2015 - link
Only problem, they didn't. DX12 was long before Mantle in development.HighTech4US - Monday, March 2, 2015 - link
What a joke to keep calling Mantle OPEN.It was NEVER OPEN to any company except AMD and with these changes it will NEVER BE OPEN.
A more realistic analysis (instead of this AMD apologetic piece) is here:
http://www.pcper.com/news/Graphics-Cards/GDC-15-AM...
gruffi - Wednesday, March 4, 2015 - link
Mantle IS OPEN. People obviously deny the fact that Vulkan IS MANTLE. The difference is, AMD don't need to make Mantle an industry standard anymore because Vulkan is just that. Now Khronos can maintain this standard which saves AMD a lot of resources. Resources they can spend on newer Mantle versions with new features and innovations. AMD achieved their goals. They established low-level graphics APIs. And Mantle is the reason for it.Senti - Tuesday, March 3, 2015 - link
Why still call it "glNext"? It's been confirmed that name of the new API is Vulkan.Ryan Smith - Tuesday, March 3, 2015 - link
The AMD announcement came before the Vulkan announcement, and we were under NDA on Vulkan.D. Lister - Tuesday, March 3, 2015 - link
lol, so the marketing bough finally breaks, eh? Now let the apologetics commence.yannigr2 - Tuesday, March 3, 2015 - link
Letting OpenGL Next and DirectX 12 to take the wheel is the most open source option they could made. And also is in the interest of the consumer. The only way to see Mantle in games in the future, is if consumers reject Windows 10 and stay with Windows 7.FlushedBubblyJock - Friday, March 27, 2015 - link
When open source is ZERO COMPETITION, how is it so great ?Would one of you amd fans (or not) please answer me that ?
Mikmike86 - Tuesday, March 3, 2015 - link
Anyone else notice anything related to AMD provokes NVIDIA fan boys to start bashing?They'll be wishing they hadn't if AMD goes under.
CiccioB - Tuesday, March 3, 2015 - link
It depends on the point of view.. are nvidia fanboys that bash emtpy AMD marketing or are AMD fanboys that fall in an make-belive state each time AMD publish a colored slide?It was clear since the beginning that Mantle could not survive DX12. AMD's move was just a marketing move to make the world notice it can do something. Most people that have been in the IT world for a while (or do not possess 2 interactive neurons) could not really believe anything of what was said about the purpose of that library. And infact only donkeys believed them.
formulav8 - Tuesday, March 3, 2015 - link
You are so right. Its sickening as to how many NVidia (Even Intel to an extent) stooges are out there. Paid/Volunteer to bash competitors like AMD. Attached to a Corp like they are their mommy, or that NVidia likes them. Look at how many have been sticking up for NVidia even though they outright, and knowingly lied.Some people just have no morals or ethics. And don't even care they are that way. Very sad.
gruffi - Wednesday, March 4, 2015 - link
Don't mind. Nvidia will go under before AMD. ;) The sad thing is, Nvidia fangirls still don't realize that Mantle is the reason why Vulkan and DX12 have been developed. Mantle was never intended to be OGL or DX competition. AMD always said it is intended to coexist. And nothing changed with that. Now they achieved their first main goal, making Mantle an open industry standard. It's just not called Mantle but Vulkan. The technology under the hood is the same.
FlushedBubblyJock - Friday, March 27, 2015 - link
I don't know how much THC was in that one, fella: "Mantle was never intended to be OGL or DX competition. AMD always said it is intended to coexist." Wow, the blown brain gasketing is an incredible AMD outfitting skill.
Let's see: competition is great, it made Intel and Nvidia jump and work and lower prices for consumers, because AMD kicked their backsides. Competition!
But now: "Mantle was never intended to be OGL or DX competition. AMD always said it is intended to coexist."
Now competition is bad, because "Mantle" DIED. But instead of declaring it dead, we (YOU) claim AMD always said it "coexists", not competes!
ROFL with boatloads of mayonnaise!
My my... please send a sample to the new government grow labs...
lefty2 - Tuesday, March 3, 2015 - link
Mantle still works on Windows 7 and Windows 8, something that DirectX 12 can't do. I know game developers really hate Microsoft's tying of the OS version to the API. It means they can't progress to the latest graphics without losing half their customer base.
CiccioB - Tuesday, March 3, 2015 - link
Yes, so a game developer using Mantle now covers just a bit more than 25% of the potential gaming market, and will see that share narrow to a lower percentage after Win10, yet will still invest money in supporting Mantle? Mantle is used now because AMD invested some money in DICE to make them develop and adopt it in their engine. As soon as DX12 comes into play, AMD will surely stop feeding this dying horse, and so the horse will die for real.
It's just a question of how much money you can bring to the table. AMD doesn't have enough.
And just as a game engine could be modified to adopt the low-level Mantle API, it can be modified to adopt the DX12 API. That move does not require any "artificial feeding" by MS or anyone else, as it is a natural evolution of the game market. Engines will be DX11/DX12 capable for a while, until Win10 takes most of the game market.
Sooner or later everyone will have a DX12-capable engine. No one will be interested in Mantle anymore, even considering those left on pre-Win10 versions of the OS.
lefty2 - Tuesday, March 3, 2015 - link
Yeah, nice rant. It'll take about 5 years before everyone switches to Win10.
FlushedBubblyJock - Friday, March 27, 2015 - link
It's free for one year, so expect quite a swamping.
Shadowmaster625 - Tuesday, March 3, 2015 - link
AMD has complete control over the entire software stack due to their ability to craft a CPU and a GPU that can communicate with each other any way they want. They need to stop screwing around and deliver something that offers an order-of-magnitude improvement in performance, otherwise no one will bother to use Mantle. The fact that Microsoft can come along and write something just as good and have it work for Nvidia too is telling.
FlushedBubblyJock - Friday, March 27, 2015 - link
Bite your tongue! Are you accusing the holy of holies, AMD!? AMD would never, and I mean never, make something proprietary!
Please go read the AMD Gamer's Manifesto ten times and go to bed with no verdetrol.
I don't like nVidia's practices.
CPUGPUGURU - Tuesday, March 3, 2015 - link
Nvidia was working with Microsoft on DX12 while debt-laden AMD was wasting resources marketing redundant Mantle and paying developers to use Mantle. AMD needed Mantle because its CPUs were, and still are, IPC-crippled. Alpha Mantle was never open; AMD was playing the open card knowing it would never be open. AMD pumped up the Mantle bandwagon while developers jumped off, knowing DX12 was on the way. AMD is all talk and no walk, and it's been that way with every watt-wasting CPU/APU and rebranded GPU for years now.
So sorry, but it's so true.
gruffi - Wednesday, March 4, 2015 - link
No, it's wrong. Nvidia never worked on the DX12 API. AMD did. Nvidia fangirls seem to be really desperate to make up stories considering all the bad Nvidia media lately.
D. Lister - Wednesday, March 4, 2015 - link
So calling someone a girl is an insult from your point of view? You must be one of the smartest people on the internet. Like AMD, do you? Makes perfect sense, considering... Why not buy a lot of AMD stock while you're at it, lol.
CPUGPUGURU - Wednesday, March 4, 2015 - link
In addition to Nvidia's new Maxwell GPU having top-of-the-line performance and power efficiency, it has another feature that will probably make a lot more difference in the real world: it's the first GPU to offer full support for Microsoft's upcoming DirectX 12 and Direct3D 12 graphics APIs. According to Microsoft, it has worked with Nvidia engineers in a "zero-latency environment" for several months to get DX12 support baked into Maxwell and graphics drivers. Even more importantly, Microsoft then worked with Epic to get DirectX 12 support baked into Unreal Engine 4, and to build a tech demo of Fable Legends that uses DX12. Back in March, when Microsoft officially unveiled DirectX 12 (and D3D 12), it surprised a lot of people by proclaiming that most modern Nvidia GPUs (Fermi, Kepler, and Maxwell) support DX12.
CPUGPUGURU - Wednesday, March 4, 2015 - link
Read. Learn.
Microsoft officially unveiled DirectX 12 (and D3D 12), and it surprised a lot of people by proclaiming that most modern Nvidia GPUs (Fermi, Kepler, and Maxwell) support DX12. One of the surprise announcements at the show is that Nvidia will support DX12 on every Fermi, Kepler, and Maxwell-class GPU. That means nearly every GTX 400, 500, and 600 series card will be supported.
At GDC 2014, Microsoft and Nvidia (NO AMD Here) have taken the lid off DirectX 12 — the new API that promises to deliver low-level, Mantle-like latencies with vastly improved performance and superior hardware utilization compared to DX11. Even better, DirectX 12 (and D3D 12) are backwards compatible with virtually every single GPU from the GTX 400 to the present day.
Interestingly, AMD isn’t necessarily following suit — the company has indicated that it will support DX12 on all GCN-based hardware.
FlushedBubblyJock - Friday, March 27, 2015 - link
WOW - really? "That means nearly every GTX 400, 500, and 600 series card will be supported."
OMG...
ROFL, AMD is so hosed!
Crunchy005 - Wednesday, March 4, 2015 - link
Wait, Maxwell was the first GPU to support it? Either way, I see GCN 1.0, 1.1, and 1.2 all supported here, granted GCN 1.0 is buggy. Also, that's the last three generations of GCN supported by AMD. Nvidia also has most of theirs working, although it looks like at the time of this article they don't have Fermi support. Where are you getting the info that says Maxwell was the first to support DX12? Also, the R9 290X had top-of-the-line performance until the 980 came out... funny how they leapfrog, but as soon as Nvidia is in the lead, Nvidia fans act like AMD never was and deny that Nvidia is behind when they are.
http://anandtech.com/show/8962/the-directx-12-perf...
CPUGPUGURU - Wednesday, March 4, 2015 - link
Fermi does support DX12, but Maxwell has the full DX12 implementation. AMD's hot, watt-wasting R9 290X never beat top-end Kepler 780/Titan, and Maxwell's 980 widened the performance-per-watt gap. Try using a search engine for "Intel reverse engineered x86 64 bit" and do the same for Maxwell DX12; maybe you will learn a thing or two. Stop believing AMD's hype, they lost all cred years ago; everything AMD spews is propaganda wrapped in marketing BS. Sandy Bridge came out before any AMD APU, and Intel had integrated graphics before Sandy Bridge. Intel's Ring Bus runs circles around AMD's northbridge; read: "Intel's ring bus is a lot faster than AMD's on-die northbridge—the ring bus design connects the CPU and GPU at 384GB/s, while the link between AMD's northbridge and the GPU is 27GB/s."
Sorry, but I lost all hope in AMD; each and every CPU/APU and rebranded GPU has been overhyped, IPC-crippled, and watt-hungry. I am sick of AMD fool tools rewriting history and smearing a new shade of lipstick on the AMD piggy. I hate BSing fantards who are paid to post hype-pumping propaganda; I lived it, built it, benchmarked it, and sold it, so quit apologizing for AMD's shortcomings. It's an ARM vs Intel world now; there is no need for a watt-wasting, IPC-crippled AMD that offers nothing to the x86 world. So stop living in the past and rewriting history: debt-laden AMD fell and can't get up, and now late, limp, and lame AMD lives on propaganda, bogus benchmarks, and hype-pumping the next coming of its CPU/GPU/APU savior. Well, it's not gonna happen. AMD's ARM core is a generic me-too up against deep-pocketed custom ARM armies, SeaMicro makes money selling Intel Inside servers, Skylake is around the corner, and Max Daddy Maxwell is ready and waiting. It's way too late and too lame for debt-laden AMD.
Cry me an Amazon river with the Fat Lady singing from a canoe. Turn out the lights... the party's over.
FlushedBubblyJock - Friday, March 27, 2015 - link
Said in an angry, loud, Nixonian voice with cheeks flapping and jowling and fists shaking by his sides: "Mantle, We're the Core of the Earth..."
bwahahhaaaaaa
TheJian - Saturday, March 7, 2015 - link
http://blogs.nvidia.com/blog/2014/03/20/directx-12...
"Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC.
Gosalia demonstrated the new API with a tech demo of the Xbox One racing game Forza running on a PC powered by an NVIDIA GeForce Titan Black."
So exactly a year ago they said they'd been working with them for FOUR years. They even had a demo of an Xbox One game running on PC hardware with DX12 a YEAR ago at GDC 2014, literally working hand in hand to get the demo going for a year. Did AMD have a GDC 2014 demo of DX12? No, because they were wasting time on Mantle instead of DirectX or OpenGL (which, if you're running Linux, is pretty important; where are AMD's killer Linux drivers?).
How exactly do you think you get a demo of a game NOT made for DX12 working with your hardware, drivers, etc. a YEAR ago without being involved in working on it for at least the PREVIOUS year, hand in hand, as he said? I guess you should call Nvidia and tell them they weren't really working on what they were working on... LOL.
http://www.extremetech.com/gaming/198964-dx12-conf...
Nvidia vs. AMD DX12 Star Swarm. Unlike the rosy slide Anandtech shows, AMD is getting killed here. While you're at it, take a look at Nvidia's DX11 vs. AMD's. Killed there too. Clearly Mantle wasn't needed to make DX11 rock a bit more; you just needed to put in some driver time on DX11 instead of something like Mantle, correct? The 980 is OVER 3x faster than the 290X. Clearly Nvidia was putting in DirectX driver time just as they said (which affects a TON of games, unlike Mantle). This is just last month.
"The first thing people are going to notice is that the GTX 980 is far faster than the R9 290X in a benchmark that was (rightly) believed to favor AMD as a matter of course when the company released it last year. I’ll reiterate what I said then — Star Swarm is a tech demo, not a final shipping product. While Oxide Games does have plans to build a shipping game around their engine, this particular version is still designed to highlight very specific areas where low-latency APIs can offer huge performance gains."
So it's AMD's benchmark, but Nvidia slaughtered them in their best-case scenario, which is what I call a benchmark made for them that is NOT ever going to be an actual game. They supposedly have plans for a game using the engine, but it won't BE a game itself. I really doubt NV would say they were working hand in hand if it wasn't true; surely MS would have something to say. Who else do you think worked on it, with only two major GPU vendors in the running? Why wasn't AMD used for the GDC 2014 demo if it wasn't NV who helped forge it? It's an Xbox One game running on AMD hardware in a console, but MS chose NV to do the demo? Odd? NO, NV worked on DX12.
The OP was right, AMD never planned to make it open. OPEN = running on Nvidia hardware as MANTLE, not some fork of it. Mantle didn't even work on all of AMD's own hardware, let alone anyone else's (Intel said they were rebuffed multiple times too).
0VERL0RD - Wednesday, March 4, 2015 - link
Debt-laden AMD was busy working with both MS and Sony. Don't think MS would cripple the mouth that feeds Xbox for Nvidia, whose GPU margins alone kept them out of both consoles!
Crunchy005 - Wednesday, March 4, 2015 - link
Love the hate here; again, you are very close-minded. Why are you saying they made Mantle to fix their own CPUs? Mantle works with Intel CPUs as well and offered improvements on both. The low-level API moved the bottleneck to the GPU, same as DX12. The fact that they showed the benefits of a low-level API like Mantle is awesome, and the push for low-level APIs has been huge since then. Nothing bad here, and with DX12 coming out there isn't a huge need for it once that happens, because DX12 and Mantle perform about the same. I don't see why there is anger in your post. Intel makes better processors with tech that has benefited from AMD influence, same with Nvidia. Without the competition we would probably be on 32-bit if it was left to Intel, with a retarded Itanium 64 architecture on the side that no one wants to use. And besides, DX12 will take 5 years before it's used readily in games. I mean, it took years for DX11 to come around in games outside of a niche few. Got stuck on DX9 forever... thanks, consoles.
CPUGPUGURU - Wednesday, March 4, 2015 - link
Hate the AMD fool tool rewriting history here. AMD's Mantle was NEVER open; it's STILL a closed, bug-infested, alpha, redundant, AMD-ONLY API experiment that is NOW DOA. DX12 was in the works before malware Mantle, BUT AMD was so busy hyping Mantle that it missed the DX12 boat; that's why every single Nvidia GPU from the GTX 400 to the present day works with DX12 and Maxwell is the only full DX12 GPU. You AMD fool tools must love watt-sucking, IPC-crippled CPUs/APUs that bottleneck high-end GPUs. Bulldozer was an overhyped waste of money that strangles multi-GPU setups; have fun with your GPU-choking Bulcraper and hot, watt-sucking rebranded GPUs that need to be water-cooled.
After years of never walking its overhyped, bogus-benchmark talk, AMD can go to chiphell; AMD lost all cred, and so have you history-rewriting AMD fool tools.
Crunchy005 - Wednesday, March 4, 2015 - link
Wow, way to miss the mark entirely; you are throwing a lot of accusations with no real proof behind them. Also, most of what you wrote has no relevance to what I was saying. Also, still no proof showing that Maxwell is the only fully DX12-supported GPU, because the AMD one seems to work just fine in the benchmarks shown in the article. If you're going to keep spouting crap like you are, at least provide me with proof; I am open-minded and would like to learn, but you provide nothing to learn from.
"You AMD fool tools must love watt-sucking, IPC-crippled CPUs/APUs that bottleneck high-end GPUs"
Actually, the 93xx's do fine with this, but with DX12 the bottleneck is shifting to the GPUs and away from the CPUs, so this isn't really an issue anymore. Yes, the Bulldozer arch was way overhyped and kind of killed AMD's competitive edge in single-threaded benchmarks. Which is why I'm excited to see Zen next year from AMD on 14nm, finally throwing out Bullcraper, as you put it.
Now please, if you respond: 1) read what I wrote and try to be relevant, 2) provide me with some content to back up what you say, because I am interested in both sides, and 3) try not to be so angry over this. I mean, I know you're married to Intel and Nvidia, but what you are doing is a bit excessive.
obsidian24776 - Tuesday, March 3, 2015 - link
Maybe this is what AMD means by repurposing Mantle. AMD announces LiquidVR; see #LiquidVR on Twitter, Engadget has the story too.
At the bottom of the story it states:
"When asked about whether or not AMD is also working with HTC and Valve on the HTC Vive headset, AMD reps hilariously clammed up and asked whether or not they could talk about that yet."
FlushedBubblyJock - Thursday, March 26, 2015 - link
Once again, the "open", "good-hearted", "would never do that!" AMD has stabbed everyone in the back:
"however AMD’s broader plans to also release a Mantle SDK to allow full access, particularly allowing it to be implemented on other hardware, has been shelved"
Yes, we know, AMD, you (and your fans) would never keep it to yourselves, you're all about everyone....
Can we just claim AMD is full of it off the bat next time? How many times can they play their holier-than-thou PR game and have every fan fall for it?