For what it's worth, when we first asked AMD about it back at CES, FP64 performance wasn't among the features they were even throttling/holding back on. So for a time, 1/2 was on the table.
Performance-wise it did better than I expected. This card is pretty loud and runs a bit hot for my tastes. Nice review. Where are the 8K and 16K tests? :)
When drivers mature, the AMD Radeon VII will beat the GeForce RTX 2080.
Just like the Radeon Fury X beats the GeForce 980 and the Radeon Vega 64 beats the GeForce 1080.
When drivers mature and nVIDIA's blatant sabotage against its older cards (and AMD's cards) gets mitigated, the long-time owner of the card will enjoy better performance.
Unfortunately, on the power side, nVIDIA still has the edge, but I'm confident that those 16 GB of VRAM will really show their worth in the following year.
I'd rather have a card that performs better today than one that might perform better in two or three years. By that point, I'll already be looking at new cards.
This card is very impressive for anyone who needs FP64 compute and lots of VRAM, but it's a tough sell if you primarily want it for games.
AMD cards have traditionally aged much better than Nvidia's. GamersNexus just re-benchmarked the 290X from 2013 in modern games and found it comparable to the 980, 1060, and 580.
The GTX 980 came out in late 2014 with a $550 USD price tag and now struggles at 1440p.
Not to mention that you can get a lot out of AMD cards if you're willing to tinker. My Vega 56, which I got from Microcenter in November 2017 for $330 (a total steal), now performs at GTX 1080 level after a BIOS flash + OC.
My 970 does just fine too; I can play 1440p maxed or near maxed in everything, and 4K in older/simpler games too (e.g. Overwatch). I was planning on a new card this gen for 4K but the pricing is just too high for the gains, so I'm going to hold off one more round...
1) The AMD Radeon VII is based on the Vega architecture, which has been on the market since June 2017, roughly 20 months ago. The drivers have had more than enough time to mature. It's obvious that in certain cases there are clear bottlenecks (e.g. GTA V), but this seems to be the fundamental nature of AMD's drivers when it comes to DX11 performance in games that issue a lot of draw calls. Holding out for improvements here isn't going to please you much.
2) The Radeon Fury X was meant to go against the GTX 980 Ti, not the GTX 980. The Fury, sitting slightly under the Fury X, easily covered the GTX 980 performance bracket. The Fury X still doesn't beat the GTX 980 Ti, particularly due to its limited VRAM, where it even falls behind the RX 480 8GB and its siblings (RX 580, RX 590).
3) There is no evidence of Nvidia sabotaging the performance of any of its older cards, and frankly your dig about GameWorks "sabotaging" AMD cards' performance is laughable when the same features, when enabled, also kill performance on Nvidia's own cards. PhysX has been open source for 3 years and has now moved on to its 4th iteration, being used almost universally in game engines. How's that for vendor lock-in?
4) 16GB of VRAM will not even begin to show their worth in the next year. Wishful thinking, or more like lapping up all the bad decisions AMD tends to make when it comes to product differentiation between their compute and gaming cards. It's baffling at this point that they still haven't learned to diverge their product lines and establish separate architectures in order to optimize power draw and bill of materials on the gaming card by cutting architectural features that are unneeded for gaming. 16GB is unneeded, 1TB/s of bandwidth is unneeded, HBM is expensive and unneeded. The RTX 2080 is averaging higher scores with half the bandwidth, half the VRAM capacity, and GDDR6 (see the quick arithmetic below).
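For reference, the quick arithmetic behind that bandwidth comparison, using the public specs (4096-bit HBM2 at 2.0 Gbps per pin on the Radeon VII, 256-bit GDDR6 at 14 Gbps on the RTX 2080); a minimal Python sketch:

```python
# Memory bandwidth = (bus width in bits / 8) * per-pin data rate
def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(4096, 2.0))   # 1024.0 GB/s -- Radeon VII (4 HBM2 stacks)
print(bandwidth_gb_s(256, 14.0))   # 448.0 GB/s  -- RTX 2080 (GDDR6)
```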
The money is in the gaming market and the professional market. The prosumer market is a sliver in comparison. Look at what Nvidia does: they release a mere handful of mascots every generation, all similar to one another (the Titan series), to take care of that sliver. You'd think they'd have a bigger portfolio if it were such a lucrative market. Meanwhile, on the gaming end, entire lineups. On the professional end, entire lineups (Quadro, Tesla).
This is a Radeon Instinct MI50. This is a compute card that was never intended to be a gaming card. The biggest integration work AMD had to do was drivers. Drivers will indeed be better in the next 3 months.
So please explain why AMD are designating this as a gaming card. I explained this in my previous post. Their lack of product differentiation is exhausting them, and the apologetic acrobatics pulled by their die-hard fanboys are appallingly misleading. This is the same Vega architecture. Why isn't AMD releasing their cards with the drivers optimized beforehand? They have been doing this since the 7970. Remember that card getting beaten by the GTX 680 just because it had unoptimized drivers? It took AMD almost a year to release drivers that thoroughly bested that series. CrossFire performance on these was also way ahead of SLI. It took them around another year to solve microstuttering. I had 2x 7970s back then. These delays need to stop; they're murdering AMD's product launches.
Nvidia is also guilty of the same thing. The entire Turing lineup is a die designed and R&D'd for enterprise customers. The "flagship features" they keep raving about for Turing are slapped-together ways to use all the ASIC cores on dies designed for AI and content creation.
It's why when you compare a 1080 to a 2070, or a 1080 Ti to a 2080 (the same price bracket), you get almost zero rasterization improvement. It's a huge and expensive die reused from the enterprise department because no one else has anything competitive in the same space.
Nvidia is likely holding back their actual 12nm/7nm gamer design for 2020 out of concern for what Intel might have and possible concern over Navi. I also think Nvidia vastly underestimated how poorly the repackaged cards would sell. I expect Turing to be a very short generation with the next series being announced in late 2019 early 2020 (depending on what intel and AMD end up fielding).
The updated NVENC encoder chip may become a major selling point for the RTX cards for streamers/content creators. I'm actually disappointed Nvidia is not emphasizing this feature more. Once OBS Studio releases their new build that will further increase NVENC encoding efficiency it will create an even more compelling argument to switch to NVENC.
I have been testing the new encoder and it's rivaling and beating medium preset x264 at 1080p60 using 8k bitrate. Single pc streamers will see steam quality improvements along with massive cpu resource savings. I'm of the opinion the dark horse selling point of these cards will be the new NVENC encoder. It appears the Turning generation is more of an advancement for content creators than the average gamer. Ray tracing is superfluous at this point for sure.
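For anyone wanting to try a comparison like this, here's a rough sketch of the kind of side-by-side encode described above. It uses ffmpeg as a stand-in (the poster used OBS Studio), and "capture.mkv" is a hypothetical lossless 1080p60 game recording; both encodes target ~8 Mbps:

```python
import subprocess

SOURCE = "capture.mkv"  # hypothetical lossless 1080p60 capture

encoders = {
    "x264_medium": ["-c:v", "libx264", "-preset", "medium", "-b:v", "8M"],
    "nvenc": ["-c:v", "h264_nvenc", "-preset", "slow", "-b:v", "8M"],
}

for name, args in encoders.items():
    # Encode the same source with each encoder at the same target bitrate,
    # then compare the outputs visually or with a metric such as VMAF.
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *args, f"{name}.mp4"], check=True)
```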
I'm ok with this. I still run a 1080Ti in my gaming rig and I'm comfortable waiting another generation. But the RTX 2070 in my streaming rig is delivering the best quality stream to date. That is comparing against x264 medium running on an i9 9900k@5GHz. This flies in the face of conventional wisdom and people with more credibility than me will need to help change the winds here. But this is my anecdotal experience.
If, and I mean IF, Nvidia is holding back, it's a purely financial move due to the huge overstock of GPUs caused by cryptocurrencies. Both AMD and Nvidia massively underestimated how much demand crypto was creating. (IIRC AMD said during the earnings call that now that crypto has dropped off, monthly GPU sales are less than half what they were.) Supposedly there are more than 100K Nvidia cards (again, the stuff I saw said that was somewhere between 3-6 months of normal gamer sales) sitting out there on store shelves rotting because of it, and it's so bad Nvidia is having to take stock back from retailers that want the shelf space freed up.
That's prime incentive to sit on the designs until the existing stock is used up. For both nvidia and AMD. Sure they might push out some high end high price product but they aren't doing anything in the middle of the market until that stock is cleared out.
At the least, this could force what seems to be your saint Nvidia to drop the price of their cards. As it stood before today, ALL of their 20-series cards were out of the price range I would pay for a video card, or are pushing it/hard to justify the cost over my current 1060, as the 1070/80 were way out of my price range.
Well, where I am at least, the Radeon VII starts at 949 (preorder), with only 2 cards listed, ASUS and XFX. The RTX 2080 (which the Radeon VII is aimed at) starts at 1130, almost 200 more. IMO, 200 is not worth the premium for a 5-6% faster card. The top-of-the-line 2080 is the GeForce RTX 2080 SEA HAWK X, which is priced at $1275. For my cash, I'd be looking at the Radeon VII and saving 200+ bucks to use somewhere else in my comp.
Meh, I went looking for a 16GB card about a week before they announced the Radeon VII because gaming was using up all 8GB of VRAM and 14GB of system RAM. This card is a no-brainer upgrade from my Vega 64.
Perhaps but Turing is also a new architecture, so it's probable it'd get better with newer drivers too.
Maxwell is from 2014 and still performs as it should.
As for GPU-accelerated GameWorks, obviously Nvidia is optimizing it for their own cards only, but that doesn't mean they actively modify the code to make it perform worse on AMD cards; not to mention it would be illegal. (GPU-only GameWorks effects can be disabled in game options if need be.)
Many (most?) games just utilize the CPU-only GameWorks modules; no performance difference between cards.
You're joking, right? The first game where they did just that was Crysis (they hid models under the water so ATI cards would render them too and be slower), and after that they kept cheating full time...
At the moment I am not concerned about the drivers. This card comes in at pretty impressive numbers; it looks to be slightly better than the 1080 Ti but with 16GB of memory, and not cheap memory either, so it will (likely) be useful in a few years. I want one!!
Which is pretty much the state of affairs regardless, is it not, cmd? Are you majorly impressed with the 2080 Ti??? It's only marginally faster than the three-year-old 1080 Ti as well.
I own 1080s and vega56s. Those vega56s would be a huge upgrade if I went to the new Vega. The 1080s? Meh.. yeah a little .. not much.. not worth the upgrade.
Time is relative. What if Nvidia and everybody else would choose to release a new generation every 5 years? Most so-called gamers in the World don't even have the "old" GTX 1080Ti.
You are totally right... I wonder why no reviewer ever says that. It's been proven many times that Radeon cards, 6 months after release, take the lead from their Nvidia competitors. Nvidia leads in older games; AMD is more future-proof. I don't buy a 700-euro card for 1 year; I keep it 3-plus years at least.
That's the difference between you and these tech reviewers and their accompanying "unbiased" trol...I mean bragge...commentators, COMMENTATORS! They "upgrade" to the latest and greatest each new generation :)
Not a bad showing by AMD but this card isn't the victory that they needed either. The gaming side is OK and lines up with the GTX 1080 Ti and RTX 2080 fairly well. On the compute side it is actually very good with the extra memory capacity and more bandwidth. I have a feeling that this card should have shipped with 128 ROPs which would have given it an edge at higher resolutions.
I'm also curious as to how this card would fare at even higher resolutions like 5K and 8K. The memory bandwidth is there to humor that idea, and it might be feasible to get playable frame rates in specific modern games. It'd also be interesting to see how it'd fare with some older, less demanding titles at these resolutions too.
This card feels like it's meant to fill the gap and not allow Nvidia to be the only player in the game for an extended period of time. This buys them time until their next architecture release.
Can you please retest the Radeon VII (an AMD part) on a 2nd-gen Ryzen with X470 and 16GB of RAM? You always run AMD parts on a non-AMD processor. Please retest and post results!
So why do they only test on Intel machines? Why not run the same tests on a Ryzen/Nvidia and Ryzen/Radeon combo? My point is that it simply never happens. Put aside the fact that Radeon always fares better on an AMD machine; it just seems odd is all. For the longest time, nearly every Intel machine ran Nvidia graphics. You are more likely to find a Radeon in an AMD machine than in an Intel one. See my point?
What link is that, DS? And if you ask me to google it, I will not take anything you say seriously. Or are you deliberately trolling? I know they do a side-by-side of Intel processors with their own to show the difference, but that's all. What is the link to the tests you are referring to? Either way, it is unbiased as they bench with both. Not so here, which was my point.
"Testing done by AMD performance labs 1/21/19 on an Intel Core i7 7700k, 16GB DDR4 3000MHz, Radeon VII, Radeon RX Vega 64, AMD Driver 18.50 and Windows 10. Using Resident Evil 2 @ 3840x2160, Max settings, DirectX® 11:Radeon VII averaged 53 fps. Radeon RX Vega 64 averaged 41 fps. PC manufacturers may vary configurations yielding different results. All scores are an average of 3 runs with the same settings. Performance may vary based on use of latest drivers. RX-291"
Because they are for the most part running gaming tests, and if you want to remove CPU bottlenecks you pick the CPU which you have that's fastest in games.
Which is Intel.
If you pick anything else then you are artificially constraining performance which tends to show a regression to the mean - in other words it'll make the difference between AMD and nVidia smaller (whichever one wins)
Equally the fact that AMD works best with AMD means they absolutely should *not* put an AMD processor in the system - that way they are artificially boosting system performance and skewing their benchmarks.
You really need to do some reading on how A/B testing is done. Wikipedia has a good article.
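A toy model of the point being made here (hypothetical numbers): the delivered frame rate is roughly capped by whichever of the CPU or GPU is slower, so benchmarking on a slower CPU compresses the gap between the cards being compared:

```python
def delivered_fps(cpu_limit, gpu_limit):
    # The frame rate you actually see is bounded by the slower stage.
    return min(cpu_limit, gpu_limit)

gpus = {"GPU A": 140, "GPU B": 120}            # hypothetical GPU-bound fps
for cpu, cpu_limit in {"fast CPU": 160, "slower CPU": 110}.items():
    results = {g: delivered_fps(cpu_limit, fps) for g, fps in gpus.items()}
    print(cpu, results, "gap:", results["GPU A"] - results["GPU B"])
# fast CPU   {'GPU A': 140, 'GPU B': 120} gap: 20
# slower CPU {'GPU A': 110, 'GPU B': 110} gap: 0
```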
It's not like it'll perform any better though... Intel still has generally better gaming performance. There's no reason to artificially hamstring the card, as it introduces a CPU bottleneck.
Once again, in gaming for the most part... try again with other apps and there is a marked difference, many of which are in AMD's favor. Try again.
Their choice of processor is kind of strange. An 8-core Intel on *plain* 14nm, now 2! years old, with rather low clocks at 4.3GHz, is not ideal for a gaming setup. I would have used a 9900K or 2700X personally[1]. For a content creator I'd be using a Threadripper or similar. Re-testing would be an undertaking for AT though. Probably too much to ask. Maybe next time they'll choose a saner processor. [1] The 9900K is 4.7GHz all cores. The 2700X runs at 4.0GHz turbo, so you'd lose frequency, but then you could use faster RAM. For citations see: https://www.intel.com/content/www/us/en/products/p... https://images.anandtech.com/doci/12625/2nd%20Gen%... https://images.anandtech.com/doci/13400/9thGenTurb...
I wonder why this card absolutely dominates in the "LuxMark 3.1 - LuxBall and Hotel" HDR test? It's pulling in numbers 1.7x higher than the RTX 2080 on that test. That's a funky outlier.
How much video memory is used? That is the key. Since many games and benchmarks are set up to test with a fairly low amount of video memory being needed(so those 3GB 1050 cards can run the test), what happens when you try to load 10-15GB into video memory for rendering? Cards with 8GB and under(the majority) will suddenly look a lot slower in comparison.
It could still be a difference between AMD's and Nvidia's OpenCL drivers. Nvidia only fairly recently started to focus on them. (Quite a few 2.0 features are still listed as experimental.)
That they changed the FP64 rate cap entirely in the BIOS makes me wonder: should the iMac Pro be updated with something like this (as Navi is supposed to be launching with the midrange first), would it have the double-precision rate cap at all, given Apple would be co-writing the drivers and all?
I feel AT needs to update the game list. I understand that these are probably easier to bench and are demanding, but most of us are curious about how it performs in games we actually play. Let's be real: how many of you or your friends play these games on the daily? BF1 and MAYBE GTA are popular, but not in the grand scheme of things.
We need a better spread of the APIs, and to denote which games are engineered specifically for AMD, Nvidia, or neither. I think that would be helpful when deciding which card should be in your rig.
Perhaps tell game developers to get with the times then? You can't test what isn't there, and the vast majority of games with repeatable benchmarks are DX11 titles. That is not Anandtech's fault.
Didn't say it was. Merely a suggestion/request. There are around 30 games released with DX12 support and about a dozen with Vulkan. Some of the DX11 titles tested for this review offer DX12 & Vulkan support. They exist and can be tested. If there is a reason to NOT test a DX12 or Vulkan version, for example RE2's broken DX12 implementation, OK, fair enough. I think it would offer a better picture of how each card performs overall.
Game   DX11    DX12    Vulkan
BF1    Tested  Yes     No
FC5    Tested  No      No
AotS   Yes     Tested  Yes
Wolf   Yes     Yes     Tested
FF     Tested  Maybe?  No
GTA    Tested  No      No
SoW    Tested  No      No
F1     Tested  No      No
TW     Tested  Yes     No
4 of the games tested with DX11 have DX12 implementations, and AotS has a Vulkan implementation. If the implementation is problematic, fair enough; put a footnote or a **, but there are games with DX12 and Vulkan out there on current engines, so it can be done.
Ryan, perhaps an article on the games, the engines, their API implementations, and how/why you choose to use or not use them in testing? I think it would be a good read.
What are these dozen games? Last time I checked there were only three or four modern games suitable for vulkan benchmarking: Wolfenstein 2, Doom, Strange Brigade and perhaps AotS.
IMO Wolfenstein 2 is enough to represent vulkan.
"Wolf Yes Yes Tested"
Wolfenstein 2 is vulkan only; no DX12.
As for DX12, yes, I too think they could add more.
My bad on Wolf. I thought it was. It's on XB1, which is DX12, and DX12 support was confirmed by a few places, so I didn't check further.
As for Vulkan games, off the top of my head (what's in my library): TWS: ToB, TW: Warhammer II (should have been in my table... oops), Warhammer 40K DoW III, the Serious Sam VR games, X-Plane. I'm sure there are others. Easy to look up.
IMO FPS should not be the definitive test for all APIs. Variety is always nice.
Cherry-pick my mistakes, but my point stands. I get that the test bed needs to be locked down so consistent results can be achieved. Anandtech needs to be able to give specific, measurable, and repeatable results, and they do that. I was merely expressing my desire to see a more balanced test suite in regard to APIs and games that are designed for Nvidia or AMD GPUs.
The benchmark suite gets updated on a roughly yearly basis. It was last updated for the Turing launch, so we're only about 5 months into it. As part of ensuring we cover a reasonable selection of genres, these were the best games available in the fall of 2018.
The next time we update it will presumably be for AMD's Navi launch, assuming that still happens in 2019. Though it's never too early to suggest what games you'd like to see.
WOW, StarCraft 2, Diablo 3, and some older games... games that don't really "need" a card like this. My current ASUS Strix 1060 plays these just fine at almost max eye candy. The only game I can think of that I have and play that might need this card is Supreme Commander, but I'm not sure if that game needs a strong CPU or GPU; maybe a bit of both...
Love me some Supreme Commander. Solid follow-up to Total Annihilation. As to its performance, I think it's more CPU-bound, and quite frankly the engine is not optimized for modern hardware.
Thirded on still enjoying SupCom! I have, however, long ago given up on attempting to find the ultimate system to run it. i7 920 @ 4.2GHz, nope. FX-8150 @ 4.5GHz, nope. The engine still demands more CPU for late-game AI swarms! (And I like playing on 81x81 maps, which makes it much worse.)
I've run SupCom on an i7 930 OC'd to 4.2 with a 7970: slow as molasses late in the game vs. the AI. And on my current i7 5930K and Strix 1060, same thing, very slow late in the game. The later patches supposedly helped the game use more than 1 or 2 cores; I think Gas Powered Games called it "multi-core aware".
Makes me wonder how it would run on something newer like a Threadripper, a top-end Ryzen, or a top-end i7 or i9 with a 1080+ video card, compared to my current comp...
I would like to see a mixture of games that are dedicated to a singular API, and ones that support all three or at least two of them. I think that would make for a good spread.
Not sure that I expected more. The clock-for-clock comparison against the V64 is telling. At $400 for the V64 vs $700 for the VII, ummm... if you need a compute card as well, sure; otherwise, Nvidia has the juice you want at better temps for the same price. Not a bad card, but it's not a great card either. I think a full 64 CUs may have improved things a bit more and even put it over the top.
Could you do a clock-for-clock comparison against the 56, since they have nearly the same CU count? I'd be curious to see this and extrapolate what a VII with 64 CUs would perform like, just for shits and giggles.
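For what it's worth, the Radeon VII ships with 60 CUs (vs. 56 on Vega 56 and 64 on Vega 64), so a naive back-of-the-envelope for the "full 64 CU" question, assuming perfectly linear scaling with CU count (optimistic, since clocks, ROPs, and bandwidth also matter):

```python
vii_cus, full_vega20_cus = 60, 64
naive_uplift = full_vega20_cus / vii_cus - 1
print(f"~{naive_uplift:.1%} upper bound from the extra CUs alone")   # ~6.7%
```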
Are you really suggesting that, given two products which are basically the same, you automatically default to NVIDIA because of temperatures?? This really is the NPC mindset at work. At least AMD isn't ripping you off with the price, Radeon VII is expensive to make, whereas NVIDIA's margin is enormous. Remember the 2080 is what should actually have been the 2070, the entire stack is a level higher than it should be, confirmed by die code numbers and the ludicrous fact that the 2070 doesn't support SLI.
Otoh, the Radeon VII is generally too expensive anyway; I get why AMD have done it, but really it's not the right way to tackle this market. They need to hit the midrange first and spread outwards. Stay out of it for a while, come back with a real hammer blow like they did with CPUs.
Well, they're not basically the same. Who's the NPC LOL? I have a V64 in my gaming rig. It's loud but I do like it for the price. The 2080 is a bit faster than the VII for the same price. It does run cooler and quieter. For some that is more important. If games is all you care about, get it. If you need compute, live with the noise and get the VII.
I don't care how expensive it is to make. If AMD could put out a card at this level of performance, they would, and they would sell it at this price. Barely anyone uses SLI/Crossfire; it's not worth it. I previously had 2x 290X 8GB in Crossfire. I needed a better card for VR, and the V64 was the answer. It's louder, but it was far cheaper than competitors. The game bundle helped. Before that, I had bought a 1070 for the wife's computer. It was a good deal at the time. Some of y'all get too attached to your brands and get all frenzied at any criticism. I buy what suits my needs at the best price/perf.
Anandtech doesn't ever seem to update reviews or prices. They'll compare a device from their history even if there have been months of driver updates that fixed performance issues, so they'll be using non-current info and everyone will assume it's current.
"Anandtech doesn't ever seem to update reviews or prices."
On the contrary, quite a bit was updated for this review. Though as driver performance has been rather stable as of late, performance hasn't exactly gone anywhere for most cards on most games.
If you see anything that seems wrong, please let us know. But we go out of our way to try to avoid using any card/driver combinations that result in performance issues.
It is still a nice card for professional/compute/rendering work. But for gaming, the price is maybe $50 too high, and AMD really needs to get some better-quality fans.
AHAHAHAHA... ray tracing... you know the real problem with ray tracing? It was never on the table until Jensen brainwashed shills into thinking it was important. By defending it, you obviously prove that you have no critical judgement.
By the way, the problem with RT/DLSS is that it will never be widely implemented because AMD owns the consoles, and devs develop on consoles. There is no monetary benefit to implementing gimmicky proprietary GameWorks features for 1% of the PC user base, unless Nvidia is paying you to do so.
It will never be a thing for the upcoming console generation. See you in 7 years, when it might be remotely relevant to the industry. As of now, unless you are rendering a CGI movie, it is worthless.
Both of the next-gen consoles are going to have ray tracing. Microsoft, who wrote and own the spec for the DX12 ray tracing extension currently used by PCs and are hence a strong backer of ray tracing, will make one of them.
Not going to happen, because RTX is proprietary, it is a closed environment, and it requires hardware acceleration that AMD is not going to pursue in the short term. Nvidia shot themselves in the foot by pushing it. Open source is the only way a new standard can be adopted. The whole G-Sync fiasco should have been enough to prove it.
Hardware could still run it, but the performance impact is just too big. At that point, developers like Sony's studios have incredible talent in creating new effects that look way more realistic.
Just looking at The Last of Us Part 2 is a good example.
Open source is NOT the only way a new standard can be adopted. Microsoft has been pushing DirectX 9/10/11, etc., and those are HUGELY popular standards. If MS is adopting it in their API, then yes, it'll show up in PC games.
Ray tracing is not a gimmick; it's been around since before you were born or Nvidia was even founded. It hasn't been "feasible" in real time and as such has been largely ignored in gaming. Many other technologies were not feasible until they were, and then got incorporated. Graphics is more than just getting 60FPS, otherwise everything would just be black and white without shadows. It's about realism, which means proper lighting, shadows, and physics.
People need to call out the price. If you're a regular Joe who's just getting a card for gaming and not mining or business use, why would you buy this over the competition? They seriously need to drop the price by $100 or it'll be a tiny seller.
RTX is just Nvidia's way of doing DXR, which is Microsoft's IP. AMD has already announced development for it, to be integrated into their GPUs in the future. RT has been announced by both Sony and MS for their next consoles. Of course, because of their use of AMD GPUs, the application of RT would be of a lower quality compared to what RTX can do. It is very much like the current console implementation of anti-aliasing, HBAO or tessellation, where on consoles you get a very basic level of those features, but on decent PCs they can be cranked up much higher.
"The whole G-synch fiasco should have been enough to prove it." This is nothing like G-Sync. The problem with GSync is the extra cost. Now considering that the 2080 is the same price/performance as a Radeon VII, but has hardware DXR (RTX) as well, you're essentially getting the ray-tracing add-in for free.
Thirdly, while many things can be faked with rasterization to be within the approximation of ray-tracing, it requires far greater work (not to mention, artistic talent) to do it. In rasterization, a graphics designer has to first guess what a certain reflection or shadow would look like and then painstakingly make something that could pass off for the real thing. Raytracing takes that guesswork out of the task. All you, as a developer, would need to do is place a light or a reflective surface and RT would do the rest with mathematical accuracy, resulting in higher quality, a much faster/smoother development, fewer glitches, and a much smaller memory/storage footprint for the final product.
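To make that concrete, a toy sketch of the shadow case: with ray tracing, "is this point lit?" is just a geometric query (cast a ray toward the light and check for blockers) rather than something an artist has to bake or fake. The single-sphere scene here is purely illustrative:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0
    # (direction is assumed to be a unit vector).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    return disc >= 0 and (-b - math.sqrt(disc)) / 2 > 1e-6

def in_shadow(point, light, occluder_center, occluder_radius):
    # Cast a single "shadow ray" from the surface point toward the light.
    # (Simplified: doesn't bother checking for hits beyond the light.)
    to_light = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    direction = [x / dist for x in to_light]
    return ray_hits_sphere(point, direction, occluder_center, occluder_radius)

print(in_shadow((0, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))  # True: sphere blocks the light
print(in_shadow((3, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))  # False: clear line of sight
```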
RTX is a proprietary implementation that is compatible with DirectX RT. AMD may eventually do DirectX RT, but it will be their own version. As far as consoles go, unless Navi has some kind of RT implementation, you're right, no RT of any significance. At best it will be a simple PC graphics option that works in a few titles, maybe like HairWorks lol.
It is... a GameWorks feature... as of now. RTX/DLSS are nothing more than 2 new GameWorks features that will just break games, once again to cripple the competition.
The goal is not even to have RTX or DLSS; it is to force developers to use their proprietary tools to break game code and sabotage the competition, like in The Witcher 3.
RTX is nothing good as of now. It is a tax, and it hurts performance. Let's talk about it when it can be implemented in real time. Until then, let Nvidia feel the burden of it.
I do agree that these RTX/DLSS features absolutely do not justify the current prices and that nvidia should've waited for 7nm to mature before adding them, but let's not get so emotional.
GameWorks effects are simply modules that can be added to a game and are not part of the main code. Also, its GPU-based features can be disabled in options, as was the case in The Witcher 3.
I agree. Huang should have listened to himself when he said that ray tracing would be a thing in 10 years (but he wanted to bring it to market now). Remember when there were 2D and 3D accelerators? I say we should be able to choose 3D or ray-tracing accelerators.
Because everyone is already playing Anthem at 4k 60fps with a $400 card? Ray tracing is totally useless and we need way more rasterization performance per dollar than we have right now. Give me a 7nm 2080 ti without the RT cores for $699 and then we'll talk.
Fair; the main objective of a gaming GPU is shaders per $. GameWorks gimmicks are not something I'd call a selling factor... and Nvidia is forced to cook their books because of it.
Man, I've never seen such a hostile response to an Anandtech article. People need to relax, it's just a videocard.
I don't see this as a win for AMD. Using HBM2 the card is expensive to produce, so they don't have a lot of freedom to discount it. Without a hefty discount, it's louder, hotter, and slower than a 2080 at the same price. And of course no ray-tracing, which may or may not matter, but I'd rather have it just in case.
For OpenCL work it's a very attractive option, but again, that's a loser for AMD because they ALREADY sold this card as a workstation product for a lot more money. Now it's discounted to compete with the 2080, meaning less revenue for AMD.
Even once the drivers are fixed, I don't see this going anywhere. It's another Vega64.
There are still a lot of people for whom a Radeon Instinct was just never going to happen, INCLUDING people who might have a workstation where they write code that will mostly run on servers; it means you can run/test your code on your workstation with a fairly predictable mapping to final server performance.
As Nate said in the review, it's also very attractive to academics, which benefits AMD in the long run if say, a bunch of professors and grad students learn to write ML/CL on Radeon before say, starting or joining companies.
Yes, it's attractive to anyone who values OpenCL performance. They're getting workstation-class hardware on the cheap. But that does devalue AMD's workstation product line.
Anyone else think that the Mac Pro is lurking behind the Radeon VII release? Apple traditionally does a March 2019 event where they launch new products, so the timing fits (especially since there's little reason to think the Pro would need to be launched in time for the Q4 holiday season).
-If Navi is "gamer-focused" as Su has hinted, that may well mean GDDR6 (and rays?), so wouldn't be of much/any benefit to a "pro" workload -This way Apple can release the Pro with the GPU as a known quantity (though it may well come in a "Pro" variant w/say, ECC and other features enabled) -Maybe the timing was moved up, and separated from the Apple launch, in part to "strike back" at the 2080 and insert AMD into the GPU conversation more for 2019.
The timeline and available facts seem to fit pretty well here...
I mean, sure? but I'm not sure WHAT market Apple is going after with the Mac Pro anyways... I mean, would YOU switch platforms (since anyone who seriously needs the performance necessary to justify the price tag in a compute-heavy workload has almost certainly moved on from their 2013 Mac Pro) with the risk that Apple might leave the Pro to languish again?
There's certainly A market for it, I'm just not sure what the market is.
The Radeon VII does seem to be one piece of the puzzle, as far as the new Mac Pro goes. On the CPU side Apple still needs to wait for Cascade Lake Xeon W if they want to do anything more than release a modular iMac Pro though. I can't imagine Apple will ever release another dual-socket Mac, and I'd be very surprised if they switched to AMD Threadripper at this point. But even still, they would need XCC based Xeon W chips to beat the iMac Pro in terms of core count. Intel did release just such a thing with the Xeon W 3175X, but I'm seriously hoping for Cascade Lake over Skylake Refresh for the new Mac Pro. That would push the release timeline out to Q3 or Q4 though.
The Radeon VII also appears to lack DisplayPort DSC, which means single cable 8K external displays would be a no-go. A new Mac Pro that could only support Thunderbolt 3 displays up to 5120 x 2880, 10 bits per color, at 60 Hz would almost seem like a bit of a letdown at this point. Apple is in a bit of an awkward position here anyway, as ICL-U will have integrated Thunderbolt 3 and an iGPU that supports DP 1.4a with HBR 3 and DSC when it arrives, also around the Q3 2019 timeframe. I'm not sure Intel even has any plans for discrete Thunderbolt controllers after Titan Ridge, but with no PCIe 4.0 on Cascade Lake, there's not much they can even do to improve on it anyway.
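Rough numbers behind the DSC point (blanking overhead ignored to keep it simple): an 8K60, 10-bit-per-channel signal needs far more than DisplayPort HBR3 can carry uncompressed, which is why DSC support matters for single-cable 8K:

```python
width, height, refresh, bpp = 7680, 4320, 60, 30   # 10 bits per channel, RGB
raw_gbps = width * height * refresh * bpp / 1e9
hbr3_payload_gbps = 25.92                          # 4 lanes x 8.1 Gbps, after 8b/10b
print(f"{raw_gbps:.1f} Gbps needed vs {hbr3_payload_gbps} Gbps available")
# ~59.7 Gbps vs 25.92 Gbps -> only fits with DSC (~3:1 visually lossless)
```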
So maybe the new Mac Pro is a Q4 2019 product and will have Cascade Lake Xeon W and a more pro-oriented yet Navi-based GPU?
Possibly, but I'm not 100% sure that they need to be at the iMac Pro on core count to have a product. More RAM (with a lot of slots that a user can get to) and a socketed CPU with better thermals than you can get on the back of a display might do it. I'd tend to think that moving to Threadripper (or EPYC) is a pipe dream, partly because of Thunderbolt support (which I guess, now that it's open, Apple could THEORETICALLY add, but it just seems unlikely at this point, particularly since there'd be things where a Intel-based iMac Pro might beat a TR-based Mac Pro, and Apple doesn't generally like complexities like that).
Also, I'd assumed that stuff like DSC support would be one of the changes between the consumer and Pro versions (and AMD's Radeon Pro WX 7100 already does DSC, so it's not like they don't have the ability to add it to pro GPUs).
The Radeon Pro WX 7100 is Polaris 10, which does not do DSC. DSC requires fixed function encoding blocks that are not present in any of the Polaris or Vega variants. They do support DisplayPort 1.3 / 1.4 and HBR3, but DSC is an optional feature of the DP spec. AFAIK, the only GPUs currently shipping that have DSC support are NVIDIA's Turing chips.
The CPU in the iMac Pro is a normal, socketed Xeon W, and you can max the RAM out at 512 GB using LRDIMMs if you're willing to crack the sucker open and shell out the cash. So making those things user accessible would be the only benefit to a modular Mac Pro. CPU upgrades are highly unlikely for that platform though, and I doubt Apple will even provide two DIMM slots per channel in the new Mac Pro. However, if they have to go LGA3647 to get an XCC based Xeon W, then they'd go with six slots to populate all of the memory channels. And the back of a display that is also 440 square inches of aluminum radiator is not necessarily a bad place to be, thermally. Nothing is open about Thunderbolt yet, by the way, but of course Apple could still add existing Intel TB3 controllers to an AMD design if they wanted to.
So yeah, in order to have a product, they need to beat the iMac Pro in some meaningful way. And simply offering user accessible RAM and PCIe slots in a box that's separate from the display isn't really that, in the eyes of Apple at least. Especially since PCIe slots are far from guaranteed, if not unlikely.
Apple cannot ship a Mac Pro with a vacuum cleaner. That 43 dBA is insane. Even if Apple downclocked and undervolted it in the BIOS, I doubt they could make it very quiet.
Also, I doubt AMD is willing to sell tons of them at a loss.
So I got a 1080Ti at launch, because there was no other alternative at 4K. Finally we have an answer from AMD, unfortunately it's no faster than my almost 2 year old GPU, priced the same no less.
I really think this would've benefitted from 128 rops, or 96.
If they had priced this at 500 dollars, it would've been a much better bargain.
I can't think of anyone who I would recommend this to.
To be fair, you could almost say the same thing about the 2080, "I got a 1080 Ti at launch and 2 years later, Nvidia released a GPU that barely performs better if you don't care about gimmicks like ray tracing."
People who do gaming and compute might be very well tempted, people who don't like Nvidia (or just do like AMD) might be tempted.
Unfortunately, the cost of the RAM in this thing alone is probably nearly $350, so there's no way AMD could sell this thing for $500 (but it wouldn't surprise me if we see it selling a little under MSRP if there is plentiful supply and Nvidia can crank out enough 2080s).
That was the whole point of RTX. Besides the 2080 Ti, there was nothing new. You were getting the same performance for around the same price as the last generation. There was no price disruption.
We're supposed to buy a clearly inferior product (look at that noise) just so they can sell leftover and defective Instincts?
We're supposed to buy an inferior product because AMD's bad business moves have resulted in Nvidia being able to devalue the GPU market with Turing?
Nope. We're supposed to either buy the best product for the money or sit out and wait for something better. Personally, I would jump for joy if everyone would put their money into a crowdfunded company, with management that refuses to become eaten alive by a megacorp, to take on Nvidia and AMD in the pure gaming space. There was once space for three players and there is space today. I am not holding my breath for Intel to do anything particularly valuable.
Wouldn't it be nice to have a return to pure no-nonsense gaming designs, instead of this "you can buy our defective parts for high prices and feel like you're giving to charity" and "you can buy our white elephant feature well before its time has come and pay through the nose for it" situation.
Capitalism has had a bad showing for some time now in the tech space. Monopolies and duopolies reign supreme.
Well, at least you can buy a 2080 Ti, even though the 2080 is of course at the same price point as the 1080 Ti. But I won't buy a 2080 Ti either; it's too expensive and the performance increase is too small.
The last decent AMD card I had was the R9 290X. Had that for a few years until the 1080 came out, and then replaced that with a 1080 Ti when I got an Acer Predator XB321HK.
I will wait until something better comes along. Would really like HDMI 2.1 output, so that I can use VRR on the upcoming LG OLED C9.
Oh, also, FWIW: The other way of looking at it is "damn, that 1080 Ti was a good buy. Here I am 2 years later and there's very little reason for me to upgrade."
Yeah, of course I am looking at it that way :-) But I also like tech, and find the progress lacking these last years. Longer development cycles and diminishing returns for a lot more dollars.
In short, ML results take longer to put together than these relatively short embargoes allow for. It's also not a primary market for this card, so other things such as gaming performance testing get priority.
That said, we're curious about it as well. Now that we're past the embargo, check back in later this month. We have an RTX Titan review coming up, which will give us a great opportunity to poke at the ML performance of the Radeon VII as well.
I will be curious to see that. Compute/ML/Rendering/Content Creation comparison. I was more looking for this in all honesty since we knew what to expect from the card from the beginning.
I would think this was expected: AMD trying their best to go up against Nvidia, and probably releasing now because of some of the struggles RTX is having with unit issues.
But at this stage in my life, personally, I don't need a high-end graphics card, though I would go Nvidia because of past good experience. In any case, how many owners actually need a high-end card? For the majority (90+%) of people, integrated graphics are good enough for spreadsheets, internet, and word processing.
Read what he said again. "For the majority (90+%) of people, integrated graphics are good enough for spreadsheets, internet, and word processing"
Guess what nearly every office laptop and desktop uses? If it wasn't "good enough" there would be a push for more powerful iGPUs in widespread circulation.
The basic Intel iGPU is far more than enough to do office work, stream video, or normal workstation content. More powerful GPUs are only needed in specific circumstances.
I know what he said and I know what he means... and I know he has Intel painted all over his body.
And no, a basic HD 520 is not enough, period. You can barely do office work and play videos.
If it were so true, the mobile market would not be the most lucrative for games/entertainment. As of now, a smartphone has more GPU power than an HD 520.
Out of 500 PCs at my job a whopping two have video cards, they are random low ends ones purchased to add additional video ports for two stations that run quad monitors. Otherwise there is zero need for a dGPU.
This is typical of the vast majority of businesses.
1. I believe there will be better drivers for the VII; it was quite clear that many optimisations were not done in time, although I don't know how long it will take. The new AMD seems to be quick to react, though.
2. What if AMD decided to release the MI60 as a VII at $899?
The VII is the Vega arch, with more ROPs. If AMD managed to leave that much performance on the table, they must be the most incompetent code writers in existence.
The Vega arch has long been optimized for; adding some ROPs isn't going to require much work to optimize for, and AMD has likely already done that.
Optimisation is now largely done on a per-AAA-game level, and more importantly it covers not only the drivers but the game itself. Whether the developers are willing to optimise the game (with the help of AMD) is another story.
Nice to see AMD being competitive again. It's a pity they've priced the card so high in Europe that you can get a RTX 2080 for 100 euros less. At that price point they won't be selling many.
1) Undervolting is a crap shoot due to binning and other factors, not a solution that you can simply apply as a fix for every one of these cards.
2) Saying that some other cards are even louder is a complete avoidance of the issue. The issue is that Nvidia is crushing the noise-to-performance metric with the 2080, according to the presented data in this article. AMD is not, at all, competitive.
It's also possible that a GPU will run at a lower voltage than what is optimal without artifacting and yet perform more slowly. Chips are typically able to do some compensation with error correction to handle inadequate voltage, but the result is reduced speed.
You're just looking for reasons to not like it. It's an awesome card according to reviews. Is it a 2080 Ti killer? No (shrug). Maybe it might force some pricing down, though, so you can get one of those... maybe. For me the 2080 Ti is 2x the price of the 1080s I own, and I'll not pay that for a video card unless I am in a business setting that requires it.
It doesn't "sux". It is just not disruptive enough for Nvidia fans to expect a price cut on RTX, which is pissing off mroe Nvidia fans than AMD ones it seems.
The performances in games are okay, and the compute is really strong. If it is cheaper, it is a better buy. At the same price, I will go Nvidia.
However in Canada, the 2080 RTX is 50-100$ more expensive for blower style cards... with similar accoustics and worst temps.
"If it is cheaper, it is a better buy. At the same price, I will go Nvidia. However in Canada, the 2080 RTX is 50-100$ more expensive for blower style cards... with similar accoustics and worst temps."
Tu quoque = some blower models are loud, too.
$50-$100 is a very low price tag for one's hearing, comfort, and ability to enjoy audio whilst gaming and/or using the card for other intensive purposes.
1) Headphones don't negate all noise. Not even the combination of earplugs and headphones designed to absorb noise (and not produce audio) will get rid of noise. It still comes through. One can blast the audio at a higher volume and damage one's hearing to try to cover up noise, but that is why the younger iPod/iPhone generations are facing epidemic levels of hearing damage.
2) Headphones, as a requirement, are a limitation of the product's functionality.
Firstly, they become uncomfortable. Secondly, they tend to aggravate tinnitus for people with it. Thirdly, they are an extra expense. Fourthly, some have good speaker systems they want to make use of. Etc.
Why advocate limiting one's possibilities for basically the same price, when compared with other, more flexible, products? It's silly. You're gaining nothing and losing potential usefulness.
The only way the headphones point works much in your favor is if the same thing is required of Nvidia's GPU. Otherwise, it's merely you stating that you are a subset of the use cases for this GPU that isn't affected by the noise problem. A subset is not the entirety by any means.
Deaf folks don't have to worry about noise, too. Does that mean they should attempt to dismiss noise problems for everyone else?
Only $699? This is a midrange GPU in much the same way the $750 monitor was a midrange screen. By recent Anandtech standards, the price does not warrant any mention of high-end. Come on people, we need some consistency on the use of these terms!
All teasing about the writing aside, it is nice to see a bit of competition. The Radeon VII is way out of my interest range as a product (it has 8x more VRAM than my daily use laptop has system RAM) but I hope it causes a Red and Green slapfest and brings prices down across all graphics cards. Maybe I'm being too optimistic though.
peachncream... maybe not in your books, but this is not a midrange card... maybe high-end midrange :-) Um, seems all you have are notebooks; you're not in the market for a discrete card ;-) Your laptop only has 2 gigs of RAM?? Wow...
Sorry about that. The Radeon VII is very much out of the range of prices I'm willing to pay for any single component, or even a whole system for that matter. I was zinging about the GPU being called high-end (which it rightfully is) because in another recent article, a $750 monitor was referred to as midrange. See:
It was more to make a point about the inconsistency with which AT classifies products than an actual reflection of my own buying habits.
As for my primary laptop, my daily driver is a Bay Trail HP Stream 11 running Linux so yeah, it's packing 2GB of RAM and 32GB of eMMC. I have a couple other laptops around which I use significantly less often that are older, but arguably more powerful. The Stream is just a lot easier to take from place to place.
It could be that the manufacturer refers to it (the monitor) as a midrange product in their product stack, and AT just calls it that because of that?
Purely on the gaming front, this can't really compete with the RTX 2080 (unless some big enough perf change comes with new drivers soon)... it's performing almost the same, but at a bit more power, hotter, and almost 10 dB louder, which is quite a lot. Given that it won't be able to offer anything more (as opposed to possible adoption of DXR), I wouldn't expect it to try to compete at the same price level the RTX 2080 does.
If it can get $50-$100 lower otoh, you get what many people asked for... kind of "GTX 2080" ... classic performance without ray tracing and DLSS extensions.
With current price though It only makes sense if they are betting they can get enough compute buyers.
Well, it's "lab conditions", it can always get dampened with good chasis or chasis position to reasonable levels and hopefully noone should be playing with head stuck inside the chasis... For me subjectively it would be too loud, but I wanted to give the card advantage of doubt, non-reference designs should hopefully get to lower levels.
1) The Nvidia card will be quieter in a chassis. So, that excuse fails.
2) I am not seeing significant room for doubt. Fury X was a quiet product (except at idle which some complained about, and in terms of, at least in some cases, coil whine). AMD has chosen to move backward, severely, in the noise department with this product.
This card has a fancy copper vapor chamber with flattened heatpipes and three fans. It also runs hot. So, how is it, at all, rational to expect 3rd-party cards to fix the noise problem? 3rd-party makers typically use 3 slot designs to increase clocks and they typically cost even more.
Well, not really. If the quieter chassis cuts off enough dB to get it out of the disturbing range, it will be enough. It also depends on the environment... if you play in a loud environment (daytime, loud speakers), the noise won't be perceived as badly as if you play during the night with quieter speakers. I.e., what can be sufferable during the day can turn into complete hell during the night.
That being said, I am by no means advocating +10 dB, because it is a lot, but in the end it doesn't have to be such a terrible obstacle.
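To put the "+10 dB is a lot" point in numbers: decibels are logarithmic, so a 10 dBA increase is ten times the sound power and, as a common rule of thumb, roughly twice the perceived loudness:

```python
delta_db = 10
power_ratio = 10 ** (delta_db / 10)       # 10x the sound power
loudness_ratio = 2 ** (delta_db / 10)     # ~2x perceived loudness (rule of thumb)
print(power_ratio, loudness_ratio)        # 10.0 2.0
```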
It is very early; there can always be a bug in drivers or BIOS causing this temp/noise issue, or it can be a design problem that cannot be circumvented. But that will only be seen after some time. I remember a bug in ForceWare causing my old GTX 580 not to drop to 2D frequencies once it kicked into 3D (or was it the 8800GT, I don't really remember)... you had to restart the machine. Such things can simply happen, which doesn't make them any better ofc.
"If the quieter chassis cuts of enough dB to get it out of disturbing level it will be enough."
Nope. I've owned the Antec P180. I have extensively modified cases and worked hard with placement to reduce noise.
Your argument that the noise can simply be eliminated by putting it into a case is completely bogus. In fact, Silent PC Review showed that more airflow, from less restriction (i.e. a less closed-in case design) can substantially reduce GPU noise — the opposite of the P180 philosophy that Silent PC Review once advocated (and helped to design).
The other problem for your argument is that it is 100% logically true that there is zero reason to purchase an inferior product. Since this GPU is not faster than a 2080 and costs the same there is zero reason to buy a louder GPU, since, in actuality, noise doesn't just get absorbed and disappear when you put it into a case. In fact, this site wrote a review of a Seasonic PSU that could be heard "from rooms away" and I can hear noisy GPUs through walls, too.
"It is very early, there can always be a bug in drivers or bios causing this temp/noise issue"
Then it shouldn't be on the market and shouldn't have been sampled. Alpha quality designs shouldn't be review subjects, particularly when they're being passed off as the full product.
Lisa Su is liar and AMD hates gamers. This is just a publicity stunt and a way to give a gift to their friends in the Tech Media. This was created for YouTube content creators and not for people who play games. Another Vega dumpster fire.
But many YouTubers play games as their content. And people vicariously watch them, so effectively it's letting many people play at once, just for the cost of the video decode - which is far more efficient!
1) Because it has worse performance than even Piledriver.
2) Because the two Jaguar-based pseudo-consoles splinter the PC gaming market unnecessarily.
Overpriced and damaging to the PC gaming platform. But consumers have a long history of being fooled by price tags into paying too much for too little.
What does that have to do with anything? No console game, ever, could be installed on a PC.
Current consoles having x86 processors means absolutely nothing. Consoles are defined by their platform, not processors.
It'd be like complaining about the Switch (which you deem a real console) not being able to install Android games, or complaining that Switch games can't be installed on Android phones.
1) Where's the proof?? Links to this, perhaps? 2) Again, where is the proof?? Considering they are also DirectX-based, that should make porting them to the comp a little easier... so, not splintering anything...
Never said that... while the core of the game could be the same, the underlying software that allows the games to be run is different. As Eddman said, no console game can be run on a comp, and vice versa. I know I can't take any of the console games I have and install them on my comp; it can't even read the disc. Same goes for a comp game on a console... it just won't read it.
D lister, are you able to do this somehow? (And I don't mean by use of an emulator, either.)
They look like GameWorks or something to me but I can't see why anyone cares about FF anyway. I hurt my face smirking when I saw the footage from that benchmark. Those hairstyles and that car... and they're going fishing. It was so bad it was Ed Wood territory, only it takes itself seriously.
People, don't forget that Vega GPUs have the memory right beside the GPU core, which makes them run hotter than normal GPUs out there. That has a lot to do with how hot it seems to be; the temperature tends to rise more due to the memory temps being in the same area.
True enough, but owners of the 56/64 have found many workarounds for such things, as the cards haven't needed as much power as they push out of the box. My cards (56s) use 220W of power per card. They never go over 65C in any situation and usually sit in the high 50s to low 60s with their undervolts.
Better than a 64 in all situations and comparable to a 1080 Ti in all situations, with only a 5-6% performance hit against the 2080, which is costing 50-100 more here in Canada (according to pre-order prices). Yep, I'm sold.
Your favorite spelling/grammar guy is here. (AT Audience: Boo!) "Faced with a less hostile pricing environment than many were first expecting, AMD has decided to bring Vega 20 to consumers after all, duel with NVIDIA one of these higher price points." Missing words (and & at): "Faced with a less hostile pricing environment than many were first expecting, AMD has decided to bring Vega 20 to consumers after all, and duel with NVIDIA at one of these higher price points."
"Which is to say that there's have been no further developments as far as AMD's primitive shaders are concerned." Verb tense problem: "Which is to say that there's been no further developments as far as AMD's primitive shaders are concerned."
Thanks for the review! I read the whole thing. The F@H results for Vega are higher than I predicted (Which is a good thing!).
I was joking. Some site content creators call people like me "the spelling and grammar trolls". I can never really be certain, so I try to be a little funny in hopes that nobody will take my corrections as "troll" actions. I don't know how you guys feel, but you've always taken mine and others' corrections into consideration.
Our flaws and errors are our own doing. When pointed out, it's our job as journalists to correct them. So as long as people are being polite about it, we appreciate the feedback.
This card is a turkey for gamers. AMD fixed the noise level problem with Fury X and now we're getting less value than we did then. It's too loud.
"Also new to this card and something AMD will be keen to call out is their triple-fan cooler, replacing the warmly received blower on the Radeon RX Vega 64/56 cards."
Is the sarcasm really necessary? If you're going to mention the cooler thing, why not point out just how far AMD has regressed in terms of noise? Remember Fury X, a card that stays nice and quiet under load?
"Vega 20 has nothing on paper to push for its viability at consumer prices. And yet thanks to a fortunate confluence of factors, here we are."
Oh please:
Fiji: 596 mm2 for $650. Vega 10 495 mm2 for $500. Vega 20 331 mm2 for $700.
Anandtech says it's all so shocking that Vega 20 is available to consumers at all. Eyeroll. No. For $700, AMD could have put that extra die area to more use and given us 8 GB of VRAM. But that would involve doing the impossible and making a GPU that is attractive to gamers, not just peddling low-end Polaris rehashes indefinitely.
Consumers aren't getting the best value here. They're getting leftovers just as they did with Bulldozer/Piledriver — parts that were targeted at the server market first and not consumers. At least with Vega 20, though, there is some competitiveness, although this is mainly because Nvidia is artificially crippling the value of the GPU market with its inflated pricing strategy. That is what monopolies do, of course. Look at how long Intel was able to coast with Sandy-level performance.
"At 3.5 TLFLOPS of theoretical FP64 performance, the Radeon VII is in a league of its own for the price. There simply aren’t any other current-generation cards priced below $2000 that even attempt to address the matter."
That's marvelous for the people who are able to care about FP64, unlike gamers.
This is what happens when there isn't enough competition in a market. Gamers get the choice of two shafts: Turing and Vega.
At least the Switch is a real console. I'm not talking about that. I'm talking about awful low-end PCs being falsely called consoles, which has been the practice since Jaguar became an (unfortunate) thing.
like in a previous post of yours.. are you forgetting that the xbox and xbox 360 were also the " low end " PCs that you are claiming?? the switch is a real console?? ha.. the nintendo switch is based off of the Tegra SoCs from nvidia... in a way.. " still " a low end PC......
The reason the Switch qualifies as a console is that it does something differently vis-à-vis the x86 gaming PC platform. It has a different form factor and related functionality. Artificial software walled gardens do not truly differentiate Sony and MS's low-end PCs from the PC gaming market. They are merely anti-consumer kludge that people have chosen to prop up with their cash.
Merely having an x86 processor does not make something equivalent to an x86 PC. The Switch is clearly not the same thing as a low-end PC box like a Jaguar-based rubbish console. I am not particularly enamored with the Switch but at least Nintendo is offering something different to better justify its approach.
this sounds more like your own personal opinion and nothing more.. for some reason you hate the current consoles, and seems like there is NO reason for your hate...
nintendo has offered something different for a console since the 1st Wii, and honestly, look where it has gotten them... the xbox and playstation platforms outsold the nintendo systems up to the switch, which has outsold the other 2.. but the games themselves on the nintendo systems.. are lacking..
"this sounds more like your own personal opinion and nothing more.. for some reason you hate the current consoles, and seems like there is NO reason for your hate..."
Pointing out that (according to the poster) you're just expressing your opinion and "hate" without reasoning isn't an ad hominem; you used the term incorrectly earlier in this thread also. Pretty embarrassing to be simultaneously so conceited and so wrong.
"You should never listen to a word Oxford Guy has to say because he's a frothing fanboy whose posts reek of desperation and are probably indicative of an inability to get laid"
" You should never listen to a word Oxford Guy has to say because he's a frothing fanboy whose posts reek of desperation and are probably indicative of an inability to get laid "
about someone.. doesnt prove your point any better...
It's not a turkey at all.. it beats a Vega 64 by around 30%, adds 2x the RAM (which is not really utilized yet), has a 3-fan design with AMD's top end shroud/block, takes less power, runs cooler, and has the same characteristics, which means AMD was generous on power, so undervolting it without appreciable performance losses will be easy enough to do, as will overclocking.
For me that's a winner. I have blower 1080s and they're very loud if I let them be or run things at stock (I undervolt there too..), and I've seen how loud the Vega 56/64 blowers can be.. this with the 3 fans? pfft.. way quieter.
If you want to play with AI, you need TensorFlow, and for a "server" card at this price, it doesn't make sense to not support TensorFlow. AI is everywhere today. This card is obsolete.
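For what it's worth, a ROCm build of TensorFlow (the tensorflow-rocm package) was already being published for Vega-class GPUs around this time; whether it runs well on this particular card is a separate question. A minimal sanity check, assuming that package and AMD's ROCm stack are installed (TensorFlow 1.x API), might look like this:

```python
# Quick check that TensorFlow can see a GPU device.
# Assumes the tensorflow-rocm package and AMD's ROCm stack are installed (TF 1.x API).
import tensorflow as tf
from tensorflow.python.client import device_lib

print("GPU available:", tf.test.is_gpu_available())
for dev in device_lib.list_local_devices():
    print(dev.device_type, dev.name)
```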
So is this too little too late? I'm bewildered that even at 7nm this card is pulling 300W of power and generating insane noise.
It's also unfortunate that the rumor of 128 ROPs was bunk. These cards definitely have an imbalance in the CU to ROP ratio. Nvidia Titan Xp had 96 ROPs strapped to 3840 SPs but AMD is shipping a max of 64?
"It's also unfortunate that the rumor of 128 ROPs was bunk."
That rumor typifies the irrational thinking that plagues the gaming community. AMD isn't going to make the effort of changing the Instinct GPU to better suit gamers. It isn't and it hasn't.
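For reference, the imbalance being described works out roughly as follows (shader and ROP counts taken from the comments above, treated as assumptions rather than verified specs):

```python
# Shader-to-ROP ratio, using the counts cited in the comments above.
cards = {
    "Titan Xp (Pascal)":    (3840, 96),
    "Radeon VII (Vega 20)": (3840, 64),
}
for name, (shaders, rops) in cards.items():
    print(f"{name}: {shaders // rops} shaders per ROP")
# Titan Xp: 40 shaders per ROP; Radeon VII: 60 shaders per ROP.
```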
I wonder, do people actually read and comprehend these articles? By now it should be obvious to everyone, that VII is not and was never supposed to be AMD's next generation of GPU. In fact, they always denied that Vega 7nm would make it into the consumer market - and for very good reason: they had Navi for that. Now that Navi is delayed, they need something for people to talk about - and talk about it we do.
The first part of your conclusion describes what this product is. It is surprising to see this card exist at 7nm, a Vega with 16GB of HBM2. It appears to me that AMD/TSMC is learning the 7nm process for GPUs/CPUs, and the few chips they produce can be sold as a high end part (as volume/yields are being improved). AMD really shot high with its power consumption (clocks) and memory to reach the pricing of the RTX 2080.
However, I haven't seen a publication show undervolting results. Most Vegas perform better with this tweak.
I think you are being a little too critical of this card. Considering it’s an older architecture, it’s impressive it’s in the 2080’s ballpark.
And for those like me that only care about Frostbite Engine based games, this card is obviously the better option of the two cards at the same price.
You also ignored the overclocking potential of the headroom given by moving to 7nm.
"You also ignored the overclocking potential of the headroom given by moving to 7nm."
Unfortunately it seems to be already overclocked to the max on the core. VRAM has some headroom but another couple of hundred MHz isn't going to do wonders considering the already exorbitant amount available.
"I think you are being a little too critical of this card."
Unless someone can take advantage of the non-gaming aspects of it, it is dead in the water at the current price point. There is zero reason to purchase a card, for gaming only, that uses more power and creates vastly more noise at the same price point as one that is much more efficient for gaming purposes. And the only way to tame the noise problem is to either massively undervolt it or give it water. Proponents of this GPU are going to have to show that it's possible to buy a 3 slot model and massively undervolt it to get noise under control with air. Otherwise, the claim is vaporware.
Remember this information? Fiji: 596 mm2 for $650. Vega 10 495 mm2 for $500. Vega 20 331 mm2 for $700.
Yes, the 16 GB of RAM costs AMD money, but it's irrelevant for gaming. AMD not only gave the community nearly 600 mm2 of chip, it paired it with an AIO to tame the noise. All the talk from Su about improving AMD's margins seems to be something that gamers need to stop lauding AMD for and start thinking critically about. If a company only has an inferior product to offer and wants to improve margins, that's going to require that buyers be particularly foolish.
I wouldn't call the 16GB irrelevant. It trumps the 2080 in the two most demanding 4K titles, and comes relatively close in other ultra high resolution benchmarks.
It could be assumed that's a sign of things to come as resolutions continue to increase.
"It could be assumed that's a sign of things to come as resolutions continue to increase."
Developers adapt to Nvidia, not to AMD. That appears to be why, for instance, the visuals in Witcher 3 were watered-down at the last minute — to fit the VRAM of the then standard 970. Particularly in the context of VRAMgate there was an incentive on the part of Nvidia to be certain that the 970's VRAM would be able to handle a game like that one.
AMD could switch all of its discrete cards to 32 GB tomorrow and no developers would bite unless AMD paid them to, which means that 32 GB would be of little use.
This offering is truly a milestone in engineering.
The Radeon VII has none of the RTX or tensor cores of the competition, uses markedly more power *and* is built with a half node process advantage and still, inexplicably, is slower than their direct competitor?
I've gone back and looked, I can't find another example that's close to this.
Either TSMC has *massive* problems with 7 nm or AMD has redefined terrible engineering in this segment. One of those, at least, has to be at play here.
The RTX and Tensor die area may help with power dissipation when it's shut down, in terms of hot spot reduction for instance. Vega 20 is only 331 mm2. However, it does seem clear enough that Fiji/Vega is only to be considered a gaming-centric architecture in the context of developers creating engines that take advantage of it, à la DOOM.
Since developers don't have an incentive to do that (even DOOM's engine is apparently a one-off), here we are with what looks like a card designed for compute and given to gamers as an expensive and excessively loud afterthought.
There is also the issue of blasting clocks to compensate for the small die. Rip out all of the irrelevant bits and add more gaming hardware. Drop the VRAM to 8 GB. Make a few small tweaks to improve efficiency rather than just shrink Vega. With those things done I wonder how much better the efficiency/performance would be.
BenSkywalker, the short answer is that this is based on a dated architecture (2 generations behind Turing), so there is no real way it's going to beat it in efficiency; it doesn't even try to compete with the 2080 Ti.
But the fact that a GCN/Vega-based card can nearly tie a 2080 is commendable. I think the problem this card has is that it's $100 too expensive.
If we were comparing ray traced performance that would be a valid point, but we are talking about traditional rendering. They have a half node process advantage and are using more power than a 2080 by a comfortable amount.
Try finding another chip, CPU or gpu that was built with a half node advantage, used more power *and* was slower.
Either TSMC is having major problems with 7nm or AMD set a new standard for poor engineering in this segment.
It is a shame the infinity fabric is disabled, because crossfire would actually give these cards a reason to use ALL of that bandwidth and capacity - at least on one card. Is there a way to enable this or is it a hardware limitation?
Nate or Ian, can AMD choose to enable pci-express 4.0 on this card when Ryzen/TR4 3000 is released? Also can crossfire be implemented by popular gamer demand?
"Though AMD hasn’t made a big deal of it up to now, Vega 20 is actually their first PCI-Express 4.0-capable GPU, and this functionality is enabled on the Radeon Instinct cards. However for Radeon VII, this isn’t being enabled, and the card is being limited to PCIe 3.0 speeds"
Oh God, how I hate marketoids! Morons who cannot get an A even in primary school math are hired into marketing departments, and ruin EVERYTHING.
i4mt3hwin - Thursday, February 7, 2019 - link
So FP64 is 1:4 and not 1:8 or 1:2 as previously known?
tipoo - Thursday, February 7, 2019 - link
Yep, looks like they changed the cap in vBIOS based on feedback. Which also means they could have uncapped it, but it's still cool that they did that.
Ganimoth - Thursday, February 7, 2019 - link
Does that mean it could be potentially unlocked by some bios mod?
tipoo - Friday, February 8, 2019 - link
I hope so!
Hul8 - Thursday, February 7, 2019 - link
I don't think it was ever reported or assumed to be 1/2 - that best possible ratio is only for the pro MI50 part. Early reports said 1/16.
Ryan Smith - Thursday, February 7, 2019 - link
For what it's worth, when we first asked AMD about it back at CES, FP64 performance wasn't among the features they were even throttling/holding back on. So for a time, 1/2 was on the table.
GreenReaper - Thursday, February 7, 2019 - link
So it was *your* fault! ;-p
BigMamaInHouse - Thursday, February 7, 2019 - link
Asrock just posted vBios: is this with the FP 1:4 or newer?
https://www.asrock.com/Graphics-Card/AMD/Phantom%2...
Ryan Smith - Thursday, February 7, 2019 - link
We're not currently aware of any Radeon VII cards shipping with anything other than 1/4 rate FP64.
BigMamaInHouse - Friday, February 8, 2019 - link
So maybe it's a new bios with some fixes? Did you try it, since all cards are the same reference design?
peevee - Tuesday, February 12, 2019 - link
"that the card operates at a less-than-native FP64 rate"The chip is capapble of 2 times higher f64 performance. Marketoids must die.
FreckledTrout - Thursday, February 7, 2019 - link
Performance wise it did better than I expected. This card is pretty loud and runs a bit hot for my tastes. Nice review. Where are the 8K and 16K tests :)
IGTrading - Thursday, February 7, 2019 - link
When drivers mature, AMD Radeon VII will beat the GF 2080. Just like Radeon Furry X beats the GF 980 and Radeon Vega 64 beats the GF 1080.
When drivers mature and nVIDIA's blatant sabotage against its older cards (and AMD's cards) gets mitigated, the long time owner of the card will enjoy better performance.
Unfortunately, on the power side, nVIDIA still has the edge, but I'm confident that those 16 GB of VRAM will really show their worth in the following year.
cfenton - Thursday, February 7, 2019 - link
I'd rather have a card that performs better today than one that might perform better in two or three years. By that point, I'll already be looking at new cards.
This card is very impressive for anyone who needs FP64 compute and lots of VRAM, but it's a tough sell if you primarily want it for games.
Benjiwenji - Thursday, February 7, 2019 - link
AMD cards have traditionally aged much better than Nvidia's. GamerNexus just re-benchmarked the 290X from 2013 on modern games and found it comparable to the 980, 1060, and 580. The GTX 980 came late 2014 with a $550 USD tag, and now struggles at 1440p.
Not to mention that you can get a lot out of AMD cards if you're willing to tinker. My 56, which I got from Microcenter in Nov 2017 for $330 (total steal), now performs at 1080 level after a BIOS flash + OC.
eddman - Friday, February 8, 2019 - link
What are you talking about? GTX 980 still performs as it should at 1440.
https://www.anandtech.com/bench/product/2142?vs=22...
Icehawk - Friday, February 8, 2019 - link
My 970 does just fine too, I can play 1440p maxed or near maxed in everything - 4k in older/simpler games too (ie, Overwatch). I was planning on a new card this gen for 4k but pricing is just too high for the gains, going to hold off one more round...
Gastec - Tuesday, February 12, 2019 - link
That's because, as the legend has it, Nvidia is or was in the past gimping their older generation cards via drivers.
kostaaspyrkas - Sunday, February 10, 2019 - link
at the same frame rates, nvidia gameplay gives me a sense of choppiness... amd radeon, more fluid gameplay...
yasamoka - Thursday, February 7, 2019 - link
This wishful in-denial conjecture needs to stop.1) AMD Radeon VII is based on the Vega architecture which has been on the platform since June 2017. It's been about 17 months. The drivers had more than enough time to mature. It's obvious that in certain cases there are clear bottlenecks (e.g. GTA V), but this seems to be the fundamental nature of AMD's drivers when it comes to DX11 performance in some games that perform a lot of draw calls. Holding out for improvements here isn't going to please you much.
2) The Radeon Fury X was meant to go against the GTX 980Ti, not the GTX 980. The Fury, being slightly under the Fury X, would easily cover the GTX 980 performance bracket. The Fury X still doesn't beat the GTX 980Ti, particularly due to its limited VRAM where it even falls back in performance compared to the RX480 8GB and its siblings (RX580, RX590).
3) There is no evidence of Nvidia's sabotage against any of its older cards when it comes to performance, and frankly your dig against GameWorks "sabotaging" AMD's cards performance is laughable when the same features, when enabled, also kill performance on Nvidia's own cards. PhysX has been open-source for 3 years and has now moved on to its 4th iteration, being used almost universally now in game engines. How's that for vendor lockdown?
4) 16GB of VRAM will not even begin to show their worth in the next year. Wishful thinking, or more like licking up all the bad decisions AMD tends to make when it comes to product differentiation between their compute and gaming cards. It's baffling at this point that they still didn't learn to diverge their product lines and establish separate architectures in order to optimize power draw and bill of materials on the gaming card by reducing architectural features that are unneeded for gaming. 16GB are unneeded, 1TB/s of bandwidth is unneeded, HBM is expensive and unneeded. The RTX 2080 is averaging higher scores with half the bandwidth, half the VRAM capabity, and GDDR6.
The money is in the gaming market and the professional market. The prosumer market is a sliver in comparison. Look at what Nvidia do, they release a mere handful of mascots every generation, all similar to one another (the Titan series), to take care of that sliver. You'd think they'd have a bigger portfolio if it were such a lucrative market? Meanwhile, on the gaming end, entire lineups. On the professional end, entire lineups (Quadro, Tesla).
Get real.
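As a side note, the "half the bandwidth, half the VRAM" comparison checks out arithmetically if you plug in the commonly quoted memory configurations (assumed here: 4096-bit HBM2 at 2.0 Gbps/pin for the Radeon VII, 256-bit GDDR6 at 14 Gbps for the RTX 2080):

```python
# Peak memory bandwidth from bus width and per-pin data rate (assumed configurations).
def peak_bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8   # GB/s

radeon_vii = peak_bandwidth_gb_s(4096, 2.0)    # HBM2  -> ~1024 GB/s
rtx_2080   = peak_bandwidth_gb_s(256, 14.0)    # GDDR6 -> ~448 GB/s

print(f"Radeon VII: {radeon_vii:.0f} GB/s, 16 GB")
print(f"RTX 2080:   {rtx_2080:.0f} GB/s, 8 GB")
print(f"Bandwidth ratio: {radeon_vii / rtx_2080:.2f}x")
```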
eva02langley - Thursday, February 7, 2019 - link
I think you are the one in denial over this.
This is a Radeon Instinct M150. This is a compute card that was never intended to be a gaming card. The biggest integration work AMD had to do was drivers. Drivers will indeed be better in the next 3 months.
yasamoka - Thursday, February 7, 2019 - link
So please explain why AMD is designating this as a gaming card. I explained this in my previous post. Their lack of product differentiation is exhausting them, and the apologetic acrobatics pulled by their die-hard fanboys are appallingly misleading. This is the same Vega architecture. Why isn't AMD releasing their cards with the drivers optimized beforehand? They have been doing this since the 7970. Remember that card getting beat by the GTX 680 just because they had unoptimized drivers? It took AMD almost a year to release drivers that thoroughly bested that series. CrossFire performance on these was also way ahead of SLI. It took them around another year to solve microstuttering. I had 2x 7970s back then. These delays need to stop; they're literally murdering AMD's product launches.
Bp_968 - Thursday, February 7, 2019 - link
Nvidia is also guilty of the same thing. The entire Turing lineup is a die designed and R&D'd for enterprise customers. The "flagship features" they keep raving about for Turing are slapped-together ways to use all the ASIC cores on dies designed for AI and content creation.
It's why when you compare a 1080 to a 2070, or a 1080ti to a 2080 (the same price bracket), you get almost zero rasterization improvements. It's a huge and expensive die reused from the enterprise department because no one else has anything competitive in the same space.
Nvidia is likely holding back their actual 12nm/7nm gamer design for 2020 out of concern for what Intel might have and possible concern over Navi. I also think Nvidia vastly underestimated how poorly the repackaged cards would sell. I expect Turing to be a very short generation with the next series being announced in late 2019 early 2020 (depending on what intel and AMD end up fielding).
Alistair - Thursday, February 7, 2019 - link
exactly!
ToyzRUsKid - Friday, February 8, 2019 - link
The updated NVENC encoder chip may become a major selling point for the RTX cards for streamers/content creators. I'm actually disappointed Nvidia is not emphasizing this feature more. Once OBS Studio releases their new build that will further increase NVENC encoding efficiency, it will create an even more compelling argument to switch to NVENC.
I have been testing the new encoder and it's rivaling and beating medium-preset x264 at 1080p60 using an 8k bitrate. Single-PC streamers will see stream quality improvements along with massive CPU resource savings. I'm of the opinion the dark horse selling point of these cards will be the new NVENC encoder. It appears the Turing generation is more of an advancement for content creators than the average gamer. Ray tracing is superfluous at this point for sure.
I'm ok with this. I still run a 1080Ti in my gaming rig and I'm comfortable waiting another generation. But the RTX 2070 in my streaming rig is delivering the best quality stream to date. That is comparing against x264 medium running on an i9 9900k@5GHz. This flies in the face of conventional wisdom and people with more credibility than me will need to help change the winds here. But this is my anecdotal experience.
rahvin - Friday, February 8, 2019 - link
If, and I mean IF, nvidia is holding back, it's a purely financial move due to the huge overstock of GPUs caused by cryptocurrencies. Both AMD and nvidia massively underestimated how much demand crypto was creating. (IIRC AMD said during the earnings call, now that crypto has dropped off, that monthly GPU sales are less than half what they were.) Supposedly there are more than 100K nvidia cards (again, the stuff I saw said that was somewhere between 3-6 months of normal gamer sales) sitting out there on store shelves rotting because of it, and it's so bad nvidia is having to take stock back from retailers that want the shelf space freed up.
That's prime incentive to sit on the designs until the existing stock is used up. For both nvidia and AMD. Sure, they might push out some high end, high price product, but they aren't doing anything in the middle of the market until that stock is cleared out.
Korguz - Thursday, February 7, 2019 - link
yasamoka...at the least.. this could force, what seems to be your saint nvida.. to drop the price of their cards.. as it stands before today.. ALL of their 20 series cards.. are out of the price range i would pay for a video card, or are pushing it/hard to justify the cost over my current 1060, as the 1070/80 were way out of my price range..
D. Lister - Thursday, February 7, 2019 - link
@Korguz
The RVII performs 5%-6% below the 2080 (as per this review) and yet is priced the same. How is that going to force Nvidia to cut prices?
Korguz - Friday, February 8, 2019 - link
well.. where i am at least.. the radeon 7 starts at 949 ( preorder ), only 2 cards listed, Asus and XFX. the RTX 2080 ( which the radeon 7 is aimed at ) starts at 1130, almost 200 more... IMO.. 200 is not worth the premium for a 5-6% faster card.. the top of the line 2080 is the GeForce RTX 2080 SEA HAWK X which is priced at $1275... for my cash... i'd be looking at the radeon 7.. and saving 200+ bucks to use somewhere else in my comp...
TheinsanegamerN - Thursday, February 7, 2019 - link
The Instinct M150 is a Vega class card; the architecture is incredibly similar to Vega 56/64. There are not massive gains to be made here. If there are, then AMD must be completely incompetent at driver management.
tipoo - Sunday, February 10, 2019 - link
It's MI50.
vanilla_gorilla - Thursday, February 7, 2019 - link
As a linux prosumer user who does light gaming, this card is a slam dunk for me.
LogitechFan - Friday, February 8, 2019 - link
and a noisy one at that
BaneSilvermoon - Thursday, February 7, 2019 - link
Meh, I went looking for a 16GB card about a week before they announced Radeon VII because gaming was using up all 8GB of VRAM and 14GB of system RAM. This card is a no brainer upgrade from my Vega 64.
LogitechFan - Friday, February 8, 2019 - link
lemme guess, you're playing sandstorm?
Gastec - Tuesday, February 12, 2019 - link
I was beginning to think that the "money" was in cryptocurrency mining with video cards but I guess after the €1500+ RTX 2080Ti I should reconsider :)
eddman - Thursday, February 7, 2019 - link
Perhaps, but Turing is also a new architecture, so it's probable it'd get better with newer drivers too. Maxwell is from 2014 and still performs as it should.
As for GPU-accelerated gameworks, obviously nvidia is optimizing it for their own cards only, but that doesn't mean they actively modify the code to make it perform worse on AMD cards; not to mention it would be illegal. (GPU-only gameworks effects can be disabled in game options if need be)
Many (most?) games just utilize the CPU-only gameworks modules; no performance difference between cards.
ccfly - Tuesday, February 12, 2019 - link
you joking right? The 1st game where they did just that is Crysis (they hid models under water so ATI cards would render those too
and be slower)
and after that they cheated full time ...
eddman - Tuesday, February 12, 2019 - link
No, I'm not. There was no proof of misconduct in Crysis 2's case, just baseless rumors.
For all we know, it was an oversight on Crytek's part. Also, DX11 was an optional feature, meaning it wasn't part of the game's main code, as I've stated.
eddman - Tuesday, February 12, 2019 - link
... I mean an optional toggle for Crysis 2. The game could be run in DX9 mode.
eddman - Tuesday, February 12, 2019 - link
Just to clarify my comment; there was no proof that nvidia deliberately implemented the tessellation feature badly to cripple AMD.just4U - Thursday, February 7, 2019 - link
At the moment I am not concerned about the drivers. This card comes in at pretty impressive numbers.. looks to be slightly better than the 1080ti but with 16G of mem.. and not cheap mem either, so it will be useful in a few years (likely). I want one!!
cmdrdredd - Thursday, February 7, 2019 - link
Slightly better than a 1080ti which is what, 3 years old now? Not impressed
just4U - Thursday, February 7, 2019 - link
Which is pretty much the state of affairs regardless, is it not cmd? Are you majorly impressed with the 2080ti??? It's only marginally faster than the 3 year old 1080ti as well.
I own 1080s and vega56s. Those vega56s would be a huge upgrade if I went to the new Vega. The 1080s? Meh.. yeah a little .. not much.. not worth the upgrade.
LogitechFan - Friday, February 8, 2019 - link
it's 30% on average. if this is only marginally better for you, then of course you deserve an amd card :D
eddman - Friday, February 8, 2019 - link
... for a 43% higher launch MSRP, or if we compare it to the currently cheapest 2080 Ti at $1150, 64%. This is one of the worst generational launches so far, where price/performance actually went DOWN.
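Those percentages are easy to verify against the launch MSRPs (assumed here: $699 for the GTX 1080 Ti, $999 for the RTX 2080 Ti) and the $1150 street price mentioned above; a quick sanity check:

```python
# Price premium and rough price/performance check.
# MSRPs and the ~30% performance uplift are assumptions taken from the comments above.
gtx_1080ti = 699
rtx_2080ti_msrp = 999
rtx_2080ti_street = 1150

print(f"MSRP premium:   {rtx_2080ti_msrp / gtx_1080ti - 1:.0%}")    # ~43%
print(f"Street premium: {rtx_2080ti_street / gtx_1080ti - 1:.0%}")  # ~65%

perf_gain = 0.30   # ~30% average uplift, per the comment above
print(f"Perf/$ vs 1080 Ti at MSRP: {(1 + perf_gain) / (rtx_2080ti_msrp / gtx_1080ti):.2f}x")
# Below 1.0x means price/performance got worse generation-on-generation.
```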
Gastec - Tuesday, February 12, 2019 - link
Time is relative. What if Nvidia and everybody else chose to release a new generation every 5 years? Most so-called gamers in the world don't even have the "old" GTX 1080Ti.
kostaaspyrkas - Sunday, February 10, 2019 - link
you are totally right...i wonder why no reviewer ever says that... it's been proven many times that all radeon cards, 6 months after release, always take the lead from their nvidia competitors...nvidia leads in older games...amd future proof...i don't buy a 700 euro card for 1 year..i keep it 3 plus years at least
Gastec - Tuesday, February 12, 2019 - link
That's the difference between you and these tech reviewers and their accompanying "unbiased" trol...I mean bragge...commentators, COMMENTATORS! They "upgrade" to the latest and greatest each new generation :)
boozed - Thursday, February 7, 2019 - link
I was underwhelmed at its launch because it seemed like just a speed bump; on paper, it didn't seem that impressive. I am now suitably whelmed.
Kevin G - Thursday, February 7, 2019 - link
Not a bad showing by AMD, but this card isn't the victory that they needed either. The gaming side is OK and lines up with the GTX 1080 Ti and RTX 2080 fairly well. On the compute side it is actually very good with the extra memory capacity and more bandwidth. I have a feeling that this card should have shipped with 128 ROPs, which would have given it an edge at higher resolutions.
I'm also curious as to how this card would fare at even higher resolutions like 5K and 8K. The memory bandwidth is there to humor that idea, and it might be feasible to get playable frame rates in specific modern games. It'd also be interesting to see how it'd fare with some older, less demanding titles at these resolutions too.
Holliday75 - Friday, February 8, 2019 - link
This card feels like it's meant to fill the gap and not allow Nvidia to be the only player in the game for an extended period of time. This buys them time for their next architecture release.
Can you please retest running the Radeon VII (an AMD part) on a Ryzen II with X470 with 16 gigs of RAM? You always run AMD parts on a non-AMD processor. Please retest and post results!
mkaibear - Thursday, February 7, 2019 - link
The point of comparative benchmarking is to change just one thing so you can see the impact of the thing you're changing.
brokerdavelhr - Thursday, February 7, 2019 - link
SO why do they only test on Intel machines? Why not run the same tests on a Ryzen/Nvidia and Ryzen/Radeon combo? My point is that it simply never happens. Put aside the fact that Radeon always fares better on an AMD machine, it just seems odd is all. For the longest time, nearly every Intel machine ran Nvidia graphics. You are more likely to find a Radeon in an AMD machine than you will an Intel one. See my point?
DominionSeraph - Thursday, February 7, 2019 - link
Even AMD benches their video cards on Intel processors. Intel is just faster.
brokerdavelhr - Thursday, February 7, 2019 - link
What link is that DS - and if you ask me to google it, I will not take anything you say seriously. Or are you deliberately trolling? I know they do a side by side with intel processors with their own to show the diff, but that's all. What is the link to the tests you are referring to? Either way - it is unbiased as they bench with both. Not so here, which was my point.
Klimax - Friday, February 8, 2019 - link
So can you post AMD's PR results that use AMD CPUs?
krazyfrog - Sunday, February 10, 2019 - link
From AMD's Radeon VII page:
"Testing done by AMD performance labs 1/21/19 on an Intel Core i7 7700k, 16GB DDR4 3000MHz, Radeon VII, Radeon RX Vega 64, AMD Driver 18.50 and Windows 10. Using Resident Evil 2 @ 3840x2160, Max settings, DirectX® 11: Radeon VII averaged 53 fps. Radeon RX Vega 64 averaged 41 fps. PC manufacturers may vary configurations yielding different results. All scores are an average of 3 runs with the same settings. Performance may vary based on use of latest drivers. RX-291"
mkaibear - Thursday, February 7, 2019 - link
Because they are for the most part running gaming tests, and if you want to remove CPU bottlenecks you pick the CPU which you have that's fastest in games. Which is Intel.
If you pick anything else then you are artificially constraining performance which tends to show a regression to the mean - in other words it'll make the difference between AMD and nVidia smaller (whichever one wins)
Equally the fact that AMD works best with AMD means they absolutely should *not* put an AMD processor in the system - that way they are artificially boosting system performance and skewing their benchmarks.
You really need to do some reading on how you do a/b testing. Wikipedia has a good article.
mapesdhs - Friday, February 8, 2019 - link
It's going to be hilariously funny if the Ryzen 3000 series reverses this accepted norm. :)
mkaibear - Saturday, February 9, 2019 - link
I'd not be surprised - given anandtech's love for AMD (take a look at the "best gaming CPUs" article released today...)
Not really "hilariously funny", though. More "logical and methodical".
thesavvymage - Thursday, February 7, 2019 - link
It's not like it'll perform any better though... Intel still has generally better gaming performance. There's no reason to artificially hamstring the card, as it introduces a CPU bottleneck.
brokerdavelhr - Thursday, February 7, 2019 - link
Once again - in gaming for the most part....try again with other apps and there is a marked difference. Many of which are in AMD's favor. try again.....
jordanclock - Thursday, February 7, 2019 - link
In every scenario that is worth testing a VIDEO CARD, Intel CPUs offer the best performance.
ballsystemlord - Thursday, February 7, 2019 - link
Their choice of processor is kind of strange. An 8-core Intel on *plain* 14nm, now 2! years old, with rather low clocks at 4.3GHz, is not ideal for a gaming setup. I would have used a 9900K or 2700X personally[1].
For a content creator I'd be using a Threadripper or similar.
Re-testing would be an undertaking for AT though. Probably too much to ask. Maybe next time they'll choose some saner processor.
[1] The 9900K is 4.7GHz all cores. The 2700X runs at 4.0GHz turbo, so you'd lose frequency, but then you could use faster RAM.
For citations see:
https://www.intel.com/content/www/us/en/products/p...
https://images.anandtech.com/doci/12625/2nd%20Gen%...
https://images.anandtech.com/doci/13400/9thGenTurb...
ToTTenTranz - Thursday, February 7, 2019 - link
Page 3 table:
- The MI50 uses a Vega 20, not a Vega 10.
Ryan Smith - Thursday, February 7, 2019 - link
Thanks!
FreckledTrout - Thursday, February 7, 2019 - link
I wonder why this card absolutely dominates in the "LuxMark 3.1 - LuxBall and Hotel" HDR test? It's pulling in numbers 1.7x higher than the RTX 2080 on that test. That's a funky outlier.
Targon - Thursday, February 7, 2019 - link
How much video memory is used? That is the key. Since many games and benchmarks are set up to test with a fairly low amount of video memory being needed (so those 3GB 1050 cards can run the test), what happens when you try to load 10-15GB into video memory for rendering? Cards with 8GB and under (the majority) will suddenly look a lot slower in comparison.
Dr. Swag - Thursday, February 7, 2019 - link
If I had to guess, those tests probably are more dependent on memory capacity and/or memory bandwidth.
Klimax - Friday, February 8, 2019 - link
It could still be a difference between AMD's and Nvidia's OpenCL drivers. Nvidia only fairly recently started to focus on them. (Quite a few 2.0 features are still listed as experimental.)
tipoo - Thursday, February 7, 2019 - link
That they changed the FP64 rate cap entirely in BIOS makes me wonder, should the iMac Pro be updated with something like this (as Navi is supposed to be launching with the mid range first), whether it would have the double precision rate cap at all, as Apple would be co-writing the drivers and all.
tvdang7 - Thursday, February 7, 2019 - link
I feel AT needs to update the game list. I understand that these are probably easier to bench and are demanding, but most of us are curious how it performs in games we actually play. Let's be real, how many of you or your friends play these games on the daily? BF1 and MAYBE GTA are popular, but not in the grand scheme of things.
Manch - Thursday, February 7, 2019 - link
7 DX 11
1 DX 12
1 Vulkan
Need a better spread of the APIs, and denote which games are engineered specifically for AMD or Nvidia or neither. I think that would be helpful when deciding which card should be in your rig.
TheinsanegamerN - Thursday, February 7, 2019 - link
Perhaps tell game developers to get with the times then? You can't test what isn't there, and the vast majority of games with repeatable benchmarks are DX11 titles. That is not Anandtech's fault.
Manch - Friday, February 8, 2019 - link
Didn't say it was. Merely a suggestion/request. There are around 30 games that have released with DX 12 support and about a dozen with Vulkan. Some of the DX 11 titles tested for this review offer DX 12 & Vulkan support. They exist and can be tested. If there is a reason to NOT test a DX version or Vulkan version, for example RE2's broken DX12 implementation, OK, fair enough. I think it would offer a better picture of how each card performs overall.
Manch - Friday, February 8, 2019 - link
        DX11    DX12    Vulkan
BF1     Tested  Yes     No
FC5     Tested  No      No
AotS    Yes     Tested  Yes
Wolf    Yes     Yes     Tested
FF      Tested  Maybe?  No
GTA     Tested  No      No
SoW     Tested  No      No
F1      Tested  No      No
TW      Tested  Yes     No
4 of the games tested with DX11 have DX 12 implementations and AotS has a Vulkan implementation. If the implementation is problematic, fair enough. Put a footnote or a ** but there are games with DX 12 and Vulkan out there on current engines, so it can be done.
Ryan, perhaps an article on the games, the engines, their API implementations and how/why you choose to use/not use them in testing? Think it would be a good read.
Manch - Friday, February 8, 2019 - link
Sorry about the format, didn't realize it would do that to it.
eddman - Friday, February 8, 2019 - link
"about a dozen with Vulkan"
What are these dozen games? Last time I checked there were only three or four modern games suitable for vulkan benchmarking: Wolfenstein 2, Doom, Strange Brigade and perhaps AotS.
IMO Wolfenstein 2 is enough to represent vulkan.
"Wolf Yes Yes Tested"
Wolfenstein 2 is vulkan only; no DX12.
As for DX12, yes, I too think they could add more.
Manch - Monday, February 11, 2019 - link
My bad on Wolf. I thought it was. It's on XB1, which is DX12, and DX12 support was confirmed by a few places, so I didn't check further.
As for Vulkan games, off the top of my head (what's in my library): TWS:ToB, TW: Warhammer II (should have been in my table..oops), Warhammer 40K DoW III, the Serious Sam VR games, X-Plane. I'm sure there are others. Easy to look up.
IMO FPS games should not be the definitive test for all APIs. Variety is always nice.
Cherry pick my mistakes, but my point stands. I get that the test bed needs to be locked down so consistent results can be achieved. Anandtech needs to be able to give specific, measurable and repeatable results, and they do that. I was merely expressing my desire to see a more balanced test suite in regards to APIs & games that are designed for NVidia or AMD GPUs.
eddman - Tuesday, February 12, 2019 - link
Are you basing that on personal experience, or simply getting the info from vulkan's wikipedia page without checking the platform column?
TWS:ToB, TW: Warhammer II and Warhammer 40K DoW III use vulkan only on linux.
Despite the vulkan addition, the Serious Sam games are old, non-demanding, and not suitable for benchmarking.
X-plane does not support vulkan yet; it's a work-in-progress. Still, even if it does add it eventually, it too is not suitable for benchmarking.
Ryan Smith - Thursday, February 7, 2019 - link
The benchmark suite gets updated on a roughly yearly basis. It was last updated for the Turing launch, so we're only about 5 months into it. As part of ensuring we cover a reasonable selection of genres, these were the best games available in the fall of 2018.
The next time we update it will presumably be for AMD's Navi launch, assuming that still happens in 2019. Though it's never too early to suggest what games you'd like to see.
eva02langley - Thursday, February 7, 2019 - link
Devil May Cry, Resident Evil, Anthem, Metro Exodus, The Division 2, Rage 2, Mortal Kombat 11
krazyfrog - Sunday, February 10, 2019 - link
Half-Life 3
SeannyB - Thursday, February 7, 2019 - link
I would like to see a title from each of the general purpose engines, namely UE4 and Unity.
Korguz - Thursday, February 7, 2019 - link
maybe i am the only one here.. but the games AT tests... i don't play ANY of them :-)
Ryan Smith - Thursday, February 7, 2019 - link
Out of curiosity, what do you play?
Korguz - Friday, February 8, 2019 - link
WoW, Starcraft 2, Diablo 3 and some older games... games that don't really " need " a card like this.. my current asus strix 1060 plays these just fine at almost max eye candy... the only game i can think of that i have and play that might need this card.. is Supreme Commander, but i'm not sure if that game needs a strong cpu, or gpu, maybe a bit of both...
Holliday75 - Friday, February 8, 2019 - link
Love me some Supreme Commander. Solid followup to Total Annihilation. As to its performance, I think it's more CPU-based, and quite frankly the engine is not optimized for modern hardware.
KateH - Friday, February 8, 2019 - link
thirded on still enjoying SupCom! i have however long ago given up on attempting to find the ultimate system to run it. i7 920 @ 4.2GHz, nope. FX-8150 @ 4.5GHz, nope. The engine still demands more CPU for late-game AI swarms! (and i like playing on 81x81 maps which makes it much worse)
Korguz - Friday, February 8, 2019 - link
Holliday75 and KateH
i've run supcom on an i7 930 OC'd to 4.2 and a 7970, slow as molasses late in the game VS the AI, and on my current i7 5930k and strix 1060.. same thing.. very slow late in the game.... the later patches supposedly helped the game use more than 1 or 2 cores, i think Gas Powered Games called it " multi core aware "
makes me wonder how it would run on something newer like a threadripper, top end Ryzen or top end i7 and an i9 with a 1080+ vid card though, compared to my current comp....
eva02langley - Friday, February 8, 2019 - link
Metal Gear Solid V, Street Fighter 5, Soulcalibur 6, Tekken 7, Senua Sacrifice...Basically, nothing from EA or Ubisoft or Activision or Epic.
ballsystemlord - Thursday, February 7, 2019 - link
Oh oh! Would you be willing to post some FLOSS benchmarks? Xonotic, 0AD, Openclonk and Supertuxkart?
Manch - Friday, February 8, 2019 - link
I would like to see a mixture of games that are dedicated to a singular API, and ones that support all three or at least two of them. I think that would make for a good spread.
Manch - Thursday, February 7, 2019 - link
Not sure that I expected more. The clock for clock against the V64 is telling. @$400 for the V64 vs $700 for the VII, ummm....if you need a compute card as well sure, otherwise, Nvidia got the juice you want at better temps for the same price. Not a bad card, but it's not a great card either. I think a full 64CU's may have improved things a bit more and even put it over the top.Could you do a clock for clock compare against the 56 since they have the same CU count?? I'd be curious to see this and extrapolate what a VII with 64CU's would perform like just for shits and giggles.
mapesdhs - Friday, February 8, 2019 - link
Are you really suggesting that, given two products which are basically the same, you automatically default to NVIDIA because of temperatures?? This really is the NPC mindset at work. At least AMD isn't ripping you off with the price, Radeon VII is expensive to make, whereas NVIDIA's margin is enormous. Remember the 2080 is what should actually have been the 2070, the entire stack is a level higher than it should be, confirmed by die code numbers and the ludicrous fact that the 2070 doesn't support SLI.Otoh, Radeon II is generally too expensive anyway; I get why AMD have done it, but really it's not the right way to tackle this market. They need to hit the midrange first and spread outwards. Stay out of it for a while, come back with a real hammer blow like they did with CPUs.
Manch - Friday, February 8, 2019 - link
Well, they're not basically the same. Who's the NPC LOL? I have a V64 in my gaming rig. It's loud but I do like it for the price. The 2080 is a bit faster than the VII for the same price. It does run cooler and quieter. For some that is more important. If games are all you care about, get it. If you need compute, live with the noise and get the VII.
I don't care how expensive it is to make. If AMD could put out a card at this level of performance they would, and they would sell it at this price.
Barely anyone uses SLI/Crossfire. It's not worth it. I previously had two 290X 8GB cards in Crossfire. I needed a better card for VR; the V64 was the answer. It's louder but it was far cheaper than competitors. The game bundle helped. Before that, I had bought a 1070 for the wife's computer. It was a good deal at the time. Some of y'all get too attached to your brands and get all frenzied at any criticism. I buy what suits my needs at the best price/perf.
AdhesiveTeflon - Friday, February 8, 2019 - link
Not our fault AMD decided to make a video card with more expensive components and not beat the competition.
29a - Thursday, February 7, 2019 - link
As usual in these garbage articles the prices given are nowhere near reality. The Vega 64 is $100 cheaper than what is listed.
RSAUser - Thursday, February 7, 2019 - link
Anandtech doesn't ever seem to update reviews or prices.
They'll compare a device from their history even if there have been months of driver updates that fixed performance issues, so they'll be using non-current info and everyone will assume it's current.
Ryan Smith - Thursday, February 7, 2019 - link
"Anandtech doesn't ever seem to update reviews or prices."On the contrary, quite a bit was updated for this review. Though as driver performance has been rather stable as of late, performance hasn't exactly gone anywhere for most cards on most games.
If you see anything that seems wrong, please let us know. But we go out of our way to try to avoid using any card/driver combinations that result in performance issues.
Korguz - Thursday, February 7, 2019 - link
29a
if you think AT does nothing but garbage articles.. then, let's see YOU do better...
as for prices.. meh.. that's something hard to account for as there are things called exchange rates, and other variables that no one can predict.....
Phil85 - Thursday, February 7, 2019 - link
So when will prices of GPUs decrease? Is this the new normal?
eva02langley - Thursday, February 7, 2019 - link
Navi should bring value back to the mid-range.
It is still a nice card for professional/compute/rendering. But for gaming, the price is maybe $50 too expensive, and AMD really needs to get some better quality fans.
TEAMSWITCHER - Thursday, February 7, 2019 - link
If Navi is missing next generation features like ray tracing and tensor cores, there will be ZERO value to it.
eva02langley - Thursday, February 7, 2019 - link
AHAHAHAHA... Ray Tracing... you know the real problem of Ray Tracing? It was never on the table until Jensen brainwashed shills into thinking it was important. By defending it, you obviously prove that you have no critical judgement.
By the way, the problem with RT/DLSS is that it will never be implemented because AMD owns consoles, and devs develop on consoles. There is no monetary benefit to implementing gimmick proprietary gameworks features for 1% of the PC user base, unless Nvidia is paying you to do so.
It will never be a thing for the upcoming console generation. See you in 7 years, when it might be remotely relevant to the industry. As of now, unless you are rendering a CGI movie, it is worthless.
Dribble - Thursday, February 7, 2019 - link
Both the next gen consoles are going to have ray tracing. Microsoft - who wrote and owns the spec for the DX12 ray tracing extension currently used by PCs, and hence a strong backer of ray tracing - will make one of them.
eva02langley - Thursday, February 7, 2019 - link
Not going to happen, because RTX is proprietary, it is a closed environment, and it requires hardware acceleration that AMD is not going to pursue in the short term. Nvidia shot themselves in the foot by pushing it. Open source is the only way a new standard can be adopted. The whole G-Sync fiasco should have been enough to prove it.
Hardware could still run it, but the impact on performance is just too significant. At that point, developers like Sony have incredible talent in creating new effects that look way more realistic.
Just look at The Last of Us Part 2 for a good example.
eva02langley - Thursday, February 7, 2019 - link
I should not say realistic, I should say credible.
webdoctors - Thursday, February 7, 2019 - link
Open source is NOT the only way a new standard can be adopted. Microsoft has been pushing DirectX 9/10/11, etc. and those are HUGELY popular standards. If MS is adopting it in their API, then yes, it'll show up in PC games.
Raytracing is not a gimmick; it's been around since before you were born or Nvidia was even founded. It hasn't been "feasible" for real-time and as such has been largely ignored in gaming. Many other technologies were not feasible until they were, and then got incorporated. Graphics is more than just getting 60FPS, otherwise everything would just be black and white without shadows. It's about realism, which means proper lighting, shadows, physics.
Ppl need to call out the price, if you're a regular joe who's just getting a card for gaming and not mining or business use, why would you buy this over the competition? They seriously need to drop the price by $100 or it'll be a tiny seller.
D. Lister - Friday, February 8, 2019 - link
RTX is just Nvidia's way of doing DXR, which is the IP of Microsoft. AMD has already announced specific development for it in the future, to be integrated in their GPUs. RT has been announced by both Sony and MS for their next consoles. Of course, because of their use of AMD GPUs, the application of RT would be of a lower quality compared to what RTX can do. It is very much like the current console implementation of anti-aliasing, HBAO or tessellation, where on consoles you get a very basic level of those features, but on decent PCs they can be cranked up much higher.
"The whole G-Sync fiasco should have been enough to prove it."
This is nothing like G-Sync. The problem with G-Sync is the extra cost. Now considering that the 2080 is the same price/performance as a Radeon VII, but has hardware DXR (RTX) as well, you're essentially getting the ray-tracing add-in for free.
Thirdly, while many things can be faked with rasterization to be within the approximation of ray-tracing, it requires far greater work (not to mention, artistic talent) to do it. In rasterization, a graphics designer has to first guess what a certain reflection or shadow would look like and then painstakingly make something that could pass off for the real thing. Raytracing takes that guesswork out of the task. All you, as a developer, would need to do is place a light or a reflective surface and RT would do the rest with mathematical accuracy, resulting in higher quality, a much faster/smoother development, fewer glitches, and a much smaller memory/storage footprint for the final product.
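To illustrate the "mathematical accuracy" point: the core operation a ray tracer performs is casting a ray and testing what it hits, so a shadow simply falls out of an occlusion test rather than being authored by hand. A purely illustrative sketch (plain Python with a made-up scene, not DXR/RTX API code) of a shadow test via ray-sphere intersection:

```python
# Illustrative shadow test: cast a ray from a surface point toward the light
# and check whether any sphere blocks it. The scene here is hypothetical.
import math

def ray_hits_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
    # 'direction' is assumed normalized, so the quadratic's 'a' term is 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None       # ignore hits at/behind the origin

def in_shadow(point, light, blockers):
    to_light = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    direction = [x / dist for x in to_light]
    for center, radius in blockers:
        t = ray_hits_sphere(point, direction, center, radius)
        if t is not None and t < dist:   # something sits between the point and the light
            return True
    return False

# A sphere of radius 0.5 at (0, 2, 0) sits between the point and the light: shadowed.
print(in_shadow((0.0, 0.0, 0.0), (0.0, 5.0, 0.0), [((0.0, 2.0, 0.0), 0.5)]))  # True
```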
D. Lister - Friday, February 8, 2019 - link
A helpful link:
https://blogs.msdn.microsoft.com/directx/2018/03/1...
Manch - Friday, February 8, 2019 - link
RTX is a proprietary implementation that is compatible with DirectX RT. AMD may eventually do DirectX RT, but it will be their own version. As far as consoles go, unless Navi has some kind of RT implementation, you're right, no RT of any significance. At best it will be a simple PC graphics option that works in a few titles, maybe like HairWorks lol.
eva02langley - Friday, February 8, 2019 - link
It is ... a GAMEWORKS feature... as of now. RTX/DLSS are nothing more than 2 new gameworks features... that will just break games, once again to cripple the competition.
The goal is not even to have RTX or DLSS, it is to force developers to use their proprietary tools to break game code and sabotage the competition, like The Witcher 3.
RTX is nothing good as of now. It is a tax, and it breaks performance. Let's talk about it when it can be implemented in real-time. Until then, let Nvidia feel the burden of it.
eddman - Friday, February 8, 2019 - link
I do agree that these RTX/DLSS features absolutely do not justify the current prices and that nvidia should've waited for 7nm to mature before adding them, but let's not get so emotional.
Gameworks are simply modules that can be added to a game and are not part of the main code. Also, its GPU based features can be disabled in options, as was the case in witcher 3.
TheinsanegamerN - Thursday, February 7, 2019 - link
And by flinging insults you have shown yourself to be an immature fanboi that is desperately trying to defend his favorite GPU company.
eva02langley - Friday, February 8, 2019 - link
I didn't insult anyone, I just spoke the truth about RTX. I am not defending AMD, I am condemning Nvidia. Little difference...
To defend RTX as it is today is to be colored green all over. There is no way to defend it.
ballsystemlord - Thursday, February 7, 2019 - link
I agree, Huang should have listened to himself when he said that ray tracing would have been a thing in 10 years (but he wanted to bring it to market now).
Remember when there were 2D and 3D accelerators?
I say we should be able to choose 3D or Ray-tracing accelerators.
Alistair - Thursday, February 7, 2019 - link
Because everyone is already playing Anthem at 4k 60fps with a $400 card? Ray tracing is totally useless and we need way more rasterization performance per dollar than we have right now. Give me a 7nm 2080 ti without the RT cores for $699 and then we'll talk.
eva02langley - Friday, February 8, 2019 - link
Fair, the main objective of a gaming GPU is shaders per $. Gameworks gimmicks are not something I call a selling factor... and Nvidia is forced to cook their books because of it.
RSAUser - Thursday, February 7, 2019 - link
Why are you adding the Final Fantasy benchmark when it has known bias issues?
Zizy - Thursday, February 7, 2019 - link
Eh, the 2080 is slightly better for games and costs the same, while unfortunately MATLAB supports just CUDA, so I can't even play with compute.
Hul8 - Thursday, February 7, 2019 - link
On page 19, the "Load GPU Temperature - FurMark" graph is duplicated.
Ryan Smith - Thursday, February 7, 2019 - link
Thanks. The FurMark power graph has been put back where it belongs.
schizoide - Thursday, February 7, 2019 - link
Man, I've never seen such a hostile response to an Anandtech article. People need to relax, it's just a videocard.
I don't see this as a win for AMD. Using HBM2, the card is expensive to produce, so they don't have a lot of freedom to discount it. Without a hefty discount, it's louder, hotter, and slower than a 2080 at the same price. And of course no ray-tracing, which may or may not matter, but I'd rather have it just in case.
For OpenCL work it's a very attractive option, but again, that's a loser for AMD because they ALREADY sold this card as a workstation product for a lot more money. Now it's discounted to compete with the 2080, meaning less revenue for AMD.
Even once the drivers are fixed, I don't see this going anywhere. It's another Vega64.
sing_electric - Thursday, February 7, 2019 - link
There's still a lot of people for whom a Radeon Instinct was just never going to happen, INCLUDING people who might have a workstation where they write code that will mostly run on servers, and it means you can run/test your code on your workstation with a fairly predictable mapping to final server performance.As Nate said in the review, it's also very attractive to academics, which benefits AMD in the long run if say, a bunch of professors and grad students learn to write ML/CL on Radeon before say, starting or joining companies.
schizoide - Thursday, February 7, 2019 - link
Yes, it's attractive to anyone who values OpenCL performance. They're getting workstation-class hardware on the cheap. But that does devalue AMD's workstation productline.Manch - Thursday, February 7, 2019 - link
Not really. The instinct cards are still more performant. They tend to be bought by businesses where time/perf is more important than price/perf.schizoide - Thursday, February 7, 2019 - link
Sure it does, at the bottom-end. It basically IS an instinct mi50 on the cheap.GreenReaper - Thursday, February 7, 2019 - link
Maybe they weren't selling so well so they decided to repurpose before Navi comes out and makes it largely redundant.schizoide - Thursday, February 7, 2019 - link
IMO, what happened is pretty simple. Nvidia's extremely high prices allowed AMD to compete with a workstation-class card. So they took a swing.eva02langley - Friday, February 8, 2019 - link
My take to. This card was never intended to be released. It just happened because the RTX 2080 is at 700+$.In Canada, the RVII is 40$ less than the cheapest 2080 RTX, making it the better deal.
Manch - Thursday, February 7, 2019 - link
It is but its slightly gimped perf wise to justify the price diff.sing_electric - Thursday, February 7, 2019 - link
Anyone else think that the Mac Pro is lurking behind the Radeon VII release? Apple traditionally does a March 2019 event where they launch new products, so the timing fits (especially since there's little reason to think the Pro would need to be launched in time for the Q4 holiday season).-If Navi is "gamer-focused" as Su has hinted, that may well mean GDDR6 (and rays?), so wouldn't be of much/any benefit to a "pro" workload
-This way Apple can release the Pro with the GPU as a known quantity (though it may well come in a "Pro" variant w/say, ECC and other features enabled)
-Maybe the timing was moved up, and separated from the Apple launch, in part to "strike back" at the 2080 and insert AMD into the GPU conversation more for 2019.
The timeline and available facts seem to fit pretty well here...
tipoo - Thursday, February 7, 2019 - link
I was thinking a better binned die like VII for the iMac Pro.Tbh the Mac Pro really needs to support CUDA/Nvidia if it's going to be a serious contendor for scientific compute.
sing_electric - Thursday, February 7, 2019 - link
I mean, sure? but I'm not sure WHAT market Apple is going after with the Mac Pro anyways... I mean, would YOU switch platforms (since anyone who seriously needs the performance necessary to justify the price tag in a compute-heavy workload has almost certainly moved on from their 2013 Mac Pro) with the risk that Apple might leave the Pro to languish again?There's certainly A market for it, I'm just not sure what the market is.
repoman27 - Thursday, February 7, 2019 - link
The Radeon VII does seem to be one piece of the puzzle, as far as the new Mac Pro goes. On the CPU side Apple still needs to wait for Cascade Lake Xeon W if they want to do anything more than release a modular iMac Pro though. I can't imagine Apple will ever release another dual-socket Mac, and I'd be very surprised if they switched to AMD Threadripper at this point. But even still, they would need XCC based Xeon W chips to beat the iMac Pro in terms of core count. Intel did release just such a thing with the Xeon W 3175X, but I'm seriously hoping for Cascade Lake over Skylake Refresh for the new Mac Pro. That would push the release timeline out to Q3 or Q4 though.The Radeon VII also appears to lack DisplayPort DSC, which means single cable 8K external displays would be a no-go. A new Mac Pro that could only support Thunderbolt 3 displays up to 5120 x 2880, 10 bits per color, at 60 Hz would almost seem like a bit of a letdown at this point. Apple is in a bit of an awkward position here anyway, as ICL-U will have integrated Thunderbolt 3 and an iGPU that supports DP 1.4a with HBR 3 and DSC when it arrives, also around the Q3 2019 timeframe. I'm not sure Intel even has any plans for discrete Thunderbolt controllers after Titan Ridge, but with no PCIe 4.0 on Cascade Lake, there's not much they can even do to improve on it anyway.
So maybe the new Mac Pro is a Q4 2019 product and will have Cascade Lake Xeon W and a more pro-oriented yet Navi-based GPU?
sing_electric - Thursday, February 7, 2019 - link
Possibly, but I'm not 100% sure that they need to be at the iMac Pro on core count to have a product. More RAM (with a lot of slots that a user can get to) and a socketed CPU with better thermals than you can get on the back of a display might do it. I'd tend to think that moving to Threadripper (or EPYC) is a pipe dream, partly because of Thunderbolt support (which I guess, now that it's open, Apple could THEORETICALLY add, but it just seems unlikely at this point, particularly since there'd be things where a Intel-based iMac Pro might beat a TR-based Mac Pro, and Apple doesn't generally like complexities like that).Also, I'd assumed that stuff like DSC support would be one of the changes between the consumer and Pro versions (and AMD's Radeon Pro WX 7100 already does DSC, so its not like they don't have the ability to add it to pro GPUs).
repoman27 - Thursday, February 7, 2019 - link
The Radeon Pro WX 7100 is Polaris 10, which does not do DSC. DSC requires fixed function encoding blocks that are not present in any of the Polaris or Vega variants. They do support DisplayPort 1.3 / 1.4 and HBR3, but DSC is an optional feature of the DP spec. AFAIK, the only GPUs currently shipping that have DSC support are NVIDIA's Turing chips.The CPU in the iMac Pro is a normal, socketed Xeon W, and you can max the RAM out at 512 GB using LRDIMMs if you're willing to crack the sucker open and shell out the cash. So making those things user accessible would be the only benefit to a modular Mac Pro. CPU upgrades are highly unlikely for that platform though, and I doubt Apple will even provide two DIMM slots per channel in the new Mac Pro. However, if they have to go LGA3647 to get an XCC based Xeon W, then they'd go with six slots to populate all of the memory channels. And the back of a display that is also 440 square inches of aluminum radiator is not necessarily a bad place to be, thermally. Nothing is open about Thunderbolt yet, by the way, but of course Apple could still add existing Intel TB3 controllers to an AMD design if they wanted to.
So yeah, in order to have a product, they need to beat the iMac Pro in some meaningful way. And simply offering user accessible RAM and PCIe slots in a box that's separate from the display isn't really that, in the eyes of Apple at least. Especially since PCIe slots are far from guaranteed, if not unlikely.
halcyon - Friday, February 8, 2019 - link
Apple cannot ship Mac Pro with a vacuum cleaner. That 43 dBA is isane. Even if Apple downclocked and undervolted the bios, I doubt they could make it very quiet.Also, I doubt AMD is willing to sell the tons of them at a loss.
dark_light - Thursday, February 7, 2019 - link
Well written, balanced and comprehensive review that covers all the bases with just the rightamount of detail.
Thanks Nate Oh.
Anandtech is still arguably the best site for this content. Kudos guys.
drgigolo - Thursday, February 7, 2019 - link
So I got a 1080Ti at launch, because there was no other alternative at 4K. Finally we have an answer from AMD, unfortunately it's no faster than my almost 2 year old GPU, priced the same no less.I really think this would've benefitted from 128 rops, or 96.
If they had priced this at 500 dollars, it would've been a much better bargain.
I can't think of anyone who I would recommend this to.
sing_electric - Thursday, February 7, 2019 - link
To be fair, you could almost say the same thing about the 2080, "I got a 1080 Ti at launch and 2 years later, Nvidia released a GPU that barely performs better if you don't care about gimmicks like ray tracing."People who do gaming and compute might be very well tempted, people who don't like Nvidia (or just do like AMD) might be tempted.
Unfortunately, the cost of the RAM in this thing alone is probably nearly $350, so there's no way AMD could sell this thing for $500 (but it wouldn't surprise me if we see it selling a little under MSRP if there is plentiful supply and Nvidia can crank out enough 2080s).
eva02langley - Thursday, February 7, 2019 - link
That was the whole point of RTX. Beside the 2080 TI, there was nothing new. You were having the same performances for around the same price than the last generation. There was no price disruption.Oxford Guy - Thursday, February 7, 2019 - link
Poor AMD.We're supposed to buy a clearly inferior product (look at that noise) just so they can sell leftover and defective Instincts?
We're supposed to buy an inferior product because AMD's bad business moves have resulted in Nvidia being able to devalue the GPU market with Turing?
Nope. We're supposed to either buy the best product for the money or sit out and wait for something better. Personally, I would jump for joy if everyone would put their money into a crowdfunded company, with management that refuses to become eaten alive by a megacorp, to take on Nvidia and AMD in the pure gaming space. There was once space for three players and there is space today. I am not holding my breath for Intel to do anything particularly valuable.
Wouldn't it be nice to have a return to pure no-nonsense gaming designs, instead of this "you can buy our defective parts for high prices and feel like you're giving to charity" and "you can buy our white elephant feature well before its time has come and pay through the nose for it" situation.
Capitalism has had a bad showing for some time now in the tech space. Monopolies and duopolies reign supreme.
eva02langley - Friday, February 8, 2019 - link
Honestly, beside a RX570/580, no GPUs make sense right now.Funny that Polaris is still the best bang for the $ still today.
drgigolo - Saturday, February 9, 2019 - link
Well, at least you can buy a 2080Ti, eventhough the 2080 is of course at the same price point as the 1080Ti. But I won't buy a 2080Ti either, it's too expensive and the performance increase is too small.The last decent AMD card I had, was the R9 290X. Had that for a few years until the 1080 came out, and then, replaced that to a 1080Ti when I got a Acer Predator XB321HK.
I will wait until something better comes along. Would really like HDMI 2.1 output, so that I can use VRR on the upcoming LG OLED C9.
sing_electric - Thursday, February 7, 2019 - link
Oh, also, FWIW: The other way of looking at it is "damn, that 1080 Ti was a good buy. Here I am 2 years later and there's very little reason for me to upgrade."drgigolo - Saturday, February 9, 2019 - link
Yeah, of course I am looking at it that way :-) But I also like tech, and find the progress lacking these last years. Longer development cycles and diminishing returns for a lot more dollars.remedo - Thursday, February 7, 2019 - link
Why isn't there any benchmarks for machine learning or deep learning?imaheadcase - Thursday, February 7, 2019 - link
Because the card is not for that...loleva02langley - Thursday, February 7, 2019 - link
It kind of is... it is a Radeon Instinct M150 with less memory.DigitalFreak - Thursday, February 7, 2019 - link
The buy a Radeon Instinct M150GreenReaper - Thursday, February 7, 2019 - link
Sure, if you want to pay two and a half times as much! Maybe get two and blow the rest on juice.eva02langley - Friday, February 8, 2019 - link
Well, you are buying a Vega 20 gimped... >:/So you do in reality... >:/
Ryan Smith - Thursday, February 7, 2019 - link
In short, ML results take longer to put together than these relatively short embargoes allow for. It's also not a primary market for this card, so other things such as gaming performance testing get priority.That said, we're curious about it as well. Now that we're past the embargo, check back in later this month. We have an RTX Titan review coming up, which will give us a great opportunity to poke at the ML performance of the Radeon VII as well.
eva02langley - Friday, February 8, 2019 - link
I will be curious to see that. Compute/ML/Rendering/Content Creation comparison. I was more looking for this in all honesty since we knew what to expect from the card from the beginning.HStewart - Thursday, February 7, 2019 - link
I would think this is expected, AMD trying there best to go against NVidia video and probably release because some of struggles that RTX is having with unit issues.But in stage in my life, personally I don't need a high end graphics card but I would go nVidia because of past good experience. But in any case how many owners actually need high end card. For majority 90+ % of people Integrated graphics are good enough for spreadsheets, internet and word processing
eva02langley - Thursday, February 7, 2019 - link
Love my 2400g, agree with an iGPU like that, however you will not make me believe that an HD 520 is "enough".My only GPUs that died were Nvidia ones... 3 in total. 0 AMD.
TheinsanegamerN - Thursday, February 7, 2019 - link
Read what he said again. "For majority 90+ % of people Integrated graphics are good enough for spreadsheets, internet and word processing"Guess what nearly every office laptop and desktop uses? If it wasnt "good enough" there would be a push for more powerful iGPUs in widespread circulation.
The basic intel iGPU if far mroe then enough to do office work, stream video, or normal workstation content. more powerful GPUs are only needed in specific circumstances.
eva02langley - Friday, February 8, 2019 - link
I know what he said and I know what he means... and I know he is painted Intel all over his body.And no, basic HD 520 is not enough, period. You can barely do office work and play videos.
If it was so true, the mobile market would not be the most lucrative for games/entertainment. As of now, a smart phone is having more GPU power than an HD520.
So basically, I am not agreeing at all.
AdhesiveTeflon - Friday, February 8, 2019 - link
We do plenty of CAD work and GIS functions with Intel's iGPUs, so what were you saying about them barely able to do office work and videos?We also have a lot more issues with AMD's mobile and professional cards than nVidia's.
Icehawk - Saturday, February 9, 2019 - link
Out of 500 PCs at my job a whopping two have video cards, they are random low ends ones purchased to add additional video ports for two stations that run quad monitors. Otherwise there is zero need for a dGPU.This is typical of the vast majority of businesses.
ksec - Thursday, February 7, 2019 - link
1. I believe there will be better drivers for VII, it was quite clear that there are many optimisation not done in time, although I don't know how long it will take. The new AMD seems to be quick to react though.2. What if AMD decided to release the MI60 VII at $899.
TheinsanegamerN - Thursday, February 7, 2019 - link
VII is the VEGA arch, with more ROPs. If AMD managed to leave that much performance on the table, they must be the most incompetent code writers in all of existence.The VEGA arch has long been optimized for, adding some ROPs isnt going to require much work to optimize for, and AMD has likely already done that.
ksec - Friday, February 8, 2019 - link
Optimisation are now nearly done on a per AAA game level. And more importantly not only the drivers but the game itself. Whether the developer are willing to optimise the game ( at the help of AMD ) will be another story.lcrotach - Thursday, February 7, 2019 - link
Nice to see AMD being competitive again. It's a pity they've priced the card so high in Europe that you can get a RTX 2080 for 100 euros less. At that price point they won't be selling many.Manch - Thursday, February 7, 2019 - link
Need to see the VAT free price.just4U - Thursday, February 7, 2019 - link
Really? I see the pre-order pricing here in Canada as 50-100 less then the 2080.Oxford Guy - Thursday, February 7, 2019 - link
I think you missed the noise-to-performance metric.This GPU isn't even close to competitive with the 2080 because of that.
just4U - Friday, February 8, 2019 - link
Sure it is.. a mild undervolt (which the cards support in wattman) is all that's needed. Lower temps lower fan noise. (..shrug)Also since many own blower style cards these 3 fan designs are usually less noise (unless maybe coil whine comes into play..)
Oxford Guy - Friday, February 8, 2019 - link
1) Undervolting is a crap shoot due to binning and other factors, not a solution that you can simply apply as a fix for every one of these cards.2) Saying that some other cards are even louder is a complete avoidance of the issue. The issue is that Nvidia is crushing the noise-to-performance metric with the 2080, according to the presented data in this article. AMD is not, at all, competitive.
eva02langley - Friday, February 8, 2019 - link
Results are there and Vega is greatly improved with undervolting. It is like this since it launch. It is related to the uarch.D. Lister - Friday, February 8, 2019 - link
Which is why I can never recommend AMD GPUs. I mean how competent can they really be if they don't even know how to set proper voltages on their GPUs?Oxford Guy - Friday, February 8, 2019 - link
It's also possible that a GPU will run at a lower voltage that what is optimal without artifacting and yet perform more slowly. Chips are typically able to do some compensation with error correction to handle inadequate voltage but the result is reduced speed.Oxford Guy - Friday, February 8, 2019 - link
You're avoiding the point.eddman - Thursday, February 7, 2019 - link
It's performing better than I expected. It doesn't fully match a 2080 but still performs good enough as a stopgap solution.A bit lower price would've been nice but I suppose it can be justified by the 16GB memory to some extent.
Oxford Guy - Thursday, February 7, 2019 - link
"It doesn't fully match a 2080 but still performs good enough as a stopgap solution."No. It sucks unless you can use the areas of compute it excels at.
There is zero reason to buy a product for gaming that is so much louder for equivalent gaming performance. None.
just4U - Friday, February 8, 2019 - link
Your just looking for reasons to not like it.. It's a awesome card according to reviews. Is it a 2080ti killer? No. (..shrug) Maybe it might force some pricing down though so you can get one of those.. maybe. For me the 2080ti is 2x the price of the 1080s I own... and I'll not pay that for a video card unless I am in a business setting that requires it.Oxford Guy - Friday, February 8, 2019 - link
"It's a awesome card according to reviews."I read this review. Its noise-to-performance ratio is pathetic in comparison with the 2080 and the Fury X. Full stop.
If you're going to argue at least do something beyond trotting out the lamest dodge technique there is: the ad hominem fallacy.
eva02langley - Friday, February 8, 2019 - link
It doesn't "sux". It is just not disruptive enough for Nvidia fans to expect a price cut on RTX, which is pissing off mroe Nvidia fans than AMD ones it seems.The performances in games are okay, and the compute is really strong. If it is cheaper, it is a better buy. At the same price, I will go Nvidia.
However in Canada, the 2080 RTX is 50-100$ more expensive for blower style cards... with similar accoustics and worst temps.
Oxford Guy - Friday, February 8, 2019 - link
"If it is cheaper, it is a better buy. At the same price, I will go Nvidia. However in Canada, the 2080 RTX is 50-100$ more expensive for blower style cards... with similar accoustics and worst temps."Tu quoque = some blower models are loud, too.
$50-$100 is a very low price tag for one's hearing, comfort, and ability to enjoy audio whilst gaming and/or using the card for other intensive purposes.
Holliday75 - Friday, February 8, 2019 - link
I couldn't give two shits about noise. I wear headphones. I've never paid any attention to it on any product I buy.Oxford Guy - Friday, February 8, 2019 - link
1) Headphones don't negate all noise. Not even the combination of earplugs and headphones designed to absorb noise (and not produce audio) will get rid of noise. It still comes through. One can blast the audio at a higher volume and damage one's hearing to try to cover up noise but that is why the iPod/iPhone younger generations are facing epidemic levels of hearing damage.2) Headphones, as a requirement, are a limitation of the product's functionality.
Firstly, they become uncomfortable. Secondly, they tend to aggravate tinnitus for people with it. Thirdly, they are an extra expense. Fourthly, some have good speaker systems they want to make us of. Etc.
Why advocate limiting one's possibilities for basically the same price, when compared with other, more flexible, products? It's silly. You're gaining nothing and losing potential usefulness.
The only way the headphones point works much in your favor is if the same thing is required of Nvidia's GPU. Otherwise, it's merely you stating that you are a subset of the use cases for this GPU that isn't affected by the noise problem. A subset is not the entirety by any means.
Deaf folks don't have to worry about noise, too. Does that mean they should attempt to dismiss noise problems for everyone else?
LarsBars - Thursday, February 7, 2019 - link
I wish you guys would add Vega 64 liquid to the spec chart comparison: 1700MHz, 13.7 TFLOPs...Ryan Smith - Thursday, February 7, 2019 - link
Unfortunately that's not a card we have. AMD didn't widely sample that one.PeachNCream - Thursday, February 7, 2019 - link
Only $699? This is a midrange GPU in much the same way the $750 monitor was a midrange screen. By recent Anandtech standards, the price does not warrant any mention of high-end. Come on people, we need some consistency on the use of these terms!All teasing about the writing aside, it is nice to see a bit of competition. The Radeon VII is way out of my interest range as a product (it has 8x more VRAM than my daily use laptop has system RAM) but I hope it causes a Red and Green slapfest and brings prices down across all graphics cards. Maybe I'm being too optimistic though.
Korguz - Thursday, February 7, 2019 - link
peachncream... maybe not in your books.. but this is not a midrange card... maybe high end midrange :-)um seems all you have are notebooks... your not in the market for a discrete card ;-)
your laptop only has 2 gigs of ram ?? wow....
PeachNCream - Thursday, February 7, 2019 - link
Sorry about that. The Radeon VII is very much out of the range of prices I'm willing to pay for any single component or even an whole system for that matter. I was zinging about the GPU being called high-end (which it rightfully is) because in another recent article, a $750 monitor was referred to as midrange. See:https://www.anandtech.com/show/13926/lg-launches-3...
It was more to make a point about the inconsistency with which AT classifies products than an actual reflection of my own buying habits.
As for my primary laptop, my daily driver is a Bay Trail HP Stream 11 running Linux so yeah, it's packing 2GB of RAM and 32GB of eMMC. I have a couple other laptops around which I use significantly less often that are older, but arguably more powerful. The Stream is just a lot easier to take from place to place.
Korguz - Friday, February 8, 2019 - link
it could be.. that maybe the manufacturer refers it as a mid range product ( the monitor ) in their product stack.. and AT.. just calls it that, because of that ?:-)
eva02langley - Friday, February 8, 2019 - link
I follow you on that. I bought a 1080 TI and I told myself this is the maximum I am willing to put for a GPU.I needed something for 4k and it was the only option. If Navi is 15% faster than Vega 64 for 300$, I am buying one on launch.
D. Lister - Saturday, February 9, 2019 - link
But why would you want to spend $300 for a downgrade from your 1080Ti?HollyDOL - Thursday, February 7, 2019 - link
Purely on gaming field this can't really compete with RTX 2080 (unless some big enough perf change comes with new drivers soon)... it's performing almost same, but at a little bit more power, hotter and almost 10dB louder, which is quite a lot. Given that it won't be able to offer anything more (as oposed to possible adoptions of DXR) I would expect it not trying to compete for same price level RTX 2080 does.If it can get $50-$100 lower otoh, you get what many people asked for... kind of "GTX 2080" ... classic performance without ray tracing and DLSS extensions.
With current price though It only makes sense if they are betting they can get enough compute buyers.
Oxford Guy - Thursday, February 7, 2019 - link
Yeah, because losing your hearing to tinnitus is definitely worth that $50-100.HollyDOL - Friday, February 8, 2019 - link
Well, it's "lab conditions", it can always get dampened with good chasis or chasis position to reasonable levels and hopefully noone should be playing with head stuck inside the chasis... For me subjectively it would be too loud, but I wanted to give the card advantage of doubt, non-reference designs should hopefully get to lower levels.Oxford Guy - Friday, February 8, 2019 - link
1) The Nvidia card will be quieter in a chassis. So, that excuse fails.2) I am not seeing significant room for doubt. Fury X was a quiet product (except at idle which some complained about, and in terms of, at least in some cases, coil whine). AMD has chosen to move backward, severely, in the noise department with this product.
This card has a fancy copper vapor chamber with flattened heatpipes and three fans. It also runs hot. So, how is it, at all, rational to expect 3rd-party cards to fix the noise problem? 3rd-party makers typically use 3 slot designs to increase clocks and they typically cost even more.
HollyDOL - Friday, February 8, 2019 - link
Well, not really. If the quieter chassis cuts of enough dB to get it out of disturbing level it will be enough. Also depends on environment... If you play in loud environment (day, loud speakers) the noise won't be percieved as bad as if you play it during night with quiter speakers. Ie. what can be sufferable during day can turn in complete hell during night.That being said I am by any means not advocating +10dB, because it is a lot, but in the end it doesn't have to present so terrible obstacle.
It is very early, there can always be a bug in drivers or bios causing this temp/noise issue or it can be a design problem that cannot be circumvented. But that will be seen only after some time. I remember bug in ForceWare causing my old GTX580 not dropping to 2D frequencies once it kicked in 3D (or was it on 8800GT, I don't really remember)... You had to restart the machine. Such things simply can happen, which doesn't make them any better ofc.
Oxford Guy - Friday, February 8, 2019 - link
"If the quieter chassis cuts of enough dB to get it out of disturbing level it will be enough."Nope. I've owned the Antec P180. I have extensively modified cases and worked hard with placement to reduce noise.
Your argument that the noise can simply be eliminated by putting it into a case is completely bogus. In fact, Silent PC Review showed that more airflow, from less restriction (i.e. a less closed-in case design) can substantially reduce GPU noise — the opposite of the P180 philosophy that Silent PC Review once advocated (and helped to design).
The other problem for your argument is that it is 100% logically true that there is zero reason to purchase an inferior product. Since this GPU is not faster than a 2080 and costs the same there is zero reason to buy a louder GPU, since, in actuality, noise doesn't just get absorbed and disappear when you put it into a case. In fact, this site wrote a review of a Seasonic PSU that could be heard "from rooms away" and I can hear noisy GPUs through walls, too.
"It is very early, there can always be a bug in drivers or bios causing this temp/noise issue"
Then it shouldn't be on the market and shouldn't have been sampled. Alpha quality designs shouldn't be review subjects, particularly when they're being passed off as the full product.
HollyDOL - Sunday, February 10, 2019 - link
Please, read what others write before you start accusing others.eva02langley - Friday, February 8, 2019 - link
Yeah, when your speaker sound is at 70-80 dB next to you when playing CoD... /sarcasmAMD is going to solve the fan problems. Temps are lower than the RTX 2080, they can play with the fan profile a little bit better.
SeaTurtleNinja - Thursday, February 7, 2019 - link
Lisa Su is liar and AMD hates gamers. This is just a publicity stunt and a way to give a gift to their friends in the Tech Media. This was created for YouTube content creators and not for people who play games. Another Vega dumpster fire.GreenReaper - Thursday, February 7, 2019 - link
But many YouTubers play games as their content. And people vicariously watch them, so effectively it's letting many people play at once, just for the cost of the video decode - which is far more efficient!Korguz - Thursday, February 7, 2019 - link
yea.. amd hates gamers.. you DO know AMD makes the cpu and vid cards that are in the current playstation and xbox... right ???Oxford Guy - Thursday, February 7, 2019 - link
Yes, it's difficult to forgot the fiasco that is the Jaguar-based "console"(actually a poor-quality x86 PC with a superfluous anti-consumer walled software garden).
Korguz - Friday, February 8, 2019 - link
how is it a fiasco ??the original xbox used a Pentium 3 and Geforce for its cpu and gpu... the 360, and IBM CPU and ATI GPU...
Oxford Guy - Friday, February 8, 2019 - link
1) Because it has worse performance than even Piledriver.2) Because the two Jaguar-based pseudo-consoles splinter the PC gaming market unnecessarily.
Overpriced and damaging to the PC gaming platform. But consumers have a long history of being fooled by price tags into paying too much for too little.
eddman - Friday, February 8, 2019 - link
Consoles have nothing to do with PC. They've existed for decades and PC gaming is still alive and even thriving.Why do you even care what processor is in consoles?
Oxford Guy - Friday, February 8, 2019 - link
False. The only difference between the MS and Sony "consoles" and the "PC gaming" platform is the existence of artificial software barriers.eddman - Saturday, February 9, 2019 - link
What does that have to do with anything? No console game, ever, could be installed on a PC.Current consoles having x86 processors means absolutely nothing. Consoles are defined by their platform, not processors.
It'd be like complaining about switch (which you deem a real console) not being able to install android games; or complain they switch games can't be installed on android phones.
Korguz - Friday, February 8, 2019 - link
1) wheres the proof ?? links to this perhaps ?2) again.. where is the proof ?? considering they are also DirectX based.. that should make porting them to the comp.. a little easier..... so, not splintering anything....
the same can be said about cpus and gpus.
Oxford Guy - Friday, February 8, 2019 - link
The proof is that PS and MS "console" games won't install and run in Windows nor in Linux.Korguz - Friday, February 8, 2019 - link
sorry man.. but thats not proof.... thats just differences in the programming of the games..D. Lister - Saturday, February 9, 2019 - link
@Korguz:You actually believe developers make seperate versions for every platform? Wow.
Korguz - Saturday, February 9, 2019 - link
never said that... while the core of the game could be the same.. the underlying software that allows the games to be run, is different.. as Eddman said.. no console game can be run on a comp, and vice versa... i know i can't take any of the console games i have in install them on my comp.. cant even read the disc.. same goes for a comp game on a console... just wont read it...D lister.. are you able to do this some how ? ( and i dont mean by use of an emulator, either )
Oxford Guy - Wednesday, February 13, 2019 - link
You're hopeless with logic.Korguz - Wednesday, February 13, 2019 - link
oxford guy.. d.lister, or me? and how so ?DracoDan - Thursday, February 7, 2019 - link
I think you're missing a digit on the Radeon Instinct MI50 launch price... only $999?Ryan Smith - Thursday, February 7, 2019 - link
Forgot to scrub a cell when cleaning out a table. At the moment there isn't an official price for the card.Icehawk - Thursday, February 7, 2019 - link
FFXV results sure look CPU limited to me - why aren't you running at least an 8700 @ 5ghz?Oxford Guy - Thursday, February 7, 2019 - link
They look like GameWorks or something to me but I can't see why anyone cares about FF anyway. I hurt my face smirking when I saw the footage from that benchmark. Those hairstyles and that car... and they're going fishing. It was so bad it was Ed Wood territory, only it takes itself seriously.luisfp - Thursday, February 7, 2019 - link
People don't forget that Vega GPUs have the memory beside the GPU core, therefore making it more hot that normal GPUs out there. That has a lot to do with how hot it seems to be, the temperature tends to raise more due to memory temps in same area.just4U - Thursday, February 7, 2019 - link
True enough but owners of the 56/64 have found many work arounds to such things as the cards have not needed as much power as they push out. My cards (56s) use 220W of power per card They never go over 65c in any situation and usually sit in the high 50s to low 60s. with their undervolts.luisfp - Thursday, February 7, 2019 - link
I believe that Vega GPUs have the memory beside the GPU core, therefore making it more hot that normal GPUs out there. That might have a lot to do with how hot it seems to be, the temperature tends to raise more due to memory temps in same area.just4U - Thursday, February 7, 2019 - link
Better than a 64 in all situations and comparable to a 1080ti in all situations with only 5-6% performance hits against the 2080 which is costing 50-100 more here in Canada (according to pre-order sales) Yep, Im sold.ballsystemlord - Thursday, February 7, 2019 - link
Your favorite spelling/grammar guy is here. (AT Audience: Boo!)"Faced with a less hostile pricing environment than many were first expecting, AMD has decided to bring Vega 20 to consumers after all, duel with NVIDIA one of these higher price points."
Missing words (and & at):
"Faced with a less hostile pricing environment than many were first expecting, AMD has decided to bring Vega 20 to consumers after all, and duel with NVIDIA at one of these higher price points."
"Which is to say that there's have been no further developments as far as AMD's primitive shaders are concerned."
Verb tense problem:
"Which is to say that there's been no further developments as far as AMD's primitive shaders are concerned."
Thanks for the review!
I read the whole thing.
The F@H results for Vega are higher than I predicted (Which is a good thing!).
Ryan Smith - Thursday, February 7, 2019 - link
"Your favorite spelling/grammar guy is here. (AT Audience: Boo!)"You're always welcome here. Pull up a chair!
ballsystemlord - Friday, February 8, 2019 - link
I was joking. Some site content creators call people like me "The spelling and grammar trolls".I can never really be certain, so I try to be a little funny in hopes that no body will take my corrections as "troll" actions.
I don't know how you guys feel, but you've always taken mine and others corrections into consideration.
Ryan Smith - Saturday, February 9, 2019 - link
Our flaws and errors are our own doing. When pointed out, it's our job as journalists to correct them. So as long as people are being polite about it, we appreciate the feedback.Oxford Guy - Thursday, February 7, 2019 - link
This card is a turkey for gamers. AMD fixed the noise level problem with Fury X and now we're getting less value than we did then. It's too loud."Also new to this card and something AMD will be keen to call out is their triple-fan cooler, replacing the warmly received blower on the Radeon RX Vega 64/56 cards."
Is the sarcasm really necessary? If you're going to mention the cooler thing why not point out just how far AMD has regressed in terms of noise. Remember Fury X, a card that is nice under load?
"Vega 20 has nothing on paper to push for its viability at consumer prices. And yet thanks to a fortunate confluence of factors, here we are."
Oh please:
Fiji: 596 mm2 for $650. Vega 10 495 mm2 for $500. Vega 20 331 mm2 for $700.
Anandtech says it's all so shocking that Vega 20 is available to consumers at all. Eyeroll. No. For $700, AMD could have put that extra die area to more use and given us 8 GB of VRAM. But that would involve doing the impossible and making a GPU that is attractive to gamers, not just peddling low-end Polaris rehashes indefinitely.
Consumers aren't getting the best value here. They're getting leftovers just as they did with Bulldozer/Piledriver — parts that were targeted at the server market first and not consumers. At least with Vega 20, though, there is some competitiveness, although this is mainly because Nvidia is artificially crippling the value of the GPU market with its inflated pricing strategy. That is what monopolies do, of course. Look at how long Intel was able to coast with Sandy-level performance.
"At 3.5 TLFLOPS of theoretical FP64 performance, the Radeon VII is in a league of its own for the price. There simply aren’t any other current-generation cards priced below $2000 that even attempt to address the matter."
That's marvelous for the people who are able to care about FP64, unlike gamers.
This is what happens when there isn't enough competition in a market. Gamers get the choice of two shafts: Turing and Vega.
Oxford Guy - Thursday, February 7, 2019 - link
Oh, yes... and the "console".At least the Switch is a real console. I'm not talking about that. I'm talking about awful low-end PCs being falsely called consoles, which has been the practice since Jaguar became an (unfortunate) thing.
Korguz - Friday, February 8, 2019 - link
like in a previous post of yours.. are you forgetting that the xbox and xbox 360 were also, " low end " pc's that your are claiming ?? the switch is a real console ?? ha.. the nintendo switch, is based off of the Tegra SoC's from nvidia... in a way.. " still " a low end PC......Oxford Guy - Friday, February 8, 2019 - link
The reason the Switch qualifies as a console is that it does something differently vis-à-vis the x86 gaming PC platform. It has a different form factor and related functionality. Artificial software walled gardens do not truly differentiate Sony and MS's low-end PCs from the PC gaming market. They are merely anti-consumer kludge that people have chosen to prop up with their cash.Merely having an x86 processor does not make something equivalent to an x86 PC. The Switch is clearly not the same thing as a low-end PC box like a Jaguar-based rubbish console. I am not particularly enamored with the Switch but at least Nintendo is offering something different to better justify its approach.
Korguz - Friday, February 8, 2019 - link
this sounds more like your own personal opinion and nothing more.. for some reason you hate the current consoles, and seems like there is NO reason for your hate...nintendo has offered something different for a console since the 1st Wii, and honestly, look where it has gotten them... the xbox and playstation platforms outsold the nintendo systems, up to the switch, which has out sold the other 2.. but the games them selves on the nintendo systems.. are lacking..
Oxford Guy - Friday, February 8, 2019 - link
"this sounds more like your own personal opinion and nothing more.. for some reason you hate the current consoles, and seems like there is NO reason for your hate..."Ad hominem isn't a rebuttal.
Korguz - Friday, February 8, 2019 - link
still just sounds like your personal opinion, regardlessHorzaG - Sunday, February 10, 2019 - link
Pointing out that (according to the poster) you're just expressing your opinion and "hate" without reasoning isn't an Ad hominem, you used the term incorrectly earlier in this thread also. Pretty embarrassing to be simultaneously so conceited and so wrong."You should never listen to a word Oxford Guy has to say because he's a frothing fanboy whose posts reek of desperation and are probably indicative of an inability to get laid"
That's an Ad hominem.
Korguz - Tuesday, February 12, 2019 - link
and saying this :" You should never listen to a word Oxford Guy has to say because he's a frothing fanboy whose posts reek of desperation and are probably indicative of an inability to get laid "
about someone.. doesnt prove your point any better...
Oxford Guy - Wednesday, February 13, 2019 - link
"Pretty embarrassing to be simultaneously so conceited and so wrong."It must be.
just4U - Friday, February 8, 2019 - link
It's not a turkey at all.. it beats a Vega64 for around 30% ads 2x the ram (which is not really utilized yet) has a 3 fan design with Amd's top end shroud/block takes less power, runs cooler, and has the same characteristics which means Amd was generous on power so undervolting it without appreciable performance losses will be easy enough to do as will overclocking.For me that's a winner. I have blower 1080s and their very loud if I let them or run things at stock (i undervolt there to..) and I've seen how loud the Vega56/64 blowers can be.. this with the 3 fans? pfft.. way quieter.
Oxford Guy - Friday, February 8, 2019 - link
I think you should look at the data in this review because your analysis is way off.ballsystemlord - Thursday, February 7, 2019 - link
They are sold out! All the online retailers I checked have no Radeon VIIs! Unless you go to ebay and pay way too much.Oxford Guy - Thursday, February 7, 2019 - link
Overpriced underbaked vaporware? Never-coulda-happen.It's an ugly time to be a "serious" PC gamer.
ballsystemlord - Friday, February 15, 2019 - link
Well, it's been a week. They came into stock for about 5min.LogitechFan - Friday, February 8, 2019 - link
amdumb defense force in full denial mode, sorry, we can't hear you over the 55db noise level of the radeon VII ;)))))))))rukufe - Friday, February 8, 2019 - link
If you want to play with AI, you need tensorflow, and for a "server" card, at this price, it doesn't not makes sense to not support tensorflow. AI is everywhere today. this card is obsolete.gsalkin - Friday, February 8, 2019 - link
So is this too little too late? I'm bewildered that even at 7nm this card is pulling 300W of power and generating insane noise.It's also unfortunate that the rumor of 128 ROPs was bunk. These cards definitely have an imbalance in the CU to ROP ratio. Nvidia Titan Xp had 96 ROPs strapped to 3840 SPs but AMD is shipping a max of 64?
Oxford Guy - Friday, February 8, 2019 - link
"It's also unfortunate that the rumor of 128 ROPs was bunk."That rumor typifies the irrational thinking that plagues the gaming community. AMD isn't going to make the effort of changing the Instinct GPU to better suit gamers. It isn't and it hasn't.
dr.denton - Saturday, February 9, 2019 - link
I wonder, do people actually read and comprehend these articles? By now it should be obvious to everyone, that VII is not and was never supposed to be AMD's next generation of GPU. In fact, they always denied that Vega 7nm would make it into the consumer market - and for very good reason: they had Navi for that. Now that Navi is delayed, they need something for people to talk about - and talk about it we do.zodiacfml - Friday, February 8, 2019 - link
The first part of your conclusion describes what this product is. It is surprising to see this card's existence at 7nm, a Vega with 16GB of HBM2.It appears to me that AMD/TSMC is learning the 7nm process for GPUs/CPUs and the few chips they produce be sold as a high end part (as the volume/yields is being improved).
AMD really shot high with its power consumption (clocks) and memory to reach the pricing of the GTX 2080.
However, I haven't seen a publisher to show undervolting results. Most Vegas perform better with this tweak.
Samus - Saturday, February 9, 2019 - link
I think you are being a little too critical of this card. Considering it’s an older architecture, it’s impressive it’s in the 2080’s ballpark.And for those like me that only care about Frostbite Engin based games, this card is obviously a better option between the two cards at the same price.
You also ignored the overclockong potential of the headroom given by moving to 7nm
D. Lister - Saturday, February 9, 2019 - link
"You also ignored the overclockong potential of the headroom given by moving to 7nm"Unfortunately it seems to be already overclocked to the max on the core. VRAM has some headroom but another couple of hundred MHz isn't going to do wonders considering the already exorbitant amount available.
D. Lister - Saturday, February 9, 2019 - link
*...considering the already exorbitant amount [of bandwidth] available.Oxford Guy - Saturday, February 9, 2019 - link
"I think you are being a little too critical of this card."Unless someone can take advantage of the non-gaming aspects of it, it is dead in the water at the current price point. There is zero reason to purchase a card, for gaming only, that uses more power and creates vastly more noise at the same price point of one that is much more efficient for gaming purposes. And, the only way to tame the noise problem is to either massively undervolt it or give it water. Proponents of this GPU are going to have to show that it's possible to buy a 3 slot model and massively undervolt it to get noise under control with air. Otherwise, the claim is vaporware.
Remember this information? Fiji: 596 mm2 for $650. Vega 10 495 mm2 for $500. Vega 20 331 mm2 for $700.
Yes, the 16 GB of RAM costs AMD money but it's irrelevant for gaming. AMD not only gave the community nearly 600 mm2 of chip it paired it with an AIO to tame the noise. All the talk from Su about improving AMD's margins seems to be something that gamers need to stop lauding AMD about and starting thinking critically about. If a company only has an inferior product to offer and wants to improve margins that's going to require that buyers be particularly foolish.
Samus - Sunday, February 10, 2019 - link
I wouldn't call the 16GB irrelevant. It trumps the 2080 in the two most demanding 4K titles, and comes relatively close in other ultra high resolution benchmarks.It could be assumed that's a sign of things to come as resolutions continue to increase.
Oxford Guy - Sunday, February 10, 2019 - link
"It could be assumed that's a sign of things to come as resolutions continue to increase."Developers adapt to Nvidia, not to AMD. That appears to be why, for instance, the visuals in Witcher 3 were watered-down at the last minute — to fit the VRAM of the then standard 970. Particularly in the context of VRAMgate there was an incentive on the part of Nvidia to be certain that the 970's VRAM would be able to handle a game like that one.
AMD could switch all of its discreet cards to 32 GB tomorrow and no developers would bite unless AMD pays them to, which means a paucity of usefulness of that 32 GB.
BenSkywalker - Saturday, February 9, 2019 - link
This offering is truly a milestone in engineering.The Radeon VII has none of the RTX or tensor cores of the competition, uses markedly more power *and* is built with a half node process advantage and still, inexplicably, is slower than their direct competitor?
I've gone back and looked, I can't find another example that's close to this.
Either TSMC has *massive* problems with 7 nm or AMD has redefined terrible engineering in this segment. One of those, at least, has to be at play here.
Oxford Guy - Saturday, February 9, 2019 - link
The RTX and Tensor die area may help with power dissipation when it's shut down, in terms of hot spot reduction for instance. Vega 20 is only 331 mm2. However, it does seem clear enough that Fiji/Vega is only to be considered a gaming-centric architecture in the context of developers creating engines that take advantage of it, à la DOOM.Since developers don't have an incentive to do that (even DOOM's engine is apparently a one-off), here we are with what looks like a card designed for compute and given to gamers as an expensive and excessively loud afterthought.
Oxford Guy - Saturday, February 9, 2019 - link
There is also the issue of blasting clocks to compensate for the small die. Rip out all of the irrelevant bits and add more gaming hardware. Drop the VRAM to 8 GB. Make a few small tweaks to improve efficiency rather than just shrink Vega. With those things done I wonder how much better the efficiency/performance would be.Samus - Sunday, February 10, 2019 - link
BenSkywalker, the short answer is this is based on a dated architecture (2 generations behind Turing) so there is no real way it's going to beat it in efficiency: It doesn't even try to compete with the 2080Ti.But the fact that a GCN\Vega-based card can nearly tie a 2080 is commendable. I think the problem this card has is it's $100 too expensive.
BenSkywalker - Monday, February 18, 2019 - link
If we were comparing ray traced performance that would be a valid point, but we are talking about traditional rendering. They have a half node process advantage and are using more power than a 2080 by a comfortable amount.Try finding another chip, CPU or gpu that was built with a half node advantage, used more power *and* was slower.
Either TSMC is having major problems with 7nm or AMD set a new standard for poor engineering in this segment.
Ganjir - Saturday, February 9, 2019 - link
It is a shame the infinity fabric is disabled, because crossfire would actually give these cards a reason to use ALL of that bandwidth and capacity - at least on one card. Is there a way to enable this or is it a hardware limitation?Alistair - Saturday, February 9, 2019 - link
I calculate OxfordGuy has made 11 percent of all comments in this thread ;)Zingam - Sunday, February 10, 2019 - link
AMD should invest in power stations. And maybe even sell their future Radeon XIV in a bundle with a little power station!Crion66 - Sunday, February 10, 2019 - link
Nate or Ian, can AMD choose to enable pci-express 4.0 on this card when Ryzen/TR4 3000 is released?Also can crossfire be implemented by popular gamer demand?
ccfly - Tuesday, February 12, 2019 - link
did anyone test this card in c4d ,radeon pro vs octane for speed ?peevee - Tuesday, February 12, 2019 - link
"Though AMD hasn’t made a big deal of it up to now, Vega 20 is actually their first PCI-Express 4.0-capable GPU, and this functionality is enabled on the Radeon Instinct cards. However for Radeon VII, this isn’t being enabled, and the card is being limited to PCIe 3.0 speeds"Oh God, how much I hate marketoids! Morons who cannot get an A even in the primitive school math are hired into marketing depts, and ruin EVERYTHING.