Are you unable to read basic graphs or something? Ryzen 4000 (the Ryzen 7 4800U, specifically) beats the best Ice Lake chip Intel has (the i7-1065G7) in literally every single metric (+4% in single-thread, +90% in multi-thread, & +28% in iGPU). Perhaps go back to elementary school lol; it seems you need it more than you might think.
The only metric that remains is battery life. If Renoir can match Ice Lake on efficiency (specifically idle power draw) then AMD should finally get some premium designs wins.
If they could match it, I think they would make a fancy graph about it, but battery life was mentioned nowhere in the article, which makes me think it'll still be behind Intel.
I can't buy that, given the absolutely INSANE power efficiency advantage the 7nm Zen 2 based Ryzen 3000 CPUs have on desktop (nearly 2x vs 14nm CL-R).
My guess is that it's only a definitive win against the Skylake based machines (aka Coffee Lake Refresh Mobile & Comet Lake Mobile), with 10nm Ice Lake able to put up a MUCH closer (and less flattering) fight. And unless you're winning in a category across the board like they are with the other parameters, perhaps it is best not to bring any direct attention to it at all.
They don't have a power efficiency advantage on desktop for idle power: the chipset alone wants 11-15W. Most laptops spend most of their time idle; the low end of the performance curve is where AMD needs to demonstrate wins to be viable in notebooks (except in giant gaming monsters with terrible battery life anyway).
I'm not saying they can't do it, I'm saying they haven't proven that they have done it yet.
- Mobile Ryzen 4000 laptops only support PCIe Gen 3.
- Ryzen APUs have traditionally had 8 fewer PCIe lanes than regular CPUs; that makes sense for mobile use, since it cuts down die area and power consumption.
- On desktop, the I/O dies are manufactured on 12/14 nm nodes and are the same design as the X570 chipset; since mobile Ryzen is a monolithic 7 nm die, this power consumption should be reduced.
- Ryzen CPUs (even on desktop) are SoCs, with a limited number of integrated SATA and USB controllers, so they don't necessarily need a chipset at all.
- Even if a chipset were used, it could be much smaller than the desktop X570.
What I'm getting at is that basing power estimates on the vastly different desktop design (with completely different design goals) doesn't help here.
Thunderbolt 4 - Is this even out yet or just a spec? Also, this is an Intel spec and Intel doesn't have PCIe 4.0 either so ...
eGPUs - Given that eGPUs only use 4 lanes, it seems like this is the best use case.
Ultra-high-speed external storage - There is only one PCIe 4.0 SSD controller (Phison) at the moment (Samsung soon to join), and it isn't even very good. External storage generally lags behind internal storage, so there is plenty of room to improve external storage within the bounds of PCIe 3.0.
Ultra-high-speed networking - PCIe 3.0 supports 10G Ethernet as well as the latest wireless standards just fine. When I see 10G Ethernet as commonplace, I'll reconsider the need here.
So it really just comes down to eGPUs. An alternate solution is to give them more lanes.
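To put rough numbers on the bandwidth argument above, here's a back-of-the-envelope sketch. The transfer rates and 128b/130b encoding are standard PCIe 3.0/4.0 parameters; real-world throughput is somewhat lower due to protocol overhead.

```python
# Rough usable bandwidth per PCIe lane (GB/s), ignoring protocol overhead.
# PCIe 3.0: 8 GT/s with 128b/130b encoding; PCIe 4.0: 16 GT/s, same encoding.
def lane_bandwidth_gbps(transfer_rate_gt: float) -> float:
    return transfer_rate_gt * (128 / 130) / 8  # bits -> bytes

gen3_x4 = 4 * lane_bandwidth_gbps(8.0)   # ~3.94 GB/s: a typical eGPU/SSD link
gen4_x4 = 4 * lane_bandwidth_gbps(16.0)  # ~7.88 GB/s

# 10G Ethernet needs 1.25 GB/s, which fits comfortably in a Gen 3 x2 link.
ten_gbe = 10 / 8

print(f"Gen3 x4: {gen3_x4:.2f} GB/s, Gen4 x4: {gen4_x4:.2f} GB/s, "
      f"10GbE: {ten_gbe:.2f} GB/s")
```

So an eGPU over a Gen 3 x4 link is the only case in this list where PCIe 4.0's doubled bandwidth would plausibly matter.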
Exactly, I don't have any reason to doubt Renoir's efficiency improvements over Picasso. But Picasso was miles behind Intel on mobile efficiency. Hopefully the aforementioned power gating and decoupling of Infinity Fabric from memory frequency will do the trick. If Renoir laptops get within 5% of Intel on battery life, that would be a huge positive step for AMD.
Cooe, you really need to tone down the aggression. Starting from your first comment you talk down to everybody like you're some kind of genius while we are kids. The reality is probably the opposite. That said, AMD was like 50% behind because of the aforementioned inefficient idle power. Even the cherry-picked 3780U in the MS Surface was like 40% behind. If AMD had closed this huge gap, they would be bragging about it. Even reducing it to a 10-20% loss would be a huge accomplishment.
These chips will have all the laptop I/O they need integrated, there will be no separate chipset.
Most likely it'll be PCIe x8/x16 for the discrete GPU, a PCIe x4 link for the SSD, SATA for an additional HDD, and USB for everything else.
Perhaps x8 for mobile, x16 for desktop (which may use a separate chipset for more I/O). The APU likely has two PCIe x4 links, with both enabled for desktop and one enabled for mobile. These appear to be PCIe 3 in the mobile SKUs; I hope the desktop APUs will bump them to PCIe 4.
That's completely untrue. I have a mini desktop system (ASRock A300) with a 2400G that is stuffed full of two spinning 2.5" HDDs, two M.2 drives, and 16GB of RAM. Looking over at my Kill-A-Watt as I type this, it's sitting at 12.4W total power draw. If what you said were true, I'd be seeing at least 20-25W idle.
More importantly, it outperforms my last gaming system in the same games at the same resolution (1080p) at a max of 70W total power draw for the entire system, versus the 400W+ power draw of my last system with a Core i7 and GTX 1050 Ti. If that isn't power efficiency I don't know what is.
Whilst your comment is informative, I would dispute your 400W figure: 100W for the CPU and 300W for the 1050 Ti? No sir, no! About 150W max for that 1050 Ti at full chat. I want to know when these mobile parts will be transformed into desktop or mini-PC parts. Although saving the best graphics for the high-end parts is a bit of segmentation; they could afford to put MORE graphics on the 6c parts, not less. Bad AMD, bad.
"They don't have a power efficiency advantage on desktop for idle power" - True, but who cares? It's not the deciding factor, even if it's not a completely unreasonable concern. Nobody buys a highest-end laptop for the fricking battery life lol. It's a consideration down around second to last.
I'd be much more concerned about my brand new flagship laptop CPU speculatively leaking Ring-0 access before I even plug it in. Intel chips are right now flawed until they prove otherwise.
As someone else pointed out, these won't have power hungry PCIe 4.0 chipsets nor separate I/O dies. Everything is very likely integrated and ultra low power. They're a mobile-first monolithic design, and thus their design doesn't have all that much in common with their desktop brethren (aside from core architecture).
Heck, even the big X570 chipsets don't IDLE at 11-15W. OEMs can configure them however they want (hence the range of TDPs), but 11-15W is what you'd see with the chipset I/O pretty well pegged.
With that being said I wouldn't swear they've caught up to Intel on idle power, and a lot of it is up to the OEM boards and firmware. But if they can get even close, and they beat them on performance, they'll make great machines. Especially interested to see how many gaming laptop wins they can get with those 45W octacore models!
You mean like single-core, where they're equal (at best), but AMD is still promoting it as a "4%" win? If they had made great strides in efficiency, clearly they would have shown it, no? It's literally the biggest issue with their current APUs. I'm not saying it hasn't improved, but it still needs to improve by around 40% to catch up to Ice Lake, and with a doubling of cores and even higher turbo frequencies, it's difficult to see how much more room they have left.
Are you even serious? They can't test power efficiency until the entire machine is built. Testing chips ex-situ of their platform and chipset and defaults and all that, it proves nothing.
Unless Intel is substantially, amazingly better than before (and they aren't), the difference between the two in power consumption isn't going to be more than it was before, and it's probably less.
But if you're really trying to pretend someone interested in buying the highest performing laptop platform is going to quibble over 10-20% more power consumption for 20-100% more performance, your speculative execution is scheduled for the moment you open your eyes.
You can't list battery life when you have no battery specs, especially when it's a CPU article. Battery specs will go with the individual laptop designs, in size and weight.
It'll be interesting to see the individual wins and how they compare with Intel laptops. The number of design wins suggests AMD is winning mind-share with the OEMs. Given the cheaper prices, AMD might make some good progress in laptop sales this year.
Agreed. I'm eager to see what gains they've made on idle power draw this time around, as the Ryzen 2000/3000 mobile series were quite a disappointment in that regard.
I have yet to get anywhere close to the advertised battery life on any modern Intel laptop anyway. I think it's all marketing BS. Battery life is certainly longer than it used to be, but the only way I get a full work day out of an Intel laptop is if it spends most of the day asleep.
With respect to AMD's partners in the mobile world, I always say "with friends like these, who needs enemies?" The most recent example is Microsoft with its Surface AMD edition, which is more expensive, lower performing, and has less battery life than the Intel edition, as multiple websites declared on the day AnandTech did the comparison. So let us see if Asus pushes the other OEMs to give AMD what it deserves.
You are acting like AMD's pre-release performance graphs haven't been totally accurate (or close enough to it) since the OG Ryzen launch nearly 3 years ago... This isn't "hide a 5HP chiller under the table" Intel or "everything we say is mostly a lie" Bulldozer-era AMD anymore. This is Lisa Su's AMD in 2020, but it seems your tin foil hat is just too precious for you to take off.
Now I'm not saying you should put 100% faith in every bench result being 100% accurate under every usage scenario, but given AMD's current track record, ignoring the benches entirely and believing them likely to be fraudulent just makes you look like a damn fool.
Did you read this? https://www.anandtech.com/show/15213/the-microsoft... It will give you a good idea of just how far ahead Ice Lake is compared to Zen+. Honestly, I don't believe Zen 2 mobile will be able to catch up to it... sure, maybe in Cinebench, but who uses that on a 15W laptop? BTW, I have an R5 3600, so I'm nobody's fanboy, just a skeptic of pre-release cherry-picked benchmarks.
Yes, but you're COMPLETELY ignoring the fact that Zen+ to Zen 2 is a >=20% improvement in single-thread performance (+15% IPC & +5% clocks), with an even more massive power saving. So literally the only way for anything you're saying to be even remotely true is if they straight-up aren't actually using 7nm Zen 2 as the CPU arch (which is a known quantity), and thus AMD's provided benchmarks are off by orders of magnitude...
AMD used Intel's BEST Ice Lake chip as the comparison for a reason, & as I said above, AMD's pre-release performance graphs have been DEAD ON since the OG Ryzen announcement in late 2016.
You're saying AMD is either blatantly lying about using Zen 2, or blatantly lying about their shown performance numbers by a huge margin. So which is it?
What benchmarks do you need to get over yourself? CPU-Z? Geekbench? Because the results would remain very consistent in those as well. And what's your issue with Cinebench, other than that it ever so slightly favors AMD chips (as in, by a handful of percent)? As seen with the desktop CPUs (Ryzen 3000 vs Coffee Lake Refresh), exclusively single-thread performance will only vary by about 5% at most depending on the particular application/benchmark used to test it. Thus, even by "cherry picking" you'd only be able to change the results by a tiny fraction in either direction.
Also, way to avoid having to back up your statement that doubling the core count / multi-threaded performance in the same power/thermal envelope is pointless in a world with 8c/16t consoles dropping in months! You are a delusional Intel fanboy of the worst kind.
Maybe out of happiness lol. You being a dumbass doesn't make these APUs themselves any less awesome. Rather, it just makes you, well, look like a dumbass...
Shabby you're a fool. Intel picks random benchmarks that help their case EXCLUSIVELY.
When they were winning on single core they had all kinds of random benchmark citations to try to prove that was more important "overall" than having a fast bus - well, now they have NEITHER.
Except Ice Lake is an 18% IPC gain over Coffee Lake, putting it around 10% over Zen 2 in IPC as well. That means that for the 4800U to even match the i7-1065G7 in single-core performance, it would have to be clocked at 4.3 GHz in boost. It's instead around 4.2 GHz, meaning AMD is in fact a few percent behind, not ahead of, Ice Lake in single-core. In any case, we can assume it's parity.
As for the rest, the most important metric is efficiency, and AMD did not show us anything there, which is a bit worrying. Ice Lake was ahead of Zen+ by a factor of 40%. AMD only stated "up to 20% better efficiency" at the same frequency. That's simply not enough.
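For what it's worth, the arithmetic behind this concern can be sketched out. The 40% and 20% figures below are this comment's claims taken at face value, not measured data:

```python
# If Ice Lake is ~40% more efficient than Zen+ (Picasso), and Renoir's
# claimed gain over Zen+ is "up to 20%", how big is the remaining gap?
ice_lake_vs_zen_plus = 1.40  # claimed Ice Lake efficiency relative to Zen+
renoir_vs_zen_plus = 1.20    # AMD's "up to 20% better" claim

remaining_gap = ice_lake_vs_zen_plus / renoir_vs_zen_plus - 1
print(f"Ice Lake would still lead by ~{remaining_gap:.0%}")  # ~17%
```

Of course, if the 20% figure turns out to understate Renoir's real-world gain (as a later comment argues), this gap shrinks accordingly.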
I'm also interested in sustained performance. At the end of the day, the benchmarks that Andrei and others provide are completely useless. For example, in his benchmark of the Surface Laptop 3, Ice Lake was reaching desktop Coffee Lake levels of performance. That's just nonsense, and I'm surprised he makes no reservations about it. I use an Ice Lake laptop regularly, and no way in hell does it even come close to performing as well as desktop CPUs in even everyday tasks. Even something as simple as scrolling in Spotify is more janky on laptops than on desktop CPUs (which do run at base clocks in such tasks too, mind you). SPEC does not properly represent this reality, and it's also why I'm disappointed that the 4000U chips have regressed in base clock. Base clock improvements are where the real performance is at.
@generalako AMD stated up to 20% lower overall SoC power at the same frequency (while doubling the core count). That's much different from 20% better efficiency. Performance per watt has doubled (in multi-threaded loads). I doubt the gains will be as large in single-threaded loads. And I agree, AMD would certainly have shown a direct comparison to Ice Lake if they had somehow surpassed it in absolute power consumption. But AMD could be very close.
If AMD can reach their 4.2 GHz boost, then they will beat the new Intel mobile processors. Users on the Tom's Hardware forums are having issues with Intel's high-end mobile CPUs not reaching their turbo speed even under good temps. They're reaching a max of 4 GHz, whereas the boost speed is 4.5 GHz.
Even though AMD used Intel's best Ice Lakes, which by the way are only 4-core while these new AMD chips have 8 cores, one has to believe Intel has 6- and 8-core mobile CPUs with a new architexture coming. It might be called Tiger Lake, but it would be foolish to think Intel is just sitting on their butts.
hstewart come on man.. will you EVER learn how to spell architecture correctly ???? you claim to know so much about computers.. but you STILL cant spell that?? unless you can prove it... intel may not have more then 4 cores with sunny cove, tiger lake, or any of the new late 2019 or later lakes it has out that is now.. and only you would believe other wise
Intel doesn't benefit from high frequency RAM. Ryzen benefits from higher frequency RAM up to 3800. Ryzen will for sure support RAM speeds faster than 2400.
It's funny, because Ice Lake beat Zen+ APUs by 40% in single-core, 50% in multi-core, and 40% in power efficiency, and yet people called AMD's alternatives "competitive". There's clearly a different standard applied to the two parties.
Let's also not forget that we still know very little about the upcoming Zen 2 mobile chips' efficiency. And also that Tiger Lake is coming this summer, which will for sure take back the single-core throne and possibly also iGPU (Intel is promising a 2x jump here from Ice Lake/Sunny Cove).
You really are throwing everything including the kitchen sink at generating FUD here. The only one of your numbers that represents reality is the 40% better power efficiency; the rest are clearly not comparing apples to apples, as the average ST/MT performance difference between Ice Lake and Zen+ is closer to 10-15%. That's what people mean when they say "competitive" (especially considering cost).
Tiger Lake is also most likely not coming "this summer". It's H2 2020 in Intel speak, and given Ice Lake was ~3 months later than they said it would be, I'm not prepared to assume that's anywhere near "Summer".
Yes you are. That article is exactly where I went to check the data *before I wrote my first reply*. The only specific ST test quoted has a 20% advantage to Intel, and it's in Cinebench, which is apparently Not A Real Benchmark now that AMD are predicting wins in it. So that's your one factual claim cut in half, and still nothing to back up your speculation.
The biggest wins for Intel in the article you're mentioning are in Handbrake (MT, not ST) and 7zip (unsure). 7zip is strongly affected by memory bandwidth, which is an advantage that Renoir negates by supporting LPDDR4x. The other big wins for Intel were in the web browsing tests - Renoir's changes to boost behaviour should help there, too, though we don't yet know by how much. What's absolutely certain is that none of those performance gaps will change in Intel's favour.
It's impossible to gaslight people when they have access to the same information you do - you just come out of it looking like a troll.
Dude, you really don't know how to read; that is the best answer to you. The article says this after the SPEC benchmarks: the Intel variant of the Surface Laptop 3 is ahead by 37% in the integer suite and 46% in the floating-point suite. What is so hard to believe? Zen 2 is something like 15-20% better than Zen+. Ice Lake is another easy 10% over Zen 2 in IPC. Add that up and you'll get the figure the dude you're arguing with cited.
Depends on whether they ran the bench suite at 25W or 15W. With the last Ryzen mobile chips they were smart enough to test at 25W to show strange things. Anyway, my bet is that with all cores active they will destroy the battery with huge power consumption, using the Tskin methodology to cool the SKU. It is not all gold under the sun. The idle power will be huge and the SoC availability small. No volume, no party.
You will have bad surprises when someone tests these 8-core SKUs in a real laptop constrained to 15W. Knowing they were unable to beat Intel at the same number of cores, AMD chose another sad street. They claimed great things with the present Ryzen Mobile, but real-world power measurements tell the sad truth. Basically, they didn't even try to ship a good low-idle-power 4-core/8-thread part, the most wanted in tiny devices.
@Gondalf Did you even read the article? Running Cinebench Renoir is twice as efficient as Picasso. Why do you anticipate heat issues at full load?
Even if we grant that AMD have cherry picked their single threaded benchmark to show a 4% lead, they should at least be equal or very close to equal with Ice Lake in single threaded performance.
Intel might still use less power on idle and with low loads but I suspect that moving to 7nm (in combination with AMD's other improvements) will have significantly reduced that gap.
You think it's sad that AMD have doubled the core count within the same TDP? I think it's sad that Intel finally released Sunny Cove with significant IPC improvements but don't have the capability to make high core count or high frequency chips, and probably won't for a while.
@Solarbear28 - the funniest bit is how hard Gondalf's projecting. He's predicting that AMD won't have enough Renoir chips to go around, which is precisely the situation Intel is in with their 10nm CPUs. As you also noted, the only reason they haven't pushed the core count further... is that they can't.
Since when was 4 cores 8 threads "the most wanted" in mobile? Intel didn't even bother offering one until after AMD announced the Ryzen 2000 mobile chips. Why would I limit myself to that when I could have 6/12 for less money?
I'm not going to have any "bad surprises" - I'm not predicting the second coming, here. You're the one pulling out every possible reason you can come up with for this unreleased product to be bad.
Precisely. I have never seen extra cores and threads, in the same TDP envelope, for the same amount of money, as some kind of unpleasant surprise. I guess it's only seen as a negative when it's the competitor to your favorite brand that's doing it.
Heaven forfend that the truth lie somewhere between extremes, and that not everybody who believes these graphs to be a reasonable indication of what to expect is also an idiot.
How? If by competitive you mean the 4c/8t i7-1065G7 barely competes with the 6c/6t Ryzen 5 4500U, then sure. Not to mention the worse yields and availability that come with Intel's 10nm.
Frustrating to see that the 45W parts aren't shipping without dGPUs. I'd love to see a model such as the Dell XPS 15 with a 45W CPU and no dGPU. The Navi video decoder is comparable to a dGPU's anyway, and the higher-TDP CPU is useful for development.
He is asking for laptops that use the very decent AMD iGPU instead of a dGPU. And I completely agree - I also want an ultraportable with the 3550H at 45W, but no such model was released for RR/Picasso (the just-launched Lenovo model has the H-series APU, but they configured it down to a cTDP of ~25W). We'll see if any ultraportable at 45W comes out with this new Renoir series.
I have a 2700U notebook; it was cheap. The most demanding game I wanted to run on it was Overwatch, which it does okay at 720p, but I think it's bandwidth starved.
I would upgrade to a 4700U in a heartbeat if it offers a big uptick in performance; the 3700U was a side-grade not worth bothering with.
Doesn't sound like complaining, just sounds like he's sharing his experiences. LPDDR4x should solve the bandwidth issue on Renoir. AMD is claiming 54 fps at 1080p, low settings on Overwatch with the 4800U. But if you really want a major upgrade obviously go with a dedicated GPU.
Does your notebook have dual-channel RAM? It should manage Overwatch pretty well with it, but without it would definitely struggle. If it has a free slot, throwing in another stick of RAM is your best bet for a relatively inexpensive and very effective upgrade.
Cooling is the other problem that tended to affect earlier 2700U devices. Not much to be done for that besides a paste swap.
Failing all that, hopefully the 4700U should be a good future option!
Apparently you didn't read the full article. Navi iGPUs will have to wait for the 5000 series of AMD APUs. These APUs all have Vega iGPUs. Higher clocked, fabbed at 7nm (and thus more power efficient), but still Vega.
People that want their laptop to be powerful if needed, and long-lasting if needed? My XPS 15 has an i7-7700HQ, GTX1050 and still gets a solid 7-10 hours of battery life for web browsing, document processing etc.
Many 15" "premium" laptops (e.g. MacBook Pro, Dell XPS 15...) use 45W chips yet still very much care about battery life. Having an iGPU versus only a dGPU is the difference between 6-9 hours and 2 hours of battery life.
I actually think an iGPU based on Vega, or another GPGPU-focused uArch, is better than a gaming-focused one like Navi/RDNA. That GPGPU capability could be put to good use once software catches up.
So I do hope the Ryzen 5000 series will be Arcturus-GPU based.
I am inexpert, but maybe we all are? The APU is monolithic; they can't mix and match. It needs to cover a range of needs in one form, and I suspect Lisa has her eye on applications for the APU that place great value on compute - a strong point of Vega.
Embedded processors for AI on the edge come to mind - smart cars?
What are you talking about? AMD does ship the 45W parts without dGPUs. Look at the ASUS machine: it's a Ryzen 4000 APU + an Nvidia RTX 2060 (i.e., AMD only sold them the APU by itself). It's entirely down to the OEMs whether to use an H- or U-series CPU, and whether to pair it with a dGPU, but with the H-series flagship having worse iGPU performance than the U-series model (one fewer CU), and a 45W CPU inherently requiring a cooling-system redesign, not adding a dGPU to H-series models in these very early days doesn't make much sense.
As the 14" ASUS shows, Ryzen 4000 H devices can be small enough already that you really wouldn't save much more space or weight by axing the RTX 2060 but staying w/ an H series CPU, with the only real reasons to do so being price & battery life related (the latter of which can be dealt with by simply force disabling the dGPU when not explicitly being used). IMO, to make a truly noticeable difference in real world use conditions you'd need to axe both the dGPU AND switch to a U series part, such that you could dramatically scale back the cooling system & required VRM board space, as well as the battery if further size reduction is desired.
I believe that comment was about there not being any designs with the powerful H-series APU without any kind of dGPU (AMD or Nvidia) - not about AMD bundling APU+dGPU together.
What'd be the benefit? As it is, you get to use the iGPU in low-power scenarios, then switch to the dGPU for heavy lifting. When the dGPU is enabled, the iGPU is barely doing anything (passing frames from the dGPU to your display outputs), so at most you'll be losing a couple of watts from it being there. I don't think that's enough to make a difference to clock speeds when you're already at a 45W TDP.
Super-wide vector operations are best done on GPUs or other dedicated accelerator units, as "real programs" with conditional jumps make no sense whatsoever for vector operations...
No, but this is almost completely irrelevant to what you'll realistically be doing on a laptop (...any laptop). AVX-512 already hits clock speeds and power efficiency so hard on Ice Lake that it was of pretty debatable usefulness on a mobile platform anyway, even if the instruction set had a decent amount of software support (which it most definitely doesn't). Having full-speed (one operation per clock) 256-bit AVX2 with Zen 2 is the vastly bigger and more important deal here.
Haha, more AMD bullshit with retarded cinebench on laptops.
Almost no discernible ST performance gains (and likely just doctored benchmark results), power consumption/idle still horrible as per usual AMD history of doing things, I'm so buying a new i7-10G7 or whatever laptop instead of any of this overheating unstable AMD garbage.
AMD is absolutely excelling at one thing tho, making products nobody actually needs or wants. Nobody encodes video or runs cinebench on laptops. If they do, they're using dgpu and nvenc, and AMD video coding engine is trash, too.
Pretty sure "LogitechFan" is a sockpuppet. The very idea of having a sock on the Anandtech comments is desperately sad, but then so are their posts, so...
I don't know how or why we've suddenly all gone from lauding each other over perf increases to disowning performance increases (in the same power envelope) as pointless. Truly puzzling.
Nobody who cares about video quality is using NVENC or Quicksync. Software video encoding is the only way to go. In most cases these 8 core parts are approaching NVENC and Quicksync in performance anyways.
Oh, also forgot to add that in the enthusiast market AMD is outselling Intel desktop CPUs almost 2:1. Really excelling at making something nobody wants.
Holy delusional fanboy, Batman!!! Zen 2 is dramatically more efficient in both power AND thermals than ANYTHING Intel has, with the difference compared to the latest 14nm+++ Skylake derivative nearly on the order of 2x (at least as far as power draw is concerned). 10nm Ice Lake gets things closer, but Zen 2 is still ahead to the point it can pack in 2x the core/thread count in its max config to boot.
AMD's literally giving you nearly 2x the multi-threaded performance in the exact same 15W ULV power/thermal envelope as Ice Lake, with notably better single-thread and significantly better iGPU performance thrown in for good measure. I seriously don't know what more you could have realistically been expecting to end up so damn disappointed.
> AMD's literally giving you nearly 2x the multi-threaded performance Which I (and most people) don't need. 4 fast cores > 16 shitty cores. Also: "benchmarks that benefit from multiple cores perform better with more cores. Who would have thought?!". Oh, but how about real-life usage? Ah... Oops.
> with notably better single-thread + Citation needed. 4% as per their graphs is literally measurement noise and/or some cherry-picked "benchmark" that favored AMD.
> significantly better iGPU perf Yeah but then you have to install fucking radeon drivers, lol.
Holy shit... I'm going to guess you're an extremely uneducated "gamer". If so, here's a news flash, buddy: next-gen consoles have 8 cores/16 threads (Zen 2 ones, in fact), meaning that's the MINIMUM you'll need to get max performance in next-gen titles. So do you still want to tell me that doubling multi-threaded performance in the same thermal/power envelope is pointless?
And you are seriously someone who will refuse to install AMD Adrenalin, but then go & voluntarily install the near spyware that is GeForce Experience with a smile on your face? Rofl, oh the hypocrisy *facepalm*...
Maybe stop taking it from Intel in the butt & being a total freaking dumbass for just a second.... You might want to enjoy it.
No, I don't game and I actually use my laptop for work. For which I require decent graphics (CAD) and (you guessed it) good ST performance. Neither of which AMD will deliver. Which is why my new laptop in 2020 is going to be 10th gen i7 or something newer if it shows up in Q1.
I couldn't give any less fucks about how many cores are in next gen consoles, game "developers" have been lazy fucks for years now, writing shitty code just because.
Even the Zen+ based APUs have better 3D graphics performance than Ice Lake. You won't see a night and day difference doing CAD between Ice Lake and these new AMD chips.
And 4 percent higher ST performance isn't Cinebench. I'm surprised that you claim to be a professional and yet do not know what noise is.
Except that AMD have matched Intel in single threaded performance at a lower cost. And thrown in more cores for better multitasking if you need it. So why rule out AMD already?
Then more power to you. Sure, Ice Lake probably would have better ST performance than Renoir (the magnitude of which we won't know until both are available en masse), but I'm curious what sort of CAD work you do that does not benefit from MT performance (but benefits from improved graphics). And just so you know, it is very difficult to write good multithreaded code. It sounds like the "lazy game developers" you're talking about, who write good MT code that runs well on current-gen console hardware (8 Jaguar cores), are smarter than someone who chooses to run GPU-accelerated CAD software on non-Quadro/FirePro hardware.
timecop1818 you do realize amd has better ipc than intel now.. right ??? clock for clock.. core for core.. amd is better.. the ONLY reason intel wins any benchmarks.. is because of the higher clocks.. clock the 2 at the same clocks.. and intel loses...
@ Korguz On the desktop that is true. But Sunny Cove has taken back the IPC lead in mobile. According to AMD, Renoir beats Ice Lake by 4% in a single-threaded workload, but it's 4.2 GHz vs 3.9 GHz, so Ice Lake has a small IPC advantage. However, by the time Sunny Cove comes to the desktop, AMD should be able to retake the IPC lead with Zen 3.
i bet sunny cove only has the lead still because of the clock speed... what happens when you clock both cpus at the same clock speed ?? intel vs Zen 2 based cores....
Ryzen 7 4800U has a single core turbo of 4.2 GHz versus 3.9 GHz for Ice Lake Core i7-1065G7. That's a 7.7% advantage for AMD. Yet AMD claims to have a single core performance advantage of only 4%, so Ice Lake has slightly higher IPC.
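That back-calculation is straightforward to check, using the clocks and the 4% claim cited in this thread:

```python
# Back out the implied IPC ratio from AMD's claimed 4% single-thread win
# and the boost clocks: 4.2 GHz (Ryzen 7 4800U) vs 3.9 GHz (i7-1065G7).
amd_clock, intel_clock = 4.2, 3.9
perf_ratio = 1.04                      # AMD's claimed single-thread advantage

clock_ratio = amd_clock / intel_clock  # ~1.077, a 7.7% clock advantage
ipc_ratio = perf_ratio / clock_ratio   # ~0.966: Ice Lake ~3.5% ahead in IPC

print(f"Clock advantage: {clock_ratio - 1:+.1%}, "
      f"implied IPC gap: {ipc_ratio - 1:+.1%}")
```

This assumes the 4% figure was measured at those boost clocks, which AMD's slide doesn't explicitly confirm.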
Actually, in the mobile segment, Intel has lower clocks than AMD: their high end is capped/throttled at 4 GHz while they advertise 4.5 GHz; it's all over the Tom's Hardware forums. If AMD manages to boost to 4.2 GHz, they will beat Intel by a good margin in both ST and MT.
"4 fast cores > 16 shitty cores" Except you're not getting 16 shitty cores, you're getting 8 cores that are comparable in ST performance to the 4 cores Intel is offering. Derp.
"Citation needed" You first, pal. AMD's graphs have been pretty solid lately; Intel's... not so much. You're giving out what *could* be solid reasons to ignore the AMD results, if true, but they could just as easily not be true. You have no more idea than the rest of us, but you made your mind up already. Cute!
"Yeah but then you have to install fucking radeon drivers" LOL. Ever used Intel iGPU drivers before? This old "AMD drivers suck" rhetoric is old and has no basis in truth, and using it in comparison with Intel is downright hilarious.
If you search for driver issues, you'll find lots of people with driver issues. Who'd have thought?
AMD's drivers aren't significantly more or less buggy than Nvidia's - they tend to circle each other in terms of who currently has the most irritating bugs.
Intel's drivers are a whoooole other story, which is why it was hilarious that this troll was trying to make that argument.
people keep saying AMD's drivers suck.. or Nvidia's suck.. but i have used both with no issues.. i bet if you did the same search for Nvidia.. you would find the same...
vladx knows the game they're playing. If you Google "AMD driver issues", you'll get more results than for "Nvidia driver issues" by a factor of about 3:2. One could naively assume that means AMD's drivers are worse and be done.
Google an outright derogatory line like "AMD drivers suck", though, and you'll get ~3 million results - while "Nvidia drivers suck" only gets ~700,000. Very few people with actual problems will state them in that way, so this likely reflects the vocal fanboy contingent - it also looks like their market share turned inside-out.
"Motivated individuals" who exclusively buy Nvidia have been spamming about AMD drivers being terrible for so long that they've generated the illusion of it being true - dupes and shills reinforce it by repeating it ad nauseam.
Having used both vendors for decades, I'll happily state that both have issues and both are fine 99% of the time. Neither are anywhere near as bad as Intel, which was the whole damn point of this topic offshoot in the first place.
I own both Nvidia and AMD GPUs, so I can tell you with utmost confidence that Nvidia drivers, unlike AMD drivers, are problem-free, at least for basic stuff like gaming and video decode/encode. I can't claim how they fare in other workloads; maybe there are indeed issues there.
then it must be just you vlad.. cause i have no issues either way.. and i have vid cards..that are a tad old.. still in use from both in comps..where i just need a vid card...
why not ??? sounds to me like you think you would write better drivers than AMD or Nvidia do...
Like I said, neither Intel's iGPUs nor Nvidia cards gave me any issues, while the RX480 I have went from working to glitchy and back in certain games with each driver update. Like I said, QA is pretty much non-existent at AMD.
And now the newest and hottest Radeon 5700 XT is full of gfx issues in games, so indeed the infamous reputation of AMD's drivers is well deserved to this day.
And go read on Tom's Hardware how the high-end Intel CPUs are not reaching their boost clocks; they are capped at 4 GHz instead of 4.5 GHz. AMD is also much cheaper than Intel and has better ST AND way better MT performance. Enjoy your overpriced, slow, throttling 4-core Intel.
Why are you making the arguments used against Bulldozer? It's quite well known that Zen 2 cores are very competitive against Skylake+++++++++++++, and on par with Ice Lake. Also, real life usage undoubtedly uses more and more cores - notably, Intel's stupid marketing "benchmarks" are all applications that don't even load CPU. Nobody is buying an expensive laptop to use Microsoft Edge - you can do that on a $300 potato. And based on what we know, 4% is very reasonable given desktop Zen 2 performance. Without a chipset, the IO die, and reduction of other things, Renoir is very likely able to achieve that.
And 4% was from Cinebench R20 1t. Love how Intel fanboys paraded their cinebench dong when Bulldozer was getting quashed in FP perf compared to Sandy - and now that we have AMD leading in FP performance suddenly it's the literal worst? Idk, it's a pretty good representation of compute performance of a CPU, and now that 4000 supports LPDDR4-4266 we can be assured that memory is pretty good too...
Exactly that. And the leaked benchmarks from weeks ago have been confirmed, for better or for worse -- a 4c/8t ICL is about as good as 8c/8t AMD, but way more power efficient... BUT a whole bunch of AMD-brainwashed fanatics will tell you otherwise, I'm sure.
Rofl, so you'll trust a single potentially leaked pre-release bench with absolutely no context & countless unchecked variables more than AMD's officially provided numbers? And you call ME a stupid fanboy??? O_____O
About the only way I can respond to that level of absurdity is to laugh my freaking ass off.
Their pre-release estimates for Zen *under-represented* the performance gains from 'dozer, and they did the same again from Zen to Zen+. Their marketing slides from the releases were all borne out by independent benchmarks.
That's why Ryzen 3000 CPUs are selling like hotcakes on the desktop.
So... we should trust a random leak on the internet over AMD's official slides?
AMD uses these slides as both a means to sell to us, the consumer, and to their investors. They may be cherry picked, but they're also as truthful as they can be.
There's a laundry list of things Intel have done to downplay the Zen arch.
I don't get what's with referring to timecop as "her" or "fangirl" as a form of mockery.
Call them mendacious, a troll, time-waster, FUD-spreader, dipshit, whatever - but there's nothing inherently insulting or degrading about being female.
Except we don't yet know how efficiency compares between Ice Lake and Renoir. We are all just speculating based on AMD's improvements over Picasso (and our knowledge of Zen 2 desktop parts).
Are you timecop1818's sockpuppet, or do you just follow him around agreeing with all his posts and disagreeing with his critics in the least factual manner possible?
timecop1818 yea.. and Intel is better ?? you forgetting all those quad-core chips Intel kept giving the mainstream, while telling everyone we don't need more than 4 cores for the mainstream.. or how about their 95-watt CPUs using up to 200 watts ??
Even if tile-based rendering isn't often done on laptops, what in that makes this *benchmark* irrelevant?
Benchmarks aren't always chosen based on what lots of people use, but rather by their ability to produce useful metrics - in this case multi-threaded computing when restricted only by the execution resources, power and thermals.
I swear some of the most rabid fanboys have a few words they Ctrl-F from the page and react to - without even reading the entire post or article. And that goes for both camps - I've seen it from some people, to defend AMD against some imagined slights (a few words taken out of context).
Yup. You can guarantee that whatever is actually being said, somebody will carry it off-topic to a talking point they feel more secure in - even if it's totally irrelevant and they're not actually right about that, either. Tends to be how most political discussions go, too...
Yeah, there have been lottsa scandals about doctored benchmarks. Oh wait....
AMD under Lisa, have a rep as very straight shooters. Intel only tell the truth for practice. They have zip cred ATM - a laughing stock.
"Absolutely excelling" is a tautology btw. I highly doubt a semi literate finds paid work to justify a fancy laptop.
Nobody makes a living running benchmarks either - any PC, any brand. Benchmarks mean "indicators", and are used by both sides - perhaps inappropriately, but no side can set the rules. They are simply what consumers have come to expect.
I think a large part of this stems from confusing benchmarks with real world performance (at a given task that is different from the benchmark).
Benchmarks usually highlight one aspect or side of a product, and you use multiple benchmarks to probe it from different sides.
Cinebench and other tile-based renderers are used to gauge the maximum achievable multi-threaded performance under full load, when there is no inter-thread communication required that would affect the compute efficiency. You use other benchmarks to get at the single- and lightly threaded workloads, and do some real world tests.
It's only the totality of different kinds of tests that tells (enough of) the whole story. Since reviewers have a large audience, they can only give general recommendations on the kind of workloads a product is suitable for, and the kinds it's the best or one of the best for. It's on every individual to consider the tests that are relevant to their specific use case and make their own decisions. (Also: Follow multiple review outlets that do the kind of testing that caters to your use case.)
I think the big whinging from the Intel fans comes because, when Zen first released, Cinebench was very much a best-case scenario for AMD's chips - especially Threadripper. It showed them in the best possible light, while performance elsewhere wasn't so hot.
AMD have sorted most of those issues now, but Cinebench has become an easy way to compare their product generations - and sure, it still shows AMD's thread-heavy products in their best light. But the trolls don't have new arguments because Intel don't have new products, so they're going back to the same old ones.
It's also not just up to AMD. As noted in various independent tests, developer optimisations for hw can make or break results in both games and the data centre space.
Most developers optimise for Intel, which of course does tend to behave somewhat better on Intel hw, and AMD is left to 'reach' Intel through brute force alone (which doesn't exactly make the playing field fair to begin with).
In recent tests of HEDT and data centre CPUs such as Threadripper and EPYC, it was noted that when developers optimised a program for the Zen uArch, performance improved by over 50%.
So, we need to bear in mind that in the software field, performance can vary radically, as code can be selective. We need devs to write code (or have AI write code) which executes as efficiently as possible (and makes use of everything the hw has to offer) on any hw, without discrimination.
I know that some people don't think there are people who do serious gaming and productivity work on laptops, but there are.
For such people (like me), we like high multithreaded performance which can be sustained indefinitely (especially for things such as 3d Studio Max, which easily maxes out all the cores/threads when rendering animations).
Then there's video-editing, and occasional gaming.
We also need these systems to be portable, so people like me actually like systems such as the Acer Helios 500 PH517-61, which has a desktop 2700 and Vega 56 with powerful cooling. I know it's a desktop replacement with not-so-great battery life; however, it IS highly portable (infinitely so in comparison to a desktop), and it can easily sustain maxed-out CPU/GPU performance indefinitely while remaining quiet and cool (cooler than some desktops, even).
Anyway, I like the fact that the 4800H, for example, seemingly comes within spitting distance of the 3700X (performance-wise) in just a 45 W TDP envelope, and has a capable iGP to boot, which should improve battery life to a high degree (plus, depending on which dGPU is used, it would also be a capable gaming machine).
Emphasis on single-threaded performance is not a big deal for lots of people, since the Zen uArch has been quite capable on that front. Zen 2 is another ballpark though, so it's certainly welcome that we are now going to have real mobile hw with some serious performance punch.
However, as you know, OEM execution will be key. AMD can have superior hw, but if OEMs don't include capable cooling, mismatch other internal components and cut corners, it's going to be a problem (though this is not something AMD has influence over, sadly).
I just hope OEMs stop treating mobile users as second-class citizens and do things competently, with high quality control, on AMD hw this time around.
Real-world performance should reach the advertised benchmarks/numbers if the cooling is done competently... if it's not, we'll know the OEMs are to blame.
No matter what you say, AMD will be better because they offer similar or better ST performance AND they have better technology 7nm which is more efficient and faster.
hmm. To further prove that the second-gen Ryzens constituted the tipping point for AMD's success: the Ryzen 7 2700X is currently the best-selling CPU on Amazon, with a very enticing US$159 price tag. Intel's only top-10 CPU is the i5-9600K, occupying 10th place and currently selling for US$222.99. The i9-9900K occupies the 11th position, and most of the AMD CPUs are mid-range models. Here is the complete top 10 list:
From the slide deck, looks like Athlon Gold/Silver is not the ideal replacement for A6-9220C and A4-9120C, since the TDP is at 15W instead of 6W. The A6-9220C replacement is rumored to be "Dali" and I guess it will be announced later.
Athlon Gold 3150U is just under Ryzen 3 3200U, if we go by the name and clocks. So it's not meant to replace 3200U per se.
At least there will be more diversity at the low end.
I looked at the bottom of the slides and it says "Renoir/Dali product launch press deck". So I guess Athlon Gold 3150U and Silver 3050U are in fact Dali.
I still want to see what a fanless Ryzen laptop chip could do.
If I were to guess, it's likely still 12 lanes to cut down on die space (technically 20 on die, but 8 are reserved for the iGPU as usual), but they're now PCIe 4.0, effectively doubling the bandwidth. This should make adding stuff that uses up PCIe connections, like fast networking, MUCH easier than it was with the somewhat I/O-deficient Raven Ridge & Picasso platforms (where things like 1x-lane WiFi chipsets were annoyingly common to save on lanes). Well, assuming more such expansion devices start shipping with native PCIe 4.0 support, that is.
You are right. Damn, that's a big miss. Designing laptops with just 12 available 3.0 lanes was already super tight with Raven Ridge/Picasso. And that's before you start talking about new tech like WiFi 6, for example.
A single PCIe 3.0 lane still provides about 1 GB/s in each direction. That should more than suffice for even the most recent WiFi chips. It's still a big limitation for potential Thunderbolt implementations, though, but I'm part of the "couldn't care less" market on that front - USB-C does everything I need at a much lower cost.
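For what it's worth, the per-lane figure quoted above can be derived from the spec line rates. A quick sketch (note that PCIe is full duplex, so each direction nominally gets the full per-lane rate; protocol overhead beyond the line encoding, such as TLP headers, is ignored here):

```python
# Per-lane PCIe bandwidth from first principles: line rate times
# encoding efficiency, converted from bits to bytes.
def lane_gb_per_s(gt_per_s, enc_num=128, enc_den=130):
    """Usable payload rate per lane, per direction, in GB/s."""
    return gt_per_s * enc_num / enc_den / 8  # bits -> bytes

pcie3_x1 = lane_gb_per_s(8)    # PCIe 3.0: 8 GT/s, 128b/130b encoding
pcie4_x1 = lane_gb_per_s(16)   # PCIe 4.0: 16 GT/s, 128b/130b encoding

print(f"PCIe 3.0 x1: {pcie3_x1:.3f} GB/s per direction")  # ~0.985
print(f"PCIe 4.0 x1: {pcie4_x1:.3f} GB/s per direction")  # ~1.969
```

So the "1 GB/s" rule of thumb is the full-duplex per-direction figure, not a total that gets split between read and write.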
One advantage of the mobile parts bringing up the rear is that previous Ryzen mobile parts have included features that weren’t ready when the desktop parts came out.
Hopefully, AMD is willing to talk about how these new parts compare to other members of the family. For example, one CCX or two?
Maybe now there will be some fully premium AMD laptops. Ryzen 3000 laptops were better, but still not there.
The later release is probably also due to the higher degree of integration these products require, to design laptop chassis, system board and cooling around them, never mind the firmware optimizations. Manufacturing on leading edge nodes should also get more profitable for lower cost parts as time goes on - APUs have up until now been much cheaper than the desktop CPUs.
"AMD has also adjusted the L3 amount, to 4 MB per CCX, which is half that of the consumer desktop line."
That's not correct, it's only a quarter, since consumer desktop line now has 16 MB per CCX. (Raven Ridge had half that of the desktop line, IMHO it's slightly surprising that the cut this time is that drastic.)
Ian made an error in the article. That should say "8 MB per CCX" not "4 MB". And the lack of any I/O die (aka, the memory controller & uncore is all right on die) should make up for that reduction by reducing latency, just like using just 1x CCX did w/ Raven Ridge & Picasso.
Scratch that. It seems that AMD DID actually cut the L3 by 3/4 (from 16 MB to 4 MB per CCX). That's really kinda surprising. Must mean they got an even bigger latency benefit from bringing the IMC & uncore on-die (versus the desktop's separate 12 nm I/O die) than I was originally expecting, such that the die-space savings were worth more than just cutting the L3 in half (i.e. doubling the L3 per CCX vs RR/Picasso), as was the case with prior APUs.
No mention from what I've seen. Perhaps when AMD are done producing the 4800HS 35W exclusive part for Asus they will turn those bins into a 45W 4900H? I have no idea.
it's time for Intel to spin off its fabs, just like AMD did with GlobalFoundries. AMD indeed waited too long, and its fabs became an Achilles' heel for years
AMD is using fabs that have benefited from insane amounts of money from the mobile boom. Going forward those fabs will still receive orders from those same sources. Intel's own demands do not generate enough profit for it to invest in its fabs as much as TSMC/Samsung can.
It's a valid complaint. Those less informed could incorrectly assume that Ryzen 3000 mobile APUs have Zen 2, because that's what the Ryzen 3000 desktop parts have.
it is not really a valid complaint. The numbering system just tells people that they are getting the newest chips for any particular year. Everyone who cares about cores and the difference between Zen, Zen 2, Zen 3, etc. knows that the newest APUs come after desktop and server, and thus each year carry an earlier core.
If you care you know. If you don’t know you don’t care.
Yeah, it's stupid. It's done by marketing because the 4000 series will be sold throughout the APU's lifetime, so it doesn't look outdated from a naming perspective.
Lol, are you allergic to facts & reason or something? But way to attack my points with facts & data there!... Not...
The cognitive dissonance required to continue believing that Intel is dominating mobile CPU's after today is kind of awe-inspiring. And you'd do well to realize that the vast majority of readers/commenters agree with me. It's literally you & just two others that are living in deep, DEEP denial by believing that AMD completely lied out their ass about the chip's performance. Find me a SINGLE post Ryzen, AMD provided benchmark graph that was totally outright fraudulent, & then maybe you might have some extremely shaky ground to stand on.
Otherwise "taking things w/ a grain of salt" ≠ "assume all provided benchmarks are total bullshit, despite years of historical precedence".
This isn't the company that conveniently forgot to tell people about a hidden 5HP chiller during a recent live performance demo, or arbitrarily deciding what does & doesn't count as "real world performance", remember?
If you're willing to believe AMD's pre-baked marketing slides out of hand, you're just as gullible as those willing to believe Intel's slides. Regardless of AMD's massive progress in the CPU space, mobile has remained their Achilles' heel and until they show us the pudding, there's no proof.
And Cinebench and anything from 3DMark is synthetic trash that has zero bearing on the real world and is only used by overclockers and e-peen-wavers.
Stop sounding like Intel if you are trying to say "stop believing AMD's slides".
Maxon Cinema 4D is an actual piece of software for rendering. Cinebench R20 is based on Maxon's software and gives a fair, repeatable measure of rendering performance. While I think they should use Blender more, it's also unfair to claim "it's trash data".
Also, fun fact: Before Zen continuously beat Intel in it, Intel had 0 issues with showing how well their CPUs performed using Cinebench. Intel had no problem using benchmarks to give performance data, but suddenly, when AMD's winning more and more benchmarks... Intel is concerned over "real world"?
Would you rather Intel-sponsored reviews and benchmarks be used? How is that any better? A lot of Intel's latest slides have been using benchmarks they sponsored or helped create... doesn't that rub you the wrong way?
And also, I find it funny they recommended benchmarking MS Word.
Who's talking about them out-of-hand? He specifically pointed out that taking them to be unreliable isn't the same as disregarding them entirely, as LogitechFan and co are. He also asked if anyone had any post-Ryzen charts from AMD that misrepresented their products - as far as I'm aware no such thing exists.
Mobile has indeed remained their Achilles' heel, but they've already made tremendous progress in that space. More has been needed, and I agree that their relative silence on power efficiency isn't encouraging.
Cinebench and 3DMark are useful for apples-to-apples comparisons. You're right that they don't represent most workloads well, but no single benchmark does. What they do do is allow you to compare generational improvements. This is basic stuff that's repeated at the start of pretty much every technical review, so it's a bit weird that you're pushing the "synthetic trash" angle here - especially as CineBench isn't synthetic at all.
I feel like this comment section would be markedly improved if people just stopped taking the precise opposite side in every discussion to someone they think is a "fanboy".
If all the fanboys were removed, this place would be mostly deserted.
Not only do they post a lot, and repeat what they've said multiple times per discussion, and in comment sections of multiple articles, they also incite other fanboys (and reasonable people) to respond and argue with them... :-)
To think that political discourse is now influenced by the same forces that brought us the raging flame wars over the Red Ring of Death and Bumpgate... D:
Relying on ableist slurs to make your point is often a good sign that you're overestimating your own intelligence. Agreeing with an obvious troll tends to seal that deal.
Looking forward to the first reviews! I am especially curious about head-to-head comparisons of the 45 W 6C/12T i7 and the 8C/16T Ryzen parts; AMD went for more cores but less cache (Intel's i7s have 12 MB). Let the battle begin. Hope to see a bit of a price war here soon, as I am due for a new laptop.
Depends on whether you're after CPU or GPU performance.
CPU performance should be a rout, but between the reduced CU count for the Vega GPU and the inability to use LPDDR4X on desktop, I think the GPU side of things might be closer than you might think.
I don't think there will be desktop APUs anytime soon. The product stack doesn't leave an area where the APU would make a lot of money, unless it's an 8C at $200+. 7 nm is getting cheaper, but it's still expensive.
It would be odd for them not to. They have had trouble competing in the business desktop space due to their lack of an IGP in the standard Ryzen chips, and these chips can address that market very well indeed.
Consumers are served by the existing 3000-series Ryzen CPUs as well as a few low-end 4000-series APUs (up to 4c/8t, maybe 6c/6t).
Pro series gets all the same, plus the high-end APUs (up to 8c/16t). Or decide that the APU graphics will never be enough for serious use, and have the higher end models include a basic 2 CU iGPU for 2D duties only.
I don't think the 6 and 8 core APUs would ever be as inexpensive as the current ones. Once a consumer is looking at spending $300+ on an APU, they probably wouldn't be satisfied with the iGPU performance.
I'm not sure on the price side of things. If the die size estimate of 150mm^2 is accurate, that's almost exactly 3/4 the size of Raven Ridge/Picasso. We know that 7nm is a more costly node than 14/12nm, but I wouldn't have thought it would be to such an extent that a much smaller chip would wind up being significantly more expensive at retail.
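As a rough illustration of how far the smaller die goes toward offsetting the pricier node, here is the classic dies-per-wafer approximation applied to the two figures above (the 150 mm² Renoir number is the estimate from the comment; ~210 mm² for Raven Ridge/Picasso is the commonly cited figure; yield, scribe lines and actual wafer pricing are all ignored, so this only illustrates the area scaling):

```python
import math

# Classic approximation for candidate dies on a round wafer:
# wafer area / die area, minus an edge-loss correction term.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

renoir = dies_per_wafer(150)    # ~150 mm^2 estimate, 7 nm
picasso = dies_per_wafer(210)   # ~210 mm^2, 12 nm

print(renoir, picasso, f"{renoir / picasso - 1:.0%} more dies per wafer")
```

Roughly 40%+ more candidate dies per wafer would absorb a fair chunk of the 7 nm wafer premium, which supports the point that retail prices needn't jump dramatically.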
A 90% advantage with 16 threads in Cinebench, on the 1st run. If it is really 15 W, expect the next run to be 4% slower. Even on paper, AMD won't be as efficient as Intel, due to both higher idle power consumption AND higher display power use - it doesn't have Intel's 1 W display panel. And that's leaving Intel's Project Athena designs aside, which are more optimized, just to mention. It is very probably not so good in power use; you see, 2 times more EFFICIENT won't translate into 2 times more battery life. Intel laptops were beating AMD ones greatly in the likes of the Microsoft Surface Laptop, which had both AMD and Intel versions. "2 times MORE efficient", you say, without being that optimized.. no 1 W display, which matters.... no Tiger Lake, which comes soon... Yeah, I expect Intel to stay faster & ALWAYS BEST QUALITY...
We don't know anything about Renoir's idle power consumption other than that it has been improved over Picasso. Also, why would you expect AMD's Cinebench performance to degrade on a second run more than Intel's? Intel (more than AMD) have a history of pushing mobile CPUs to many times their TDP for short bursts.
No doubt Intel's optimizations within Project Athena will play a role. But it's way too early to make the kind of assumptions you are making. Also, Tiger Lake will probably not be available until September/October at the earliest; OEMs are still launching Comet Lake products, with availability of some not until March.
In the past, Cinebench benchmarks showed that when AMD scores high, it degrades much more than Intel - by as much as half on the 3rd run, for example... they exist on the web... AMD says it is limited & doesn't provide sustainable performance at 15 W. It is surely so; even Intel's 8-core 9th-gen H-series i9 is faster than AMD's top part at both 45 W & 16 threads, so AMD's isn't cooler...
Absolute bollocks. How much the system's performance degrades by depends entirely on the cooling system used and its performance parameters. Some Intel laptops have a steady decline from run to run, some drop after the first run and stay in the basement - and some don't degrade much at all. The exact same is true for AMD notebooks, but they tend to be used in lower-quality designs with inferior cooling solutions (mostly looking at HP and Lenovo here).
LMFAO. Go see how the Tom's Hardware forums are flooded with issues of Intel's high-end mobile CPUs not reaching their turbo frequency - they are stuck at 4 GHz while advertising 4.5 GHz. Why would AMD's 3rd Cinebench run be degraded by half? Your post makes 0 sense. Post proof of your claim, with temperatures; otherwise stop wasting bandwidth with your BS trash. Where do you see the i9 surpassing the 4800H? The 4800H is faster than the 9750H by 12%, and slightly faster, by 2%, than the i9-9980HK (8 cores, 5 GHz). The i9 is way more expensive and is meant to compete with AMD's high-end CPUs like the 4900H and above. The 4800HS is 4.2 GHz at 35 W, which tells us that AMD could make a mobile chip with higher clocks at 45 W, say 4.7 GHz, name it 4900H, and surpass Intel's 10th-gen i9.
Reading the comments section on an AMD article has been cancer-inducing for a while, even on AnandTech (sadly). Rarely have I seen such fervor and hate as what these so-called fans can spill out. What's really painful is that they accuse others of the very things they do / believe.
'S an amazing number of folks that seem to have registered an account just to get their flame on today. CES is such an exciting time for the fanchildren.
These are the times I kinda wish people had to put their name and face out there for all to see. People tend to be a little more civilized when they have to put their name on it.
Agreed. It's still a little better than WCCFtech, though. You'll see the same troll post 15-20 "unique" comments on one article, plus hundreds of bitchy replies :|
I stopped using WCCFtech because of its idiotic and useless comments; I prefer Tom's Hardware and AnandTech. Whenever WCCFtech puts up a new article, you instantly see tons of useless, stupid spam comments.
Exactly that. These days it feels less like a tech site and more like a recruiting / trolling ground for the worst parts of the internet. Even the worst comments here pale by comparison - probably because it's harder to do instantaneous responses and impossible to post images here.
Do you know why the Zephyrus G14 will use Nvidia discrete graphics? Will there be issues when switching between iGPU and dGPU? Will both Nvidia and AMD drivers need to be installed? The iGPU of the 4800H should perform close to the (discrete, non-mobile) 1050. I understand that the 2060 is noticeably more powerful, but I wonder why no "H" CPU will appear in a laptop without a discrete graphics card?
The iGPU of the 4800H will only perform similar to the current Vega 11, because the Renoir APU has only 8 Vega CUs, albeit at higher clocks. So, comparable to an MX230, or maybe an MX250 if coupled with DDR4-3200 or quad-channel LPDDR4, but nowhere close to a 1050.
There are already a bunch of laptops with AMD APUs and Nvidia GPUs - the 3750H and 1660Ti seem to be a popular combination.
AFAIK: Yes, you need both AMD and Nvidia drivers. No, there aren't any more issues than there are with an Nvidia dGPU and an Intel iGPU - i.e. it's mostly fine, but every once in a while the Nvidia driver forgets to power up the dGPU fully, or forgets to power it down after you're done.
I think they imagine that if you're buying a 45 W part, you're looking for power, not battery life. So might as well go all out. I'm sure there will be one without a dGPU sooner or later, but none at release.
And I think they chose Nvidia over their own Radeon cards because Nvidia still wins at performance per watt, and the bigger chip probably presents easier thermals to cool.
Obviously the OEM controls the overall laptop design (cooling, thermal insulation, component placement for comfort), but for the APU itself, the firmware (included in the BIOS/UEFI) simply tells it to run at whatever cTDP setting is required - a 10-25 W range is available for the 4000 U-series.
This generation should bury water cooling, and 15 W should allow even passive cooling in thin designs. I'd wager someone has the gumption to slap one of these in a phone. Intel has to be squirming in their chairs right now.
You need to be under 10 W for passive cooling in a small and crowded compartment such as a laptop's. Phones have a 5 W TDP. 15 W is perfect for a laptop; I find 45 W a lot, and it causes overheating, like my 4820HQ reaching 90C.
These are very nice products from AMD and I am pretty sure they will sell and raise AMD's image in the notebook market. Still, we need to see where they stand, because their handicap in mobile was much bigger than on desktop, and mobile is a different market where efficiency is paramount. The H series will sell well with gamers - maybe not as the definitive choice for gamers (lower max boost speeds mean lower gaming perf), but we need to see reviews. The U series is also interesting, but here it all depends on whether they can get idle power consumption under control. Thin-and-light laptops are called that for a reason. People buy them to use during travel, and battery life is paramount - not threads, not even absolute performance, but battery life and small things like sleep battery drain and sleep on/off speed, things AMD is not usually great with (not even Intel is, but they did improve massively with Ice Lake).
Intel's high-end mobile CPUs are capped at 4 GHz and do not reach 4.5 GHz; users at Tom's Hardware are reporting this issue. Let's see if AMD manages to reach 4.2 GHz as they claim; if they do, they will also be faster in single-threaded loads than the 10th-gen i9.
It's a real bugger that their product cycle aligns so poorly with AMD's products...
That said, they're still selling a Surface Book 2 with a GTX 1060 in it. Maybe they've been waiting to build the Surface Book 3 with a 4800U and an RX 5600M in it... we can dream, eh?
Honestly? I mostly like the Surface Book for the display. There are not many devices out there with such a high quality 3:2 panel, let alone devices that I could potentially use for gaming, for photo editing with a stylus, *and* as a touch-screen content consumption tablet.
Yes, it is strange - why is Microsoft not waiting for this?? Hey!! It is clocked faster than the current Ryzen Mobile, it has two times the cores, faaaaar more IPC, and a faster GPU. On average it is 3X the current Ryzen Mobile, faster than a Ryzen desktop dropped to 15W, and faster than an Epyc module......... with only one node step. More or less it is done on 5nm but AMD doesn't know this. Too good to be true?? :)
As Intel currently has a 1.1 GHz 6-core part at 15 watts to compete with a 1.8 GHz 8-core part, AMD has, at least for the moment, become the only choice in laptops. That's where Intel was ahead, so it's an existential threat in my opinion. Of course, in a few months, Intel will have 10nm+ out, but now they will have more work to do catching up.
Are you sure AMD is capable of sustaining this speed for more than a few minutes before dropping to 0.8 GHz on 8 cores?? You know what their strategy is; their old slides are pretty clear. The other strange thing is that suddenly Zen 2 has gained more than 10% in single-core performance :), pretty funny, eh?? To be noted, they now have only 1MB of L3 per core; this is not a good sign for real-world performance. Looking at the claimed results, this core should be inside Epyc....but it is not. My suspicion is there is a lot of marketing at work and a generous reference design that cools a lot of heat.
Are you sure?? With only 8MB of L3 instead of 32MB, the performance will be idiotic outside their fake slides. There isn't enough L3 to meet the L2's needs.
Why do you think Zen 2 needs mountains of cache to perform well? The 8-core AMD has just as much total cache as the 6-core i7. Do you label everything you don't like as fake? Is it possible to believe that Cinebench doesn't tell us everything about a processor, while also believing that AMD didn't fake their results? Did I use enough question marks?
solarbear28.. why ?? because gondalf thinks the majority of zen 2's ipc gain is from its cache sizes.. if that was the case.. wouldn't intel have increased its caches as well ? he is an intel troll.. and will do what it takes to make intel look good.. and amd look bad. plain and simple... he just can't handle intel not being the best anymore...
It's a mobile chip, not a desktop CPU, and it uses a monolithic die, which means there's no latency penalty from having multiple chiplets. The 8MB cache won't stop these 45W chips from reaching their full potential.
Why hasn't AMD launched desktop Renoir? I think that is the only segment where Intel still has a lead, and AMD should be able to take it easily. Maybe this segment is too small and AMD doesn't care.
gondalf.. ok fine. then explain WHY intel didn't increase their cache sizes more than they have ?? lets see what bs you can come up with to explain that...
IO is on die now. The whole point of increasing cache sizes was to keep needed data close to the cores to avoid the latency penalty. Not needed for this monolithic die.
It's easy to post good numbers running at 25W plus chipset power. They did the same with Ryzen Mobile 3000. But reviews told the truth. At a core level, right now Intel is well ahead. Zen 2 is a little obsolete after Sunny Cove.
gondalf.. shut up already.. INTEL is the one that NEEDS clock speed now... not amd. and cache sizes do not increase ipc as much as you keep trying to claim...
Ah, I see the idea here. Everything that AMD's CPUs need to perform well is what Intel's CPUs have now; when Intel have different things, then those will be the things AMD need to perform well, but none of the things AMD ever has will ever allow their CPUs to perform well and Intel never need to learn anything from them.
Best guess: they're capacity constrained and mobile is a bigger target for them - they need to start getting design wins ASAP.
There may also be other factors. The desktop APUs are probably a lesser bin. If they're getting good yields of chips that make the grade for mobile, they may not yet have enough "inferior" chips around to do a proper launch for the desktop APU. In that case we should expect to see it later when they've built up enough of them, a bit like the 5600 XT.
I am a bit amazed at the heated passion for or against the Ryzen 4000 mobile APUs. Personally, I think it's great that we finally have real competition in the ultraportable and performance laptop space, and look forward to the first head-to-head reviews. Since I neither own shares nor work for AMD or Intel, all I care about is who can sell me the biggest bang for my bucks. If that ends up being AMD, even better, as it keeps the competition alive.
There is more to a laptop than processor. Significantly, there has been a deluge of classy amd based models.
No more OEM crap-configuration models and foot-dragging to oblige Intel. Intel's stuff is all over Asus's front lawn, e.g.
They have had it up to here w/ intel's years of reiterated & rebroken promises on 10nm (which they have invested heavily in and relied on for their roadmaps).
The final straw has been their egocentric hubris in robbing supply of more humble CPUs, in a bid to win the unwinnable battle of cores, using ever bigger & lower-yield chips.
mosesman cribs it well imo "Summary Better than expected.
The Ryzen 4000 mobile 7nm will ramp in 1Q20 (1 quarter ahead of our view), with specs including 8-cores that beat competitors single thread performance and blow them away in all other metrics, including performance per watt. There will be over 100 laptop designs for 2020 in all categories (Thin & Light, gaming/creator, pro) and AMD is set to replicate desktop success in 2019 in laptops. Intel has limited options, given their 14nm power and die size issues, and 10nm being a broken node at the moment, in our opinion."
Thanks Ian! Question: do I see this correctly? One could, in theory, run a 4800U chip just as hard and fast as the 4800H, provided the cooling solution can handle the higher TDP? If so, I would love to see some vendors offering this variant. Being able to clock down to 1800 MHz and its low power envelope and yet turbo up to the same 4.2 GHz Max as the 4800H would be very attractive!
On a related note, did you hear whether SmartShift is indeed ready for prime time? The ability to switch seamlessly between iGPU and dGPU is essential to make a performance laptop a real daily driver, at least for my situation. A dGPU just eats too much battery when unplugged.
AMD states that the U series chips can be configured to run in 25W mode rather than the standard 15W (if the OEM supports it and provides enough cooling) for better sustained performance. Obviously this still wouldn't match the sustained performance of a 45W H series part.
I would think there are at least some binning differences between the U series and H series that allow the H series to run better at high frequencies (or allow the U series to run more efficiently at lower frequencies), however I don't know if it's anything more than that.
Usually there's a bit of a trade-off between being able to operate well at low voltages and being able to operate well with higher clocks and higher voltages - so it'd make sense that a chip binned for 15W operation might not run as well at 45W as a "genuine" 45W chip, even though they're the same design.
I would normally agree, however, the specs list the 4800U with a top turbo and all other specs (minus the one CU for graphics in the H) identical to the 4800H. That, and the in fact lower top speed of the GPU in H chips, leads me to believe that the H line APUs are actually lower-binned U chips. I hope AT or other channels can play with a 4800U and try pushing the thermal envelope; unfortunately, that'd void any warranty on the laptop.
They will definitely be lower binned chips - I'm not sure I was clear, so to reiterate, sometimes the "lower" bins tolerate voltage better than the "higher" ones, so even though they don't run as efficiently as the best bins they can run a little faster when given more juice.
This is just speculation, though - I don't know enough about the properties of the 7nm process :)
I'm a little confused why the 4700U isn't multi-threaded. It doesn't use much power does it? But it doubles multi-threaded performance. I understand if AMD wants to gimp the low end parts, but it seems strange to do on a Ryzen 7 chip.
I think you really are confused. Simultaneous multi-threading doesn't double the compute power.
The two threads sharing the same physical core can only run concurrently whenever they happen to require different parts of the core. Other times they end up waiting for each other. The total effect is that SMT will increase performance by around +25-30%.
8 cores with SMT will be equivalent to around 10 real cores.
This is also the reason why an 8c/8t (i7-9700K) is considered largely equivalent to a 6c/12t (~7.5 physical cores' worth of computational power), not accounting for differences in IPC, frequency or overclockability.
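The rule-of-thumb arithmetic above, sketched in Python (the 25% uplift is the assumed low-end figure from the comment, not a measurement):

```python
# Rough SMT scaling rule of thumb: a second thread per core adds ~25-30%.
SMT_UPLIFT = 0.25  # assumed low-end estimate, not a measured value

def effective_cores(physical_cores: int, smt: bool) -> float:
    """Approximate 'real core equivalents' for a given configuration."""
    return physical_cores * (1 + SMT_UPLIFT) if smt else float(physical_cores)

print(effective_cores(8, smt=True))   # 8c/16t -> 10.0
print(effective_cores(6, smt=True))   # 6c/12t -> 7.5
print(effective_cores(8, smt=False))  # 8c/8t  -> 8.0
```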
Factor in that doing more work - you have double the number of threads and get +25% more work done - will consume more power. Running such a CPU at full tilt will require one or more of:
- lower frequencies
- lower voltages
- better power delivery and cooling; or
- better-binned silicon (so you can run the same frequencies with lower voltage and stay within the required power usage).
I would postulate that the 4800U is better binned, so it can stay within the power and thermal envelope even with double the threads, without affecting frequencies too much.
Of course, it could also be that they're *not* binned any differently, or differently enough that it would make much of a difference; The worst quality APU silicon with all 8 CPU cores functional would probably be stockpiled for the future desktop products, or used now for the lower core count models (just with 2 - 4 cores fused off).
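A minimal sketch of why lower voltage at the same clock saves power, using the first-order CMOS dynamic-power relation P ≈ C·f·V² (the constant and both operating points below are made up purely for illustration):

```python
# First-order CMOS dynamic power model: P = C * f * V^2.
# The constant and voltages are illustrative, not real chip parameters.
def dynamic_power(c: float, freq_ghz: float, volts: float) -> float:
    return c * freq_ghz * volts ** 2

base = dynamic_power(10.0, 1.8, 0.90)        # hypothetical 15W-class operating point
better_bin = dynamic_power(10.0, 1.8, 0.85)  # same clock, 50 mV lower voltage
print(round(better_bin / base, 3))  # 0.892 -> ~11% less power at the same frequency
```

That headroom is what lets a better bin carry the extra SMT threads without dropping clocks.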
Surface Pro 8: 8C/16T APU, 15W, LPDDR4x, fanless, one can dream. I'm quietly excited about these new APUs but performance numbers take a back seat to energy efficiency for mobile parts. AMD will have trounced Intel if they can come up with a part for the Surface Pro tablet.
It is possible, and has been done, even in ultraportables - look at the Acer Alpha and similar. 37Wh battery, runtime at full load of 112 minutes = ~20W power use - cooled fanless. And this is in a very limited 12" tablet form factor. https://www.notebookcheck.net/Acer-Aspire-Switch-1...
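The runtime figure above checks out as simple arithmetic (numbers taken from the comment):

```python
# Average draw = battery capacity / runtime.
battery_wh = 37.0    # Wh, from the comment
runtime_min = 112.0  # minutes at full load, from the comment

avg_draw_w = battery_wh / (runtime_min / 60)
print(round(avg_draw_w, 1))  # 19.8 -> roughly the quoted 20 W
```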
Maybe AMD does not see the use for it on the 4800H, especially if Renoir's LPDDR4X support is dual channel (lower bandwidth than DDR4), not quad channel. Although that is exactly what I would want (a 13" ultraportable with an H-series chip and LPDDR4, if quad channel).
AMD has been crushing Intel in all other categories but wasn't able to in mobile processors; specifically, single-thread performance and power efficiency were behind Intel's.
Now that both of these issues have been resolved, as claimed by AMD, this should be a great thing for customers, as they have a choice between AMD and Intel in laptops.
There is no way Intel can match AMD even in another year, AMD are on 7nm EUV while Intel are still on 14nm in desktop and 10nm in mobile. Also, AMD ST performance is equally or slightly better than Intel 10nm while MT performance is more than double. AMD integrated graphics are also waay superior to Intel's.
Intel's mobile high-end CPUs have a big issue with boost clocks: they are capped to 4GHz instead of the advertised 4.5GHz; it's all over the place in Tomshardware forums. If AMD can really hit its 4.2GHz boost, then they will significantly surpass Intel's 10nm in ST performance and more than 250% in MT.
ST performance on AMD slides is a fake. We already know Sunny Cove badly beats Zen 2 for desktop per GHz, with fast DRAMs. Bet they tested the worst Intel laptop available against their generous and well-cooled reference design.
Where are you reading that Sunny Cove is beating Zen 2 on desktop? There aren't any benchmarks about it yet, or any engineering sample. Even then, by the time Sunny Cove is released on desktop, AMD would already have released Zen 3 in 2021.
Hey, you are on AnandTech. There is a full article about the Sunny Cove core in Mobile Ice Lake (for sale now). A good session of SPEC (15W stable) showed the clear per-GHz superiority of Intel's Sunny Cove with mobile memory over desktop Zen 2 with fast DRAMs and gigantic L3. Sunny Cove has been out since the middle of last year; now it is time for Willow Cove in the middle of this year, just to stay well ahead of Zen 3 a quarter before.
You can't compare mobile vs desktop, that doesn't make any sense. What did you smoke? Clock for clock, Zen 2 beats Sunny Cove, as Zen 2 has better IPC than any current Intel CPU. Intel's 5GHz 9900KS is comparable to a Zen 2 at 4GHz in ST performance.
Likely you mean 25W?? I think yes. I don't see competition here; simply, 8 cores do not fit in 15W at these clock speeds. In fact they say "up to" about clock speeds, turbo included. My bet: we will never see an 8-core Ryzen Mobile 4000 in a thin laptop. In fact what was announced right now is 25W-class. The funny thing is AMD does not have a fast-clocked 4-core/8-thread part, a pretty suicidal thing in mobile.
The 8-core is a 15W chip @1.8GHz with a 4.2GHz Turbo. Of course it won't be 15W at Turbo, but it's worse for Intel, just like their 300W desktop CPUs: the 9900KS will use up to 280W under Turbo, 350W when overclocked, which is just insane. The 6-core from AMD is 2.1GHz @15W with a Turbo boost of 4GHz.
Intel Turbo boost is throttled to 4Ghz instead of the advertised 4.5Ghz. Tomshardware forums are flooded with Intel throttling issues on mobile. If AMD really reaches 4.2Ghz Turbo then it will be faster in ST than the best Intel mobile CPU and waay faster in MT.
I think the better strategy is to lower the SoC die size as much as they can, crank up the clock speed seriously, and try to grab more and more market share from Intel in a situation of tight 7nm supply. This is the wrong SoC for AMD's needs in the current timeframe.
1) The SoC die size is pretty small already. 80+% yields will do fine. 2) The clock speed won't go higher without a different process or a different architecture. They're not in a position to radically change either. 3) I trust their perspective on what SoC they need better than yours.
gondalf.. post links to your bs.. or shut up.. most of what you say.. is bs.. AND it looks like you are comparing sunny cove to zen+ trying to pass it off as zen 2...the ONLY thing that is fake.. is you
For what I use my work laptop for - a full desktop replacement driving three screens - these new processors look awesome. Sure, we are not getting full-fat 8/16, but this is a genuine desktop replacement for the mainstream user. There is a big difference between 100+ designs and my ability to order one, though... I hope we see good supply, but like I hope most buyers will do, I will set a budget, look at the models that are actually available for that budget, then look at good real-world reviews of those machines, plus, if at all possible, get actual hands-on time with the one that looks best before buying. We all know that OEM limitations on memory speed and TDP will make far more difference than +-1-10% theoretical performance in an ideal scenario. Right now these CPUs look awesome; let's see them in the wild and then compare!
If the 4800H can really reach 4.2Ghz, then it will be faster than the i9 10th gen in single threaded loads and way faster in multithreaded loads. Many users on Tomshardware are reporting that their mobile high end Intel CPUs are capped at 4Ghz and not reaching the advertised 4.5Ghz.
Let's see if AMD can reach their turbo speeds when benchmarks appear this year.
My thread on Tomshardware was about the 8750H not reaching 4.1GHz.
What I mean is that those Intel 45W chips never reach their 4.5GHz Turbo speed because temps don't allow it; on reaching 98C the CPU starts throttling. My 4820HQ reaches 98C during gaming with the fan at max, with Grizzly Kryonaut thermal paste and a coolpad.
In other words, if AMD doesn't throttle and allows 4.2Ghz, it would beat Intel's i9 10th gen. My only AMD laptop was a Turion X2 and temps weren't that high. With 7nm its possible to hit 4.2Ghz Turbo without throttling I hope.
i find it funny.. that in order to get the notebook to work as it was intended... one has to spend hours and hours tweaking and researching so it will.. maybe that cpu just shouldn't have been put in a notebook like that ?
Zizo007 thats not what i mean.. 28w, 45w, whatever watt... you still shouldn't have to spend hours tweaking and playing with settings to get it to work as it is supposed to.. 45w isn't too much for a notebook, IF the cooling system for it.. is done right. obviously, the cooling for these chips isn't...
Or rather, laptop manufacturers should do a better job of designing laptop cooling rather than cutting corners like they usually do. This is especially accurate for AMD laptops, but Intel suffers in this area too (along with the fact that people had to delid their CPUs to get them performing as they should have).
Thanks for the share. My 17" gaming laptop has a 6700HQ with a maximum boost of 3.5Ghz - it will hit 80+C when gaming, and that's with a ~125mv undervolt. Considering that the underlying architecture hasn't changed much from 6th to "10th" gen, it makes sense that they can't actually hit their supposed maximum boost speeds.
"Support for Infinity Fabric Link GPU interconnect technology – With up to 84GB/s per direction low-latency peer-to-peer memory access[1], the scalable GPU interconnect technology enables GPU-to-GPU communications up to 5X faster than PCIe® Gen 3 interconnect speeds[2]"
Source: Radeon Pro Vega II Duo
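The "5X faster than PCIe Gen 3" claim lines up with simple bandwidth math, assuming the comparison is against a full x16 link at the nominal PCIe 3.0 rate:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> GB/s per lane.
pcie3_lane_gbs = 8 * 128 / 130 / 8  # ~0.985 GB/s
x16_bw = 16 * pcie3_lane_gbs        # ~15.75 GB/s per direction
if_link_bw = 84.0                   # GB/s per direction, from AMD's footnote
print(round(if_link_bw / x16_bw, 2))  # 5.33 -> matches the "up to 5X" marketing figure
```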
if it was only PCIe 3.0 on mobile systems it's still plenty quick, but there is already IF2; it has reached its 2nd gen. just imagine the kind of power available in the APU alone.. for a mobile GPU + IF??
Is it IF2? And even then the die is not like chiplets. guess what?? yup.. SmartShift.
@Ian: I may have missed it among the 380+ comments, but isn't this chip, especially the H line, also the dress rehearsal for the CPU part of the custom APUs that will power the PS5 and the next Xbox? Any rumors from AMD? The APUs must be in production already if SONY and MS want to ship the new consoles in mid/late 2020.
Gimme an 8C/16T desktop with integrated graphics, even if it's the lowest of the low, and my next build would use it. Just need basic graphics as a server, but the current desktop line is limited to 4C/8T with IGPU.
One possible error here. The mention of the 8 cores being broken into two 4-core CCXes doesn't apply with Zen 2, since AMD went to 6- and 8-core CCX units for the desktop. This means that for the laptop chips, we may be looking at a single 8-core CCX linked to the GPU instead of two 4-core CCXes linked to the GPU.
We saw that the cpu speed to 12% as well. Now that work shifted to the mobile dgpu having in mind a 12%+ gap when the apu shared the cpu+igpu on if?? Which gen?? If it speed to the mobile dgpu the same shared I want222 smartshift...o.k...but...is it 12% of for a link if almost 100GB/sec?? Lol
They missed an important trick here: if they allowed the 4800U to clock up to 45 watts, it would give those who require CPU performance and occasional gaming, along with long battery life, a good alternative in the market.
I doubt AMD or the OEMs would allow that, since increasing TDP that far beyond spec could be dangerous to the chip, not to mention the inadequate cooling the OEMs will offer on their products. Allowing it outright when the system couldn't handle the increased temperature would just be inviting a lawsuit.
A proper desktop APU would have:
1) a multi-chip design similar to Ryzen, with separate chips for CPU and GPU
2) a Navi-based GPU part, similar to the 5500M
3) 4 channels of RAM, preferably DDR5, to feed that GPU. Would be great if the 4 SODIMM slots for it were on the package (4 sides) to reduce latency.
4) COMBINATION of GPUs in case another Navi-based GPU is used in the PCIe x16 slot; for example, adding a 5500 should double the performance.
It would make sense to have the memory controller chip also having all the L3 cache used by both CPU and GPU.
azfacea - Monday, January 6, 2020 - link
RIP intel
shabby - Monday, January 6, 2020 - link
I dunno... ice lake is pretty competitive.
Cooe - Monday, January 6, 2020 - link
Are you unable to read basic graphs or something? Ryzen 4000 (Ryzen 7 4800U to be specific) beats the best Ice Lake chip Intel has (i7-1065G7) in literally every single metric (+4% in single-thread, +90% in multi-thread, & +28% in iGPU). Perhaps go back to elementary school lol; it seems you need it more than you might think.
SolarBear28 - Monday, January 6, 2020 - link
The only metric that remains is battery life. If Renoir can match Ice Lake on efficiency (specifically idle power draw) then AMD should finally get some premium design wins.
shabby - Monday, January 6, 2020 - link
If they could match it, I think they would make a fancy graph about it, but nowhere in the article was battery life mentioned, which makes me think it'll still be behind Intel.
Cooe - Monday, January 6, 2020 - link
I can't buy that with the absolutely INSANE power efficiency advantage 7nm Zen 2 based Ryzen 3000 CPUs have on desktop (nearly 2x vs 14nm CL-R).

My guess is that it's only a definitive win against the Skylake based machines (aka Coffee Lake Refresh Mobile & Comet Lake Mobile), with 10nm Ice Lake able to put up a MUCH closer (and less flattering) fight. And unless you're winning in a category across the board like they are with the other parameters, perhaps it is best not to bring any direct attention to it at all.
BillyONeal - Monday, January 6, 2020 - link
They don't have a power efficiency advantage on desktop for idle power: the chipset alone wants 11-15W. Most laptops spend most of their time idle; it's the low end of perf AMD needs to demonstrate wins in to be viable in notebooks (except giant gaming monsters with terrible battery life anyway).

I'm not saying they can't do it, I'm saying they haven't proven that they have done it yet.
Hul8 - Monday, January 6, 2020 - link
- mobile Ryzen 4000 laptops only support PCIe Gen 3
- Ryzen APUs have traditionally had 8 fewer PCIe links than regular CPUs; makes sense for mobile use, since it will cut down die area and power consumption
- on desktop, the I/O dies are manufactured on 12/14 nm nodes and are the same design as the X570 chipset; since the mobile Ryzen is a monolithic 7 nm die, this power consumption should be reduced
- Ryzen CPUs (even on desktop) are SoCs, with a limited number of integrated SATA and USB controllers, so they don't necessarily need a chipset at all
- even if a chipset were to be used, it could be much smaller than the desktop X570
What I'm getting at is that basing power estimates on the vastly different desktop design (with completely different design goals) doesn't help here.
Cooe - Monday, January 6, 2020 - link
Source for Ryzen 4000 being PCIe 3.0 only, or I call total BS on that one.
Hul8 - Monday, January 6, 2020 - link
@Cooe Simply searching in this article (for "PCIe" or something) would have sufficed.

Here's the relevant string for you: "It should be noted that for both U-series and H-series, the chips only support PCIe 3.0".
Cooe - Monday, January 6, 2020 - link
Damn, you are right. That's a big miss IMO.
Spunjji - Tuesday, January 7, 2020 - link
Why is not having PCIe 4.0 in a laptop a big miss? What would it be used for?
Orange_Swan - Tuesday, January 7, 2020 - link
thunderbolt 4?, eGPUs, ultra-high-speed external storage & networking.
BurntMyBacon - Wednesday, January 8, 2020 - link
Thunderbolt 4 - Is this even out yet, or just a spec? Also, this is an Intel spec and Intel doesn't have PCIe 4.0 either, so ...

eGPUs - Given that eGPUs only use 4 lanes, it seems like this is the best use case.
Ultra-high-speed external storage - There is only one PCIe 4.0 internal controller (Phison) at the moment (Samsung soon to join) and it isn't even very good. External storage generally lags behind internal storage so there is plenty of room to improve external storage within the bounds of PCIe 3.0.
Ultra-high-speed networking - PCIe 3.0 supports 10G ethernet as well as the latest wireless standards just fine. When I see 10G ethernet as common place, I'll reconsider the need here.
So it really just comes down to eGPUs. An alternate solution is to give them more lanes.
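For a rough sense of the bandwidth headroom being argued above, nominal rates compared in Python (the 10 Gb/s external-storage ceiling is my assumption for typical USB 3.2 Gen 2 enclosures, not from the comment):

```python
# PCIe 3.0 x4 vs the interfaces named above, in GB/s (nominal, ignoring protocol overhead).
pcie3_x4_gbs = 4 * 8 * 128 / 130 / 8  # 8 GT/s/lane, 128b/130b encoding -> ~3.94 GB/s
ten_gbe_gbs = 10 / 8                  # 10 Gb/s Ethernet -> 1.25 GB/s
ext_storage_gbs = 10 / 8              # assumed 10 Gb/s ceiling for common external enclosures

print(round(pcie3_x4_gbs, 2))         # 3.94
print(pcie3_x4_gbs > 3 * ten_gbe_gbs) # True: an x4 link has ~3x headroom over 10GbE
```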
Daeros - Wednesday, January 15, 2020 - link
Thunderbolt 4 is Intel only (TB3 is now open), but it also isn't any faster than 3...
https://www.anandtech.com/tag/thunderbolt-4
Peskarik - Tuesday, January 7, 2020 - link
Are you unable to read basic English or something? Perhaps go back to elementary school lol; it seems you need it more than you might think.
oynaz - Wednesday, January 8, 2020 - link
It is in this very article. One of the first paragraphs.
zamroni - Sunday, January 12, 2020 - link
hence, amd needs to add more pci lanes so manufacturers can put thunderbolt ports
Korguz - Sunday, January 12, 2020 - link
try intel... arent they still below amd on total pcie lanes ???
SolarBear28 - Monday, January 6, 2020 - link
Exactly, I don't have any reason to doubt Renoir's efficiency improvements over Picasso. But Picasso was miles behind Intel on mobile efficiency. Hopefully the aforementioned power gating and decoupling of the Infinity Fabric from memory frequency will do the trick. If Renoir laptops get within 5% of Intel on battery life that would be a huge positive step for AMD.
Cooe - Monday, January 6, 2020 - link
That was entirely because of the 12nm I/O die... Which APUs don't have. Try again?
milli - Tuesday, January 7, 2020 - link
Cooe you really need to tone down the aggression. Starting from your first comment you talk down to everybody like you're some kind of genius while we are kids. The reality will probably be the opposite.

That said, AMD was like 50% behind because of the aforementioned inefficient idle power. Even the cherry-picked 3780U in the MS Surface was like 40% behind. If AMD had closed this huge gap, they would be bragging about it. Even reducing this to a 10-20% loss would be a huge accomplishment.
https://images.anandtech.com/graphs/graph15213/113...
https://images.anandtech.com/graphs/graph15213/113...
lejeczek - Tuesday, January 7, 2020 - link
was like.. is like... like like... like .. nothing really is.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
The bus determines the idle power. This is entirely different and comparing it as if relevant is...wrong.
psychobriggsy - Tuesday, January 7, 2020 - link
These chips will have all the laptop I/O they need integrated; there will be no separate chipset.

Most likely it'll be a PCIe x8/x16 for the discrete GPU, a PCIe x4 link for the SSD, a SATA for an additional HDD, and USB for everything else.
Perhaps x8 for mobile, x16 for desktop (which may use a separate chipset for more I/O).
APU likely has two PCIe x4 links, two enabled for desktop, one enabled for mobile. These appear to be PCIe 3 in the mobile SKUs, I hope the desktop APUs will bump these to PCIe 4.
kaidenshi - Tuesday, January 7, 2020 - link
"the chipset alone wants 11-15W"

That's completely untrue. I have a mini desktop system (ASRock A300) with a 2400G that is stuffed full of two spinning 2.5" HDDs, two M.2 drives, and 16GB of RAM. Looking over at my Kill-A-Watt as I type this, it's sitting at 12.4W total power draw. If what you said was true, I'd be seeing at least 20-25W idle.
More importantly, it outperforms my last gaming system in the same games at the same resolution (1080p) at a max of 70W total power draw for the entire system, versus the 400W+ power draw of my last system with a Core i7 and GTX 1050 Ti. If that isn't power efficiency I don't know what is.
neblogai - Tuesday, January 7, 2020 - link
The A300 does not use a chipset; it runs off the APU, like laptop designs.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
That's semantics. The chipset is on the die, but the electricals are on the mobo = chipset layout.
MASSAMKULABOX - Thursday, January 9, 2020 - link
Whilst your comment is informative, I would dispute your 400W figure: 100W for the CPU, 300W for the 1050 Ti? No sir, no! About 150W max for that 1050 Ti at full chat. I want to know when these mobile parts will be transformed into desktop or mini-PC. Altho saving the best gfx for the hi-end parts is a bit of segmentation; they could afford to put MORE gfx on the 6c parts, not less. Bad AMD, Bad.
soresu - Tuesday, January 7, 2020 - link
The APUs have integrated the chipset for several generations now, keep up!

At the very least, unlike with Matisse and Threadripper, the chipset functions are all on 7nm instead of 12nm, whatever difference that makes.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
"They don't have a power efficiency advantage on desktop for idle power" - True, but who cares.

It's not determinant and it's not completely out of school. Nobody buys a highest-end laptop for the fricking battery life lol. It's a consideration down like second to last.
I'd be much more concerned about my brand new flagship laptop CPU speculatively leaking Ring-0 access before I even plug it in. Intel chips are right now flawed until they prove otherwise.
Orange_Swan - Tuesday, January 7, 2020 - link
i do, my next build will be a mini-ITX build
wilsonkf - Wednesday, January 8, 2020 - link
You have been able to equip a mobile APU without an FCH in a laptop for many years now.
Alexvrb - Wednesday, January 8, 2020 - link
As someone else pointed out, these won't have power-hungry PCIe 4.0 chipsets nor separate I/O dies. Everything is very likely integrated and ultra low power. They're a mobile-first monolithic design, and thus their design doesn't have all that much in common with their desktop brethren (aside from core architecture).

Heck, even the big X570 chipsets don't IDLE at 11-15W. OEMs can configure them however they want (hence the range of TDPs), but 11-15W would be with the chipset I/O pretty well pegged.
With that being said I wouldn't swear they've caught up to Intel on idle power, and a lot of it is up to the OEM boards and firmware. But if they can get even close, and they beat them on performance, they'll make great machines. Especially interested to see how many gaming laptop wins they can get with those 45W octacore models!
generalako - Tuesday, January 7, 2020 - link
You mean like the single-core, where they're equal (at best), but AMD is still promoting it as a "4%" win? If they had made great strides in efficiency, clearly they would have shown it, no? It's literally the biggest issue with their current APUs. Not saying it hasn't improved, but it still needs to improve by around 40% to catch up to Ice Lake, and with a doubling of cores and an even higher turbo frequency, it's difficult to see how much more room they have left.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
AMD actually does beat Intel in single core now; you are smoking crack to say otherwise.

4% is 4%. Intel fangirls wanted to play the metric game when it suited them, now you run from it?
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Are you even serious? They can't test power efficiency until the entire machine is built. Testing chips outside of their platform, chipset, and default configuration proves nothing.
Unless Intel is substantially, amazingly better than before (and they aren't), the difference between the two in power consumption isn't going to be bigger than it was before, and it's probably smaller.
But if you're really trying to pretend someone interested in buying the highest performing laptop platform is going to quibble over 10-20% more power consumption for 20-100% more performance, your speculative execution is scheduled for the moment you open your eyes.
rahvin - Monday, January 6, 2020 - link
You can't list battery life when you have no battery specs, especially in a CPU article. Battery specs will go with the individual laptop designs, along with size and weight. It'll be interesting to see the individual wins and how they compare with Intel laptops. The number of design wins suggests AMD is winning mind-share with the OEMs. Given the cheaper price, AMD might make some good progress in laptop sales this year.
Spunjji - Tuesday, January 7, 2020 - link
Agreed. I'm eager to see what gains they've made on idle power draw this time around, as the Ryzen 2000/3000 mobile series were quite a disappointment in that regard.
zmatt - Tuesday, January 7, 2020 - link
I have yet to get anywhere close to the advertised battery life on any modern Intel laptop anyways. I think it's all marketing BS. They are certainly longer than they used to be, but the only way I get a full work day out of an Intel laptop is if it spends most of the day asleep.
rocketbuddha - Wednesday, January 8, 2020 - link
With respect to AMD clients in the mobile world, I always say "With friends like these, who needs enemies". The most recent example is Microsoft with its Surface AMD edition, which is more expensive, slower, and has less battery life than the Intel edition, as multiple websites declared on the day Anandtech did the comparison. So let us see if Asus forces the OEMs to give AMD what it deserves.
StevoLincolnite - Monday, January 6, 2020 - link
No benchmarks out yet. Take AMD's and Intel's "performance" slides with a grain of salt.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
We take Intel fangirls with a grain of Ring-0 speculative access. Blow along now, tumbleweed. Obviously you don't get graphs until the machine is built, derp. You may live in fantasy until then!
shabby - Monday, January 6, 2020 - link
I can read past cherry picked marketing graphs, thank you very much.
Cooe - Monday, January 6, 2020 - link
You are acting like AMD's pre-release performance graphs haven't been totally accurate (or close enough to it) since the OG Ryzen launch nearly 3 years ago... This isn't "hide a 5HP chiller under the table" Intel or "everything we say is mostly a lie" Bulldozer-era AMD anymore. This is Lisa Su's AMD in 2020, but it seems your tin foil hat must be just too precious for you to take off. Now I'm not saying you should put 100% faith in every bench result to be 100% accurate under every usage scenario, but AMD's current track record says ignoring the benches entirely & believing them likely to be fraudulent just makes you look like a damn fool.
shabby - Monday, January 6, 2020 - link
Did you read this? https://www.anandtech.com/show/15213/the-microsoft... This will give you a good idea just how far ahead Ice Lake is compared to Zen+. Honestly i don't believe Zen 2 mobile will be able to catch up to it... sure, maybe in Cinebench, but who uses that on a 15w laptop.
Btw i have an r5 3600 so i'm no one's fanboy, just a skeptic of pre-release cherry picked benchmarks.
Cooe - Monday, January 6, 2020 - link
Yes, but you're COMPLETELY ignoring the fact that Zen+ to Zen 2 is a >=20% improvement to single-thread performance (+15% IPC & +5% clocks), with an even more massive power saving. So literally the only way for anything you are saying to be even remotely true is if they straight up aren't actually using 7nm Zen 2 as the CPU arch (which is a known quantity) & thus AMD's provided benchmarks are off by orders of magnitude... AMD used Intel's BEST Ice Lake chip as the comparison for a reason, & as I said above, AMD's pre-release performance graphs have been DEAD ON since the OG Ryzen announcement in late 2016.
So either AMD is blatantly lying about using Zen 2, or they are blatantly lying about their shown performance numbers by a huge margin. Which is it?
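As an aside, the compounding in the ">=20%" claim a few comments up is easy to check. A minimal sketch, where the percentages are the comment's own claimed figures rather than benchmarked numbers:

```python
# Compounding the claimed Zen+ -> Zen 2 gains: +15% IPC and +5% clocks.
# These percentages come from the comment above, not from measurements.
ipc_gain = 1.15
clock_gain = 1.05

single_thread_uplift = ipc_gain * clock_gain
print(single_thread_uplift)  # ≈ 1.21, i.e. a >=20% single-thread improvement
```

Gains in IPC and clock multiply rather than add, which is why +15% and +5% land slightly above a flat 20%.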
shabby - Monday, January 6, 2020 - link
They're cherry picking benchmarks, why can't you understand that? What percentage of people use Cinebench? 0.1%
Cooe - Monday, January 6, 2020 - link
What benchmarks do you need to get over yourself? CPU-Z? Geekbench? Because the results would remain very consistent in those as well. And what's your issue with Cinebench, other than that it ever so slightly favors AMD chips (as in, by a handful of %)? As seen with the desktop CPUs (Ryzen 3000 vs Coffee Lake Refresh), exclusively single-thread performance will only vary by about 5% at most depending on the particular application/benchmark used to test it. Thus, even by "cherry picking" you'll only be able to change the results by a tiny fraction in either direction. Also, way to avoid having to back up your statement that doubling the core count / multi-threaded performance in the same power/thermal envelope is pointless in a world with 8c/16t consoles dropping in months! You are a delusional Intel fanboy of the worst kind.
shabby - Monday, January 6, 2020 - link
Your going to pop vain soon, can't wait...
Cooe - Monday, January 6, 2020 - link
Maybe out of happiness lol. You being a dumbass doesn't make these APUs themselves any less awesome. Rather, it just makes you, well, look like a dumbass...
Lord of the Bored - Tuesday, January 7, 2020 - link
"Pop a vein", not "pop vain"Jugotta Bichokink - Tuesday, January 7, 2020 - link
Spoken like a Pentium troll.
Sub31 - Friday, January 10, 2020 - link
Nah, more like a 486 troll.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Shabby, you're a fool. Intel picks random benchmarks that help their case EXCLUSIVELY. When they were winning on single core they had all kinds of random benchmark citations to try to prove that was more important "overall" than having a fast bus - well, now they have NEITHER.
You can go hide in the sand all you want.
generalako - Tuesday, January 7, 2020 - link
Except Ice Lake is an 18% IPC gain over Coffee Lake, making it around 10% over Zen 2 in IPC as well. Meaning that in order for the 4800U to even match the i7-1065G7 in single-core performance, it has to be clocked at 4.3 GHz in boost. It's instead around 4.2, meaning AMD is in fact a few percent behind, not ahead of, Ice Lake in single-core. In any case, we can assume it's parity. As for the rest, the most important metric is efficiency, and AMD did not show us anything there, which is a bit worrying. Ice Lake was ahead of Zen+ by a factor of 40%. AMD only stated "up to 20% better efficiency" at the same frequency. That's simply not enough.
I'm also interested in sustained performance. At the end of the day, the benchmarks that Andrei and others provide are completely useless. For example, in his benchmark of the Surface Laptop 3, Ice Lake was reaching desktop Coffee Lake levels of performance. That's just nonsense, and I'm surprised he makes no reservations about it. I use an Ice Lake laptop regularly, and no way in hell does it even come close to performing as well as desktop CPUs for even everyday tasks. Even something as simple as scrolling in Spotify is more janky on laptops than on desktop CPUs (which do run at base clocks in such tasks too, mind you). SPEC does not properly represent this reality, and it's also why I'm disappointed that 4000U chips have regressed in base clock. Base clock improvements are where the real performance is at.
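The single-core parity arithmetic in the comment above can be sketched as follows. All the inputs are the thread's assumptions (a 3.9 GHz turbo for the i7-1065G7 and a ~10% Ice Lake IPC edge over Zen 2), not benchmark results:

```python
# Sketch of the boost-clock-for-parity math discussed above.
# Inputs are the comment thread's assumptions, not measured figures.
intel_boost_ghz = 3.9   # assumed i7-1065G7 max single-core turbo
ipc_ratio = 1.10        # claimed Ice Lake IPC advantage over Zen 2

required_amd_boost = intel_boost_ghz * ipc_ratio
amd_boost_ghz = 4.2     # Ryzen 7 4800U rated max boost

print(round(required_amd_boost, 2))          # ≈ 4.29 GHz needed for parity
print(amd_boost_ghz >= required_amd_boost)   # False: a couple percent short
```

Under those assumptions the 4800U lands roughly 2% shy of parity, which is consistent with the "a few percent behind, call it parity" reading above.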
SolarBear28 - Tuesday, January 7, 2020 - link
@generalako AMD stated up to 20% overall lower SoC power at the same frequency (while doubling the core count). That's much different than 20% better efficiency. Performance per watt has doubled (in multi-threaded loads). I doubt the gains will be as large in single-threaded loads. And I agree, AMD would certainly have shown a direct comparison to Ice Lake if they had somehow surpassed it in absolute power consumption. But AMD could be very close.
Zizo007 - Wednesday, January 8, 2020 - link
If AMD can reach their 4.2GHz boost then they will beat the new Intel mobile processors. Users on Tom's Hardware are having issues with Intel's high end mobile CPUs not reaching their turbo speed even under good temps. They are reaching a max of 4GHz whereas the boost speed is 4.5GHz.
HStewart - Wednesday, January 8, 2020 - link
Even though AMD used Intel's best Ice Lakes - which are, by the way, only 4 core, while these new AMD chips have 8 cores - one has to believe Intel has 6 and 8 core mobile CPUs with the new architexture. It might be called Tiger Lake - but it would be foolish to think Intel is just sitting on their butts.
Korguz - Wednesday, January 8, 2020 - link
hstewart come on man.. will you EVER learn how to spell architecture correctly ???? you claim to know so much about computers.. but you STILL cant spell that?? unless you can prove it... intel may not have more than 4 cores with sunny cove, tiger lake, or any of the new late 2019 or later lakes it has out right now.. and only you would believe otherwise
Jugotta Bichokink - Tuesday, January 7, 2020 - link
DID YOU EVEN READ YOUR OWN COMPARISON, GENIUS?
CPU: AMD Ryzen 7 3780U (4C/8T, 2.3-4.0GHz, 15W) vs Intel Core i7-1065G7
Memory: 16 GB Dual-Channel DDR4-2400 (AMD) vs 16 GB Dual-Channel LPDDR4X-3733 (Intel)
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Memory: 16 GB Dual-Channel DDR4-2400 - AMD tested on 2400 RAM. 16 GB Dual-Channel LPDDR4X-3733 - Intel has 3733 in this comparison, derp!
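For context on why those memory configs matter: peak theoretical bandwidth is just transfer rate times total bus width. A rough sketch, where the 128-bit total widths are assumptions about typical laptop configurations (2x 64-bit DDR4 channels vs 4x 32-bit LPDDR4X channels):

```python
# Peak theoretical bandwidth = MT/s x total bus width in bytes.
# The 128-bit totals below are assumed typical laptop configurations.
def peak_bandwidth_gbs(mt_per_s: int, total_bus_bits: int) -> float:
    return mt_per_s * (total_bus_bits // 8) / 1000  # GB/s

ddr4_2400 = peak_bandwidth_gbs(2400, 128)     # the AMD test config above
lpddr4x_3733 = peak_bandwidth_gbs(3733, 128)  # the Intel test config above

print(ddr4_2400)     # 38.4 GB/s
print(lpddr4x_3733)  # ≈ 59.7 GB/s
```

So the Intel machine in that comparison had roughly 55% more peak memory bandwidth, which matters a lot for an iGPU.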
Zizo007 - Thursday, January 9, 2020 - link
Intel doesn't benefit from high frequency RAM. Ryzen benefits from higher frequency RAM up to 3800. Ryzen will for sure support RAM speeds faster than 2400.
Zizo007 - Thursday, January 9, 2020 - link
Ryzen 4000 mobile will for sure support higher RAM frequency than previous gen.
Sub31 - Friday, January 10, 2020 - link
In fact, Ryzen 4000 mobile supports LPDDR4X-4266 (which has a half-width bus per channel compared to DDR4)
Xajel - Monday, January 6, 2020 - link
While I would love to see this be true, I also know that official statements are selective, picky & sometimes only give you the bright side - in other words, tricky. So I would wait for the official release and independent reviews and benchmarks.
Although things seem pretty interesting and very good indeed.
bytebuster - Tuesday, January 7, 2020 - link
Just... calm down. Seriously.
generalako - Tuesday, January 7, 2020 - link
It's funny, because Ice Lake beat Zen+ APUs by 40% in single-core, 50% in multi-core, and 40% in power efficiency, and yet people called AMD's alternatives "competitive". There's clearly a different standard for what each party gets called. Let's also not forget that we still know very little about the upcoming Zen 2's efficiency. And also that Tiger Lake is coming this summer, which will for sure take back the single-core throne and possibly also iGPU (Intel is promising a 2x jump here over Ice Lake's Sunny Cove).
Nicon0s - Tuesday, January 7, 2020 - link
40% single core advantage? LoL
More like 18% at best.
generalako - Wednesday, January 8, 2020 - link
Wrong. Go check the Anandtech Ice Lake vs Picasso Surface Laptop 3 review. The single-core performance was 40% above the Zen+ APU.
Nicon0s - Tuesday, January 7, 2020 - link
Multi-core was like 11%. These new AMD APUs have 2X the cores, so AMD is now winning in multi-core by a lot.
Spunjji - Tuesday, January 7, 2020 - link
You really are throwing everything including the kitchen sink at generating FUD here. The only one of your numbers that represents reality is the 40% better power efficiency - the rest are clearly not comparing apples to apples, as the average ST/MT performance difference between Ice Lake and Zen+ is closer to 10-15%. That's what people mean when they say "competitive" (especially considering cost). Tiger Lake is also most likely not coming "this summer". It's H2 2020 in Intel speak, and given Ice Lake was ~3 months later than they said it would be, I'm not prepared to assume that's anywhere near "summer".
generalako - Wednesday, January 8, 2020 - link
No I'm not. Go check the Anandtech Ice Lake vs Picasso Surface Laptop 3 review. The single-core performance was 40% above it.
Spunjji - Wednesday, January 8, 2020 - link
Yes you are. That article is exactly where I went to check the data *before I wrote my first reply*. The only specific ST test quoted has a 20% advantage to Intel, and it's in Cinebench, which is apparently Not A Real Benchmark now that AMD are predicting wins in it. So that's your one factual claim cut in half, and still nothing to back up your speculation. The biggest wins for Intel in the article you're mentioning are in Handbrake (MT, not ST) and 7zip (unsure). 7zip is strongly affected by memory bandwidth, which is an advantage that Renoir negates by supporting LPDDR4X. The other big wins for Intel were in the web browsing tests - Renoir's changes to boost behaviour should help there, too, though we don't yet know by how much. What's absolutely certain is that none of those performance gaps will change in Intel's favour.
It's impossible to gaslight people when they have access to the same information you do - you just come out of it looking like a troll.
yeeeeman - Sunday, January 12, 2020 - link
Dude, you really don't know how to read - that is the best answer to you. The article says this right after the SPEC benchmark: "with the Intel variant of the Surface Laptop 3 being ahead by 37% in the integer suite, and 46% in the floating-point suite." What is so hard to believe? Zen 2 is something like 15-20% better than Zen+. Ice Lake is another easy 10% over Zen 2 in IPC. Add that up and you will get the figure the dude you are arguing with said.
Korguz - Sunday, January 12, 2020 - link
but are the frequencies the same ?? seems.. intel cant reach higher clocks on their 10nm process.. is that still true ?
Gondalf - Tuesday, January 7, 2020 - link
Depends on whether they ran the bench suite at 25W or 15W. With the last Ryzen mobile they were smart enough to test at 25W to show strange things. Anyway, my bet is that with all cores active they will destroy the battery with huge power consumption, utilizing the Tskin methodology to cool the SKU.
It is not all gold under the sun.
The idle power will be huge and the SOC availability small. No volume no party.
Spunjji - Tuesday, January 7, 2020 - link
Gondalf reliably providing a useful guide on what not to believe, as ever.
Gondalf - Tuesday, January 7, 2020 - link
You will have bad surprises when someone tests these 8 core SKUs in a real laptop constrained to 15W. Knowing they were unable to beat Intel at the same number of cores, AMD chose another sad street. They claimed great things with the present Ryzen Mobile, but real world power measurements tell the sad truth. Basically they didn't even try to ship a good low idle power four core eight thread part, the most wanted in tiny devices.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Gondalf, your speculations are as useless as Intel's own.
SolarBear28 - Tuesday, January 7, 2020 - link
@Gondalf Did you even read the article? Running Cinebench, Renoir is twice as efficient as Picasso. Why do you anticipate heat issues at full load? Even if we grant that AMD have cherry picked their single threaded benchmark to show a 4% lead, they should at least be equal or very close to equal with Ice Lake in single threaded performance.
Intel might still use less power on idle and with low loads but I suspect that moving to 7nm (in combination with AMD's other improvements) will have significantly reduced that gap.
You think it's sad that AMD have doubled the core count within the same TDP? I think it's sad that Intel finally released Sunny Cove with significant IPC improvements but don't have the capability to make high core count or high frequency chips, and probably won't for a while.
Spunjji - Wednesday, January 8, 2020 - link
@Solarbear28 - the funniest bit is how hard Gondalf's projecting. He's predicting that AMD won't have enough Renoir chips to go around, which is precisely the situation Intel are in with their 10nm CPUs. As you also noted, the only reason they haven't pushed the core count further... is that they can't.
Spunjji - Wednesday, January 8, 2020 - link
Since when was 4 cores 8 threads "the most wanted" in mobile? Intel didn't even bother offering one until after AMD announced the Ryzen 2000 mobile chips. Why would I limit myself to that when I could have 6/12 for less money? I'm not going to have any "bad surprises" - I'm not predicting the second coming, here. You're the one pulling out every possible reason you can come up with for this unreleased product to be bad.
sarafino - Wednesday, January 8, 2020 - link
Precisely. I have never seen extra cores and threads, in the same TDP envelope, for the same amount of money, as some kind of unpleasant surprise. I guess it's only seen as a negative if it's the competitor to your favorite brand that's doing it.
The_Assimilator - Tuesday, January 7, 2020 - link
Company releases graphs showing company's own products outperform their competitor's' idiots believe these graphs.In other news at 11, water remains wet.
The_Assimilator - Tuesday, January 7, 2020 - link
*competitor's,Spunjji - Tuesday, January 7, 2020 - link
Heaven forfend that the truth lie somewhere between extremes, and that not everybody who believes these graphs to be a reasonable indication of what to expect is also an idiot.
Korguz - Tuesday, January 7, 2020 - link
The_Assimilator you still believe what intel says about its own products... don't you ??
Korguz - Wednesday, January 8, 2020 - link
yea.. thought so...
hanselltc - Wednesday, January 8, 2020 - link
in cinebench?
lefty2 - Monday, January 6, 2020 - link
You won't see many Ice Lake laptops in 2020. Poor yields mean it'll be mostly Comet Lake and Rocket Lake
msroadkill612 - Tuesday, January 7, 2020 - link
Yep. Another SKU they make a big PR fuss about but you ~can't actually buy. The salesmen will do all they can to steer you to a 14nm laptop - bet on it.
s.yu - Wednesday, January 8, 2020 - link
Hmmm, Ice Lake lacks vPro, resulting in many flagship business models sticking to Comet Lake; this might be the reason.
koopahermit - Monday, January 6, 2020 - link
How? If by competitive you mean the 4c/8t i7-1065G7 barely competes with the 6c/6t Ryzen 5 4500U, then sure. Not to mention the worse yields and availability that come with Intel's 10nm.
Until AMD's iGPUs have something like Intel's GVT-g, AMD is unfortunately dead to me.
Korguz - Monday, January 13, 2020 - link
why???
Aljon Pobar - Friday, January 10, 2020 - link
So happy for AMD's comeback, i remember my old tri-core setup that was BIOS unlockable to four cores.
lmcd - Monday, January 6, 2020 - link
Frustrating to see the 45W parts aren't shipping without GPUs. I'd love to see a model such as the Dell XPS 15 with a 45W CPU part and no dGPU. Navi video decoder is comparable to a dGPU anyway and the higher TDP CPU is useful for development.nandnandnand - Monday, January 6, 2020 - link
That is a good point. They slashed the graphics CUs, why not slash all the way down to zero? I'm more interested in the 15W TDP chips anyway, especially the 4800U.
neblogai - Monday, January 6, 2020 - link
He is asking for laptops that use the very decent AMD iGPU instead of a dGPU. And I completely agree - I also want an ultraportable with a 3550H at 45W, but no such model was released for RR/Picasso (the just-launched Lenovo model has the H-series APU, but they configured it down to a cTDP of ~25W). We'll see if anything ultraportable at 45W comes out with these new Renoir series.
StevoLincolnite - Monday, January 6, 2020 - link
I have a 2700U notebook; it was cheap, and the most demanding game I wanted to run on it was Overwatch, which it does okay at 720p, but I think it's bandwidth starved. I would upgrade to a 4700U in a heartbeat if it offers a big uptick in performance; the 3700U was a side grade not worth bothering with.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
You bought a "cheap" mid-tier laptop "for gaming" and you're.... complaining. Gee.Throw him on the pile.
SolarBear28 - Tuesday, January 7, 2020 - link
Doesn't sound like complaining, just sounds like he's sharing his experiences. LPDDR4X should solve the bandwidth issue on Renoir. AMD is claiming 54 fps at 1080p, low settings, in Overwatch with the 4800U. But if you really want a major upgrade, obviously go with a dedicated GPU.
Spunjji - Wednesday, January 8, 2020 - link
Does your notebook have dual-channel RAM? It should manage Overwatch pretty well with it, but without it, it would definitely struggle. If it has a free slot, throwing in another stick of RAM is your best bet for a relatively inexpensive and very effective upgrade. Cooling is the other problem that tended to affect earlier 2700U devices. Not much to be done for that besides a paste swap.
Failing all that, hopefully the 4700U should be a good future option!
Santoval - Monday, January 6, 2020 - link
Apparently you didn't read the full article. Navi iGPUs will have to wait for the 5000 series of AMD APUs. These APUs all have Vega iGPUs. Higher clocked, fabbed at 7nm (and thus more power efficient), but still Vega.
brakdoo - Monday, January 6, 2020 - link
We don't know what the video decoder is. He didn't mean the whole GPU.
neblogai - Monday, January 6, 2020 - link
This is Vega, but with an even more advanced VDC than Navi chips.
nandnandnand - Monday, January 6, 2020 - link
These new Vega iGPUs are supposedly getting the same VCN 2.0 video engine that is in the Navi GPUs. The RX 5500M (Navi mobile dGPU) has the video engine according to this page: https://www.amd.com/en/products/graphics/amd-radeo...
So if you removed the iGPU from a laptop chip entirely and just have the discrete GPU handle everything graphics and video related, what's the harm?
brakdoo - Monday, January 6, 2020 - link
Power consumption is the problem. You can't get those 7-10h of battery life with the discrete GPU on... That's the reason the iGPU is never disabled in laptops (Intel and AMD), aside from a few desktop replacement devices.
neblogai - Monday, January 6, 2020 - link
Poor battery life- discrete GPUs idle at ~5W-7W.nandnandnand - Monday, January 6, 2020 - link
Who cares about battery life in laptops with 45W chips? If you want good battery life, you get the 15W chip.
Retycint - Monday, January 6, 2020 - link
People that want their laptop to be powerful if needed, and long-lasting if needed? My XPS 15 has an i7-7700HQ and a GTX 1050 and still gets a solid 7-10 hours of battery life for web browsing, document processing, etc.
Butterfish - Monday, January 6, 2020 - link
Many 15" "premium" laptops (e.g. MacBook Pro, Dell XPS 15...) use 45W chips yet still very much care about battery life. Having an iGPU versus only a dGPU is the difference between 6~9 hours and 2 hours of battery life.
shing3232 - Monday, January 6, 2020 - link
I do care, and i got decent battery life with my 9750H + 2070 Max-Q + 100Wh battery.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Exactly. "I want the top clocks and performance, but using less power than anyone else!"ksec - Monday, January 6, 2020 - link
I actually think an iGPU that is Vega or a GPGPU-focused uArch is better than a gaming-focused one like Navi/RDNA. That GPGPU capability could be put to good use once software catches up. So I do hope the Ryzen 5000 series is Arcturus GPU based.
msroadkill612 - Tuesday, January 7, 2020 - link
I am inexpert, but maybe we all are? The APU is monolithic; they can't mix and match. It needs to cover a range of needs in one form, & i suspect Lisa has her eye on apps for the APU that place great value on compute - a strong point of Vega. Embedded processors for AI on the edge come to mind - smart cars?
Cooe - Monday, January 6, 2020 - link
What are you talking about? AMD does ship the 45W parts without dGPUs. Look at the ASUS machine; it's a Ryzen 4000 APU + an Nvidia RTX 2060 (aka, AMD only sold them the APU by itself). It's entirely down to the OEMs whether or not to use an H or U series CPU, as well as whether to pair it with a dGPU, but with the H series flagship having worse iGPU performance than the U series model (one fewer CU), and a 45W CPU inherently requiring a cooling system redesign to implement, not adding a dGPU to H-using models in these very early days doesn't make much sense. As the 14" ASUS shows, Ryzen 4000 H devices can already be small enough that you really wouldn't save much more space or weight by axing the RTX 2060 but staying with an H series CPU, with the only real reasons to do so being price & battery life related (the latter of which can be dealt with by simply force disabling the dGPU when not explicitly being used). IMO, to make a truly noticeable difference in real world use you'd need to axe both the dGPU AND switch to a U series part, such that you could dramatically scale back the cooling system & required VRM board space, as well as the battery if further size reduction is desired.
neblogai - Monday, January 6, 2020 - link
There are plenty of U-series laptops, but no H-series ultraportables at 13", ~1kg (not ultrathins, or gaming laptops).
Hul8 - Monday, January 6, 2020 - link
I believe that comment was about there not being any designs with the powerful H-series APU without any kind of dGPU (AMD or Nvidia) - not about AMD bundling APU+dGPU together.
Thunder 57 - Monday, January 6, 2020 - link
Agreed. Integrated Vega with LPDDR4X should provide plenty of performance.
Myxt - Tuesday, January 7, 2020 - link
Keeping iGPUs in laptops is important imo. You don't want the dGPU to constantly suck up juice while browsing. Having an iGPU saves so much power.
Spunjji - Tuesday, January 7, 2020 - link
What'd be the benefit? As it is, you get to use the iGPU for low power scenarios, then switch to the dGPU for heavy lifting. When the dGPU is enabled the iGPU is barely doing anything (passing information from the dGPU to your display outputs), so at most you'll be losing out on a couple of watts from it being there. I don't think that's enough to make a difference to clock speeds when you're already at a 45W TDP.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Why? Just turn it off if you don't want to use it. It costs nothing to have it there.
msroadkill612 - Tuesday, January 7, 2020 - link
Good point. Maybe cooling the APU with both a pumped CPU and the iGPU in use is too difficult?
taz-nz - Monday, January 6, 2020 - link
The one place Intel could feel safe was in laptops; not anymore.
Hyper72 - Wednesday, January 8, 2020 - link
Indeed, I had to buy a laptop two months ago because I couldn't wait, and got an i7-9750H. It's safe to say I would've gone AMD if I could have waited.
vFunct - Monday, January 6, 2020 - link
Do these have AVX-512 like Ice Lake does?
nandnandnand - Monday, January 6, 2020 - link
Do you have any software that uses AVX-512?
koopahermit - Monday, January 6, 2020 - link
Desktop Zen 2 doesn't have AVX-512, so the mobile variant isn't going to have it either. And it won't matter at all, since Intel's best Ice Lake CPUs only go to 4c/8t.
brakdoo - Monday, January 6, 2020 - link
super-wide vector operations are best done on GPUs or other dedicated accelerator units, as "real programs" with conditional jumps make no sense whatsoever for vector operations....
shing3232 - Monday, January 6, 2020 - link
to put AVX-512 on a CPU without high speed RAM like HBM or GDDR6 is pretty stupid to begin with, not to mention the power limit.
Cooe - Monday, January 6, 2020 - link
No, but this is near completely irrelevant to what you'll realistically be doing on a laptop (... any laptop). AVX-512 negatively hits clock speeds & power efficiency so hard on Ice Lake that it was of pretty debatable usefulness on a mobile platform anyway, even if the instruction set actually had a decent amount of software support (which it most definitely doesn't). Having full speed (1x operation per clock) AVX2 (256-bit) with Zen 2 is the vastly bigger & more important deal here.
mdriftmeyer - Monday, January 6, 2020 - link
Zen 3Jugotta Bichokink - Tuesday, January 7, 2020 - link
Intel invents new instruction sets specifically for the purpose of saying "no one has this". That doesn't mean it's useful to anyone.
wilsonkf - Wednesday, January 8, 2020 - link
Same power: 4 cores with AVX-512 but lower freq, or 8 cores?
vFunct - Monday, January 6, 2020 - link
Really need this in a MacBook Pro 16"
timecop1818 - Monday, January 6, 2020 - link
Haha, more AMD bullshit with retarded Cinebench on laptops. Almost no discernible ST performance gains (and likely just doctored benchmark results), power consumption/idle still horrible as per usual AMD history of doing things. I'm so buying a new i7-10G7 or whatever laptop instead of any of this overheating unstable AMD garbage.
AMD is absolutely excelling at one thing tho, making products nobody actually needs or wants. Nobody encodes video or runs cinebench on laptops. If they do, they're using dgpu and nvenc, and AMD video coding engine is trash, too.
RIP
Tamz_msc - Monday, January 6, 2020 - link
Begone troll.
LogitechFan - Monday, January 6, 2020 - link
Not secure anymore? Oh, maybe /Ayymd is your type then
Cooe - Monday, January 6, 2020 - link
Rofl, this might be one of the most "pot meet kettle" comments I've ever seen. The hypocrisy here is so thick I can straight up smell it.
Spunjji - Tuesday, January 7, 2020 - link
Pretty sure "LogitechFan" is a sockpuppet. The very idea of having a sock on the Anandtech comments is desperately sad, but then so are their posts, so...Jugotta Bichokink - Tuesday, January 7, 2020 - link
Intel is a company that relies on desperate fangirl socks, so yeah. Cinebench is just one metric. If it was the only metric AMD was kicking Intel's sad 5-year-old lunch in, that would be news.
But it's just one of them, and for an Intel fangirl to complain about metric cherry-picking?
HILARIOUS KARMA
Sub31 - Friday, January 10, 2020 - link
I don't know how or why we've suddenly all gone from lauding each other over perf increases to disowning performance increases (in the same power envelope) as pointless. Truly puzzling.
hecksagon - Monday, January 6, 2020 - link
Nobody who cares about video quality is using NVENC or Quick Sync. Software video encoding is the only way to go. In most cases these 8 core parts are approaching NVENC and Quick Sync in performance anyway.
hecksagon - Monday, January 6, 2020 - link
Oh, I also forgot to add that in the enthusiast market AMD is outselling Intel desktop CPUs almost 2:1. Really excelling at making something nobody wants.
Cooe - Monday, January 6, 2020 - link
Holy delusional fanboy, Batman!!! Zen 2 is dramatically more efficient in both power AND thermals than ANYTHING Intel has, with the difference compared to the latest 14nm+++ Skylake derivative nearly on the order of 2x (at least as far as power draw is concerned). 10nm Ice Lake gets things closer, but Zen 2 is still ahead to the point it can pack in 2x the core/thread count in its max config to boot. AMD's literally giving you nearly 2x the multi-threaded performance in the exact same 15W ULV power/thermal envelope as Ice Lake, & with notably better single-thread + significantly better iGPU perf thrown in for good measure. I seriously don't know what more you could have been realistically expecting to have ended up so damn disappointed.
timecop1818 - Monday, January 6, 2020 - link
> AMD's literally giving you nearly 2x the multi-threaded performance
Which I (and most people) don't need. 4 fast cores > 16 shitty cores. Also: "benchmarks that benefit from multiple cores perform better with more cores. Who would have thought?!". Oh, but how about real-life usage? Ah... Oops.
> with notably better single-thread +
Citation needed. 4% as per their graphs is literally measurement noise and/or some cherry-picked "benchmark" that favored AMD.
> significantly better iGPU perf
Yeah but then you have to install fucking radeon drivers, lol.
Cooe - Monday, January 6, 2020 - link
Holy shit... I'm going to guess you're an extremely uneducated "gamer". If so, here's a news flash buddy, next gen consoles have 8-cores/16-threads (Zen 2 ones in fact), meaning that's the MINIMUM you'll need to get max performance in next gen titles. So do you still wanna tell me that doubling multi-threaded performance in the same thermal/power envelope is still pointless?
And you are seriously someone who will refuse to install AMD Adrenalin, but then go & voluntarily install the near spyware that is GeForce Experience with a smile on your face? Rofl, oh the hypocrisy *facepalm*...
Maybe stop taking it from Intel in the butt & being a total freaking dumbass for just a second.... You might want to enjoy it.
LordanSS - Monday, January 6, 2020 - link
I literally manually install my nVidia drivers... won't touch that new GFExperience, not even with a 10-foot pole.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Same on AMD myself, drivers only, no bloaty phone-home stuff.
Whoever took over the v12 and made it do Windows 10 grade crap in the background... fired!
timecop1818 - Monday, January 6, 2020 - link
No, I don't game and I actually use my laptop for work. For which I require decent graphics (CAD) and (you guessed it) good ST performance. Neither of which AMD will deliver. Which is why my new laptop in 2020 is going to be 10th gen i7 or something newer if it shows up in Q1.
I couldn't give any less fucks about how many cores are in next gen consoles, game "developers" have been lazy fucks for years now, writing shitty code just because.
Tamz_msc - Monday, January 6, 2020 - link
Even the Zen+ based APUs have better 3D graphics performance than Ice Lake. You won't see a night and day difference doing CAD between Ice Lake and these new AMD chips.
And 4 percent higher ST performance isn't Cinebench. I'm surprised that you claim to be a professional and yet do not know what noise is.
SolarBear28 - Monday, January 6, 2020 - link
Except that AMD have matched Intel in single threaded performance at a lower cost. And thrown in more cores for better multitasking if you need it. So why rule out AMD already?
supdawgwtfd - Monday, January 6, 2020 - link
Why rule it out? Because he is a moron.
Only a moron would choose to not buy the best regardless of what sticker is on the product.
Who cares if it is Intel, AMD or VIA. The best is the best and that is what you should be buying.
Unfortunately most people don't seem to be able to use their brains most of the time and buy things based on how they feel.
palladium - Tuesday, January 7, 2020 - link
Then all the power to you. Sure, Ice Lake probably would have better ST performance than Renoir (the magnitude of which we wouldn't know until both are available en masse), but I'm curious what sort of CAD work you do that does not benefit from MT performance (but benefits from improved graphics).
And just so you know, it is very difficult to write good multithreaded code. Sounds like the "lazy game developers" you are talking about, who can write good MT code that runs well on current gen console hardware (8 Jaguar cores), are smarter than someone who chooses to run GPU-accelerated CAD software on non-Quadro/FirePro hardware.
Spunjji - Tuesday, January 7, 2020 - link
Way to throw game developers under the bus just because you're salty about Intel's 10nm offering only having 4 cores. xD
You clearly have no idea what you're talking about. Good effort, noble unpaid Intel shill.
msroadkill612 - Tuesday, January 7, 2020 - link
The sad part is he IS paid :) - the literate old hands are having bouts of conscience and getting more honest jobs.
Korguz - Tuesday, January 7, 2020 - link
timecop1818 you do realize amd has better ipc than intel now.. right ??? clock for clock.. core for core.. amd is better.. the ONLY reason intel wins any benchmarks.. is because of the higher clocks.. clock the 2 at the same clocks.. and intel loses...
Fataliity - Tuesday, January 7, 2020 - link
and avx raises their benchmark scores in AES
Jugotta Bichokink - Tuesday, January 7, 2020 - link
TIMECOP CANNOT READ. STOP MAKING FUN.
SolarBear28 - Tuesday, January 7, 2020 - link
@ Korguz On the desktop that is true. But Sunny Cove has taken back the IPC lead in mobile. According to AMD Renoir beats Ice Lake by 4% in a single threaded workload but it's 4.2GHz vs 3.9GHz. So Ice Lake has a small IPC advantage. However, by the time Sunny Cove comes to the desktop AMD should be able to retake the IPC lead with Zen 3.
Korguz - Tuesday, January 7, 2020 - link
i bet sunny cove only has the lead still because of the clock speed... what happens when you clock both cpus at the same clock speed ?? intel vs Zen 2 based cores....
SolarBear28 - Tuesday, January 7, 2020 - link
Ryzen 7 4800U has a single core turbo of 4.2 GHz versus 3.9 GHz for Ice Lake Core i7-1065G7. That's a 7.7% advantage for AMD. Yet AMD claims to have a single core performance advantage of only 4%, so Ice Lake has slightly higher IPC.
Zizo007 - Wednesday, January 8, 2020 - link
Actually, in the mobile segment, Intel has lower clocks than AMD, their high end is capped/throttled at 4Ghz while they are advertising 4.5Ghz, its all over the place in Tomshardware forums.
If AMD manages to boost to 4.2Ghz, they will beat Intel by a good margin in both ST and MT.
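For anyone who wants to check it, the clock-vs-IPC arithmetic SolarBear28 is doing above fits in a few lines (a sketch; the clocks and the 4% figure are the ones quoted in this thread from AMD's slides, not independently measured):

```python
# Rough IPC comparison from boost clocks and claimed relative performance.
amd_clock = 4.2      # GHz, Ryzen 7 4800U single-core boost
intel_clock = 3.9    # GHz, Core i7-1065G7 single-core boost
perf_ratio = 1.04    # AMD's claimed +4% single-thread advantage

clock_ratio = amd_clock / intel_clock   # ~1.077, i.e. +7.7% clock advantage
ipc_ratio = perf_ratio / clock_ratio    # below 1.0 means Ice Lake has higher IPC

print(f"AMD clock advantage: {clock_ratio - 1:+.1%}")         # +7.7%
print(f"Implied AMD IPC vs Ice Lake: {ipc_ratio - 1:+.1%}")   # -3.4%
```

So a 7.7% clock advantage yielding only a 4% performance lead implies Ice Lake is roughly 3-4% ahead per clock, exactly as argued above.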
Spunjji - Tuesday, January 7, 2020 - link
"4 fast cores > 16 shitty cores"Except you're not getting 16 shitty cores, you're getting 8 cores that are comparable in ST performance to the 4 cores Intel is offering. Derp.
"Citation needed"
You first, pal. AMD's graphs have been pretty solid lately; Intel's... not so much. You're giving out what *could* be solid reasons to ignore the AMD results, if true, but they could just as easily not be true. You have no more idea than the rest of us, but you made your mind up already. Cute!
"Yeah but then you have to install fucking radeon drivers"
LOL. Ever used Intel iGPU drivers before? The "AMD drivers suck" rhetoric is old and has no basis in truth, and using it in comparison with Intel is downright hilarious.
vladx - Wednesday, January 8, 2020 - link
"AMD drivers suck" holds true more than ever, just search the web for Radeon 5700/5700 XT drivers issues and you'll get hundreds of topics on it.Spunjji - Thursday, January 9, 2020 - link
If you search for driver issues, you'll find lots of people with driver issues. Who'd have thought?
AMD's drivers aren't significantly more or less buggy than Nvidia's - they tend to circle each other in terms of who currently has the most irritating bugs.
Intel's drivers are a whoooole other story, which is why it was hilarious that this troll was trying to make that argument.
Korguz - Friday, January 10, 2020 - link
vladx then write your own drivers..people keep saying amds drivers suck.. or nvidias suck.. but i have used both with no issues.. i bet if you did the same for nvidia.. you would find the same...
Spunjji - Friday, January 10, 2020 - link
vladx knows the game they're playing. If you Google "AMD driver issues", you'll get more results than for "Nvidia driver issues" by a factor of about 3:2. One could naively assume that means AMD's drivers are worse and be done.
Google an outright derogatory line like "AMD drivers suck", though, and you'll get ~3 million results - while "Nvidia drivers suck" only gets ~700,000. Very few people with actual problems will state them in that way, so this likely reflects the vocal fanboy contingent - it also looks like their market share turned inside-out.
"Motivated individuals" who exclusively buy Nvidia have been spamming about AMD drivers being terrible for so long that they've generated the illusion of it being true - dupes and shills reinforce it by repeating it ad nauseam.
Having used both vendors for decades, I'll happily state that both have issues and both are fine 99% of the time. Neither are anywhere near as bad as Intel, which was the whole damn point of this topic offshoot in the first place.
vladx - Friday, January 10, 2020 - link
I own both Nvidia and AMD GPUs so I can tell with utmost confidence that Nvidia drivers, unlike AMD drivers, are problem free at least for basic stuff like gaming and video decode and encode. I can't claim how they fare in other workloads, maybe there are indeed issues in other workloads.
vladx - Friday, January 10, 2020 - link
"vladx then write your own drivers.."Thanks but no thanks, I'd rather pay for Intel and Nvidia who can hire competent software engineers and a proper QA team.
Korguz - Friday, January 10, 2020 - link
then it must be just you vlad.. cause i have no issues either way.. and i have vid cards.. that are a tad old.. still in use from both in comps.. where i just need a vid card... why not ??? sounds to me like you think you would write better drivers... than amd or nvidia would...
vladx - Friday, January 10, 2020 - link
Like I said, neither Intel's iGPUs nor Nvidia cards gave me any issues, while the RX480 I have went from working to glitchy and back in certain games with each driver update. Like I said, QA is pretty much non-existent at AMD.
And now the newest and hottest Radeon 5700 XT is full of gfx issues in games, so indeed the infamous reputation of AMD's drivers is well deserved to this day.
Korguz - Saturday, January 11, 2020 - link
yea ok sure...
TheinsanegamerN - Wednesday, January 22, 2020 - link
You need some air bud? You seem...to be pausing....a lot....when trying....to talkNicon0s - Tuesday, January 7, 2020 - link
If the multi-core performance is nearly 2x how are those 16 threads shitty? You don't make sense.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
The drivers themselves are actually excellent. Install them without installing the crap software in v12.
(Unpack the installation files, cancel the installer, go to device manager - gpu, find driver, done)
Jugotta Bichokink - Tuesday, January 7, 2020 - link
"4 fast cores > 16 shitty cores. "We have an Intel moron who can't read apparently. Intel's cores no longer match AMD's.
You lose. Not only that, you lose all your data every time you speculate, lol.
Zizo007 - Wednesday, January 8, 2020 - link
So Intel doesn't cherry pick their benchmarks?? LMAO
Zizo007 - Wednesday, January 8, 2020 - link
And go read on Tomshardware how the high end Intel CPUs are not reaching their Boost clocks, they are capped at 4Ghz instead of 4.5Ghz. AMD is also much cheaper than Intel and has better ST AND waay better MT performance. Enjoy your overpriced slow 4 cores throttling Intel.
Sub31 - Friday, January 10, 2020 - link
Why are you making the arguments used against Bulldozer? It's quite well known that Zen 2 cores are very competitive against Skylake+++++++++++++, and on par with Ice Lake. Also, real life usage undoubtedly uses more and more cores - notably, Intel's stupid marketing "benchmarks" are all applications that don't even load CPU. Nobody is buying an expensive laptop to use Microsoft Edge - you can do that on a $300 potato. And based on what we know, 4% is very reasonable given desktop Zen 2 performance. Without a chipset, the IO die, and reduction of other things, Renoir is very likely able to achieve that.
And 4% was from Cinebench R20 1t. Love how Intel fanboys paraded their cinebench dong when Bulldozer was getting quashed in FP perf compared to Sandy - and now that we have AMD leading in FP performance suddenly it's the literal worst? Idk, it's a pretty good representation of compute performance of a CPU, and now that 4000 supports LPDDR4-4266 we can be assured that memory is pretty good too...
And what exactly is so bad about Adrenaline?
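That LPDDR4X-4266 point above does translate to decent bandwidth on paper. A sketch of the math (assuming a 128-bit total memory interface, which is typical for these APUs but is my assumption, not a figure from this thread):

```python
# Theoretical peak DRAM bandwidth = transfer rate per pin x bus width.
transfers_per_s = 4266e6   # LPDDR4X-4266: 4266 MT/s per pin
bus_width_bits = 128       # assumed 128-bit total interface width

peak_gbs = transfers_per_s * bus_width_bits / 8 / 1e9  # bits -> bytes -> GB/s
print(f"LPDDR4X-4266 @ {bus_width_bits}-bit: ~{peak_gbs:.1f} GB/s peak")  # ~68.3 GB/s
```

Under that assumption, that's roughly 68 GB/s of peak bandwidth feeding the CPU cores and iGPU, versus ~51.2 GB/s for dual-channel DDR4-3200.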
Korguz - Friday, January 10, 2020 - link
sub31.. yep..if intel does it its fine... if amd does it.. its a federal offence.. look at the power usage pre zen.. vs now....
mdriftmeyer - Monday, January 6, 2020 - link
Colleagues of mine at Intel all agree--they have nothing to compete against AMD. But you keep thinking Intel has a bright future.
msroadkill612 - Tuesday, January 7, 2020 - link
If they did, you would think they would have mentioned it at CES - nope - zip.
LogitechFan - Monday, January 6, 2020 - link
Exactly that. And the leaked benchmarks from weeks ago have been confirmed for better or for worse -- a 4c/8t ICL is about as good as 8c/8t AMD, but way more power efficient... BUT a whole bunch of AMD brainwashed fanatics will tell you otherwise, I'm sure.
Cooe - Monday, January 6, 2020 - link
Rofl, so you'll trust a single potentially leaked pre-release bench with absolutely no context & countless unchecked variables more than AMD's officially provided numbers? And you call ME a stupid fanboy??? O_____O
About the only way I can respond to that level of absurdity is to laugh my freaking ass off.
timecop1818 - Monday, January 6, 2020 - link
> AMD's officially provided numbers
yea those have NEVER been wrong
Spunjji - Tuesday, January 7, 2020 - link
Cite a time when they've been wrong since 2016.
Their pre-release estimates for Zen *under-represented* the performance gains from 'dozer, and they did the same again from Zen to Zen+. Their marketing slides from the releases were all borne out by independent benchmarks.
That's why Ryzen 3000 CPUs are selling like hotcakes on the desktop.
Xyler94 - Tuesday, January 7, 2020 - link
So... we should trust a random leak on the internet over AMD's official slides?
AMD uses these slides as both a means to sell to us, the consumer, and to their investors. They may be cherry picked, but they're also as truthful as they can be.
There's a laundry list of things Intel have done to downplay the Zen arch.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
TIMEFAP CANNOT READ, STOP TEASING HER
Spunjji - Thursday, January 9, 2020 - link
I don't get what's with referring to timecop as "her" or "fangirl" as a form of mockery.
Call them mendacious, a troll, time-waster, FUD-spreader, dipshit, whatever - but there's nothing inherently insulting or degrading about being female.
SolarBear28 - Tuesday, January 7, 2020 - link
Except we don't yet know how efficiency compares between Ice Lake and Renoir. We are all just speculating based on AMD's improvements over Picasso (and our knowledge of Zen 2 desktop parts).
palladium - Tuesday, January 7, 2020 - link
Which benchmarks?
Spunjji - Tuesday, January 7, 2020 - link
Are you timecop1818's sockpuppet, or do you just follow him around agreeing with all his posts and disagreeing with his critics in the least factual manner possible?
Zizo007 - Wednesday, January 8, 2020 - link
It might be the same person with a different account haha
Nicon0s - Tuesday, January 7, 2020 - link
It must be nice in that fantasy land you are living.
sseyler - Tuesday, January 7, 2020 - link
ROFL. Now THIS is what I'm talking about. Anandtech trolls, take note. Your exemplar has arrived.
Spunjji - Tuesday, January 7, 2020 - link
Opens with ableist slur, follows with speculative FUD, finishes on pointless anecdata.
Maybe you should hang out at WCCFtech instead? You'd fit right in with the other Intel/Nvidia fan trolls.
TheinsanegamerN - Wednesday, January 22, 2020 - link
"ableist slur"Nobody cares about your tumblr genders.
Korguz - Tuesday, January 7, 2020 - link
timecop1818 yea.. and intel is better ?? you forgetting all those quad core chips intel kept giving the mainstream, while telling every one we dont need more than 4 cores for the mainstream.. or how about their 95 watt cpus using up to 200 watts ??
Hul8 - Tuesday, January 7, 2020 - link
Even if tile-based rendering isn't often done on laptops, what in that makes this *benchmark* irrelevant?
Benchmarks aren't always chosen based on what lots of people use, but rather by their ability to produce useful metrics - in this case multi-threaded computing when restricted by only the execution resources, power and thermals.
Spunjji - Tuesday, January 7, 2020 - link
They've got their trigger words, and by heck they're going to spin with them.
"Synthetic benchmark!"
"Vendor-provided!!"
"Efficiency!!!"
Meanwhile the rest of us can be casually excited and wait for the reviews before deciding what's not good enough.
Hul8 - Tuesday, January 7, 2020 - link
I swear some of the most rabid fanboys have a few words they Ctrl-F from the page and react to - without even reading the entire post or article. And that goes for both camps - I've seen it from some people, to defend AMD against some imagined slights (a few words taken out of context).
Spunjji - Wednesday, January 8, 2020 - link
Yup. You can guarantee that whatever is actually being said, somebody will carry it off-topic to a talking point they feel more secure in - even if it's totally irrelevant and they're not actually right about that, either. Tends to be how most political discussions go, too...
msroadkill612 - Tuesday, January 7, 2020 - link
Yeah, there have been lottsa scandals about doctored benchmarks. Oh wait....
AMD under Lisa, have a rep as very straight shooters. Intel only tell the truth for practice. They have zip cred ATM - a laughing stock.
"Absolutely excelling" is a tautology btw. I highly doubt a semi literate finds paid work to justify a fancy laptop.
Nobody makes a living running benchmarks either - any pc - any brand. Benchmarks mean "indicators", & are used by both - perhaps inappropriately, but no side can set the rules. They are simply what consumers have come to expect
Hul8 - Wednesday, January 8, 2020 - link
I think a large part of this stems from confusing benchmarks with real world performance (at a given task that is different than the benchmark).
Benchmarks usually highlight one aspect or side of a product, and you use multiple benchmarks to probe it from different sides.
Cinebench and other tile-based renderers are used to gauge the maximum achievable multi-threaded performance under full load, when there is no inter-thread communication required that would affect the compute efficiency. You use other benchmarks to get at the single- and lightly threaded workloads, and do some real world tests.
It's only the totality of different kinds of tests that tells (enough of) the whole story. Since reviewers have a large audience, they can only give general recommendations on the kind of workloads a product is suitable for, and the kinds it's the best or one of the best for. It's on every individual to consider the tests that are relevant to their specific use case and make their own decisions. (Also: Follow multiple review outlets that do the kind of testing that caters to your use case.)
Spunjji - Thursday, January 9, 2020 - link
Nailed it.
I think the big whinging from the Intel fans comes because, when Zen first released, Cinebench was very much a best-case scenario for AMD's chips - especially Threadripper. It showed them in the best possible light, while performance elsewhere wasn't so hot.
AMD have sorted most of those issues now, but Cinebench has become an easy way to compare their product generations - and sure, it still shows AMD's thread-heavy products in their best light. But the trolls don't have new arguments because Intel don't have new products, so they're going back to the same old ones.
deksman2 - Wednesday, January 15, 2020 - link
Its also not just up to AMD.As it was noted in various independent tests, developer optimisations for hw can make or break results in both games and data centre space.
Most developers implement Intel coding which of course does tend to behave somewhat better on Intel hw, and AMD is left to 'reach' Intel from brute forcing alone (which doesn't exactly make the playing field fair to begin with).
In most recent tests of HEDT and data centre CPUs such as ThreadRipper and EPYC, it was noted that when developers optimised a program for Zen uArch, performance improved by over 50%.
So, we need to bear in mind that in the software field, performance can radically vary as code can be selective.
We need devs to write code (or have AI write code) which can execute as efficiently as possible (and make use of anything the said hw has to offer) on any hw without discrimination.
deksman2 - Wednesday, January 15, 2020 - link
I know that some people don't think there are people who do serious gaming and productivity work on laptops, but there are.
For such people (like me), we do like high multithreaded performance which can be sustained indefinitely (especially for things such as 3d Studio Max which easily maxes out all the cores/threads when rendering animations).
Then there's video-editing, and occasional gaming.
We also need these systems to be portable, so people like me actually like systems such as Acer Helios 500 PH517-61 which have desktop 2700 and Vega 56 with powerful cooling.
I know it's a desktop replacement with not so great battery life, however, it IS highly portable (infinitely so in comparison to a desktop), not to mention, quiet, and can easily sustain maxed out CPU/GPU performance indefinitely while remaining quiet and cool (cooler than some desktops even).
Anyway, I like the fact that 4800H for example seemingly comes within a spitting distance of 3700x (performance-wise) in just 45W TDP envelope, and has a capable iGP to boot which would enhance battery life to a high degree (plus, depending on which dGPU is used, it would also be a capable gaming machine).
Emphasis on single threaded performance is not a big deal for lots of people since Zen uArch has been quite capable of doing this nicely.
Zen 2 is another ballpark though, so it's certainly welcome that we are now going to have real mobile hw with some serious performance punch.
However, as you know, OEM execution will be key.
AMD can have superior hw, but if OEMs don't include capable cooling and then mismatch other internal components and cut corners, it's going to be a problem (but this is not something AMD has influence over sadly).
I just hope OEMs stop treating mobile users as second class citizens and do things competently with high quality control on AMD hw this time around.
Real world performance should reach the advertised benchmarks/numbers if the cooling is done competently... if it's not, we know OEMs will be to blame.
Zizo007 - Wednesday, January 8, 2020 - link
8 Core 15W AMD, what can Intel do with 10nm?
Zen 2 has pretty much the same or faster ST performance vs Intel.
Zizo007 - Wednesday, January 8, 2020 - link
No matter what you say, AMD will be better because they offer similar or better ST performance AND they have better technology (7nm) which is more efficient and faster.
Zizo007 - Wednesday, January 8, 2020 - link
Enjoy your overpriced slow 4 throttling 10nm Intel cores :)
vladx - Wednesday, January 8, 2020 - link
Agreed, but I'll probably go with a Comet Lake laptop since the higher frequency still no doubt makes it superior to Ice Lake.
alufan - Thursday, January 16, 2020 - link
hmm
To further prove that the second gen Ryzens constituted the tipping point for AMD's success, the Ryzen 7 2700X is currently the top best-selling CPU on Amazon, with a very enticing US$159 price tag. Intel's only top 10 CPU is the i5-9600K occupying the 10th place, currently selling for US$222.99. The i9-9900K is occupying the 11th position, and most of the AMD CPUs are mid-range models. Here is the complete top 10 list:
AMD Ryzen 7 2700X - US$159
AMD Ryzen 5 3600X - US$199.99
AMD Ryzen 7 3800X - US$329.99
AMD Ryzen 5 2600 - US$114.99
AMD Ryzen 7 3700X - US$309.99
AMD Ryzen 9 3900X - US$499.99
AMD Ryzen 5 2600X - US$119.99
AMD Ryzen 5 3600 - US$194
AMD Ryzen 7 2700 - US$139.99
Intel Core i5-9600K - US$222.99
looks like facts are a bit skewed timecop1818
nandnandnand - Monday, January 6, 2020 - link
From the slide deck, looks like Athlon Gold/Silver is not the ideal replacement for A6-9220C and A4-9120C, since the TDP is at 15W instead of 6W. The A6-9220C replacement is rumored to be "Dali" and I guess it will be announced later.
Athlon Gold 3150U is just under Ryzen 3 3200U, if we go by the name and clocks. So it's not meant to replace 3200U per se.
At least there will be more diversity at the low end.
neblogai - Monday, January 6, 2020 - link
Dali is probably a refresh of the Raven2 on 12nm. Slides had it at the same tier as Raven2.
nandnandnand - Monday, January 6, 2020 - link
I looked at the bottom of the slides and it says "Renoir/Dali product launch press deck". So I guess Athlon Gold 3150U and Silver 3050U are in fact Dali.
I still want to see what a fanless Ryzen laptop chip could do.
brakdoo - Monday, January 6, 2020 - link
3150u looks like a rebrand of Athlon 300U (Same clocks). They probably just wanted four digit names
neblogai - Tuesday, January 7, 2020 - link
Or, they wanted a lower SKU (like the current 2c/2t), but there was no easy way to go lower with the 300U name.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
The AMD Pentium D.sorten - Monday, January 6, 2020 - link
Thanks for the details Ian. I can't wait for reviews!
It seems Intel has run out of time with their 14nm games.
jeremyshaw - Monday, January 6, 2020 - link
Do we know the PCIe lane count? Is it still 12 lanes or does it get increased this gen?
Also, are there any design wins with TB3 or USB4?
Cooe - Monday, January 6, 2020 - link
If I were to guess it's likely still 12 lanes to cut down on die space (aka technically 20x on die, but 8x are reserved by the iGPU like normal), but they're now PCIe 4.0 meaning effectively double the bandwidth. This should make adding stuff that uses up PCIe connections, like fast networking, MUCH easier than it was with the somewhat I/O deficient Raven Ridge & Picasso platforms (where stuff like 1x lane WiFi chipsets were annoyingly common to save on lanes). Well, assuming more & more such expansion devices start to come around with native PCIe 4.0 support that is.
supdawgwtfd - Monday, January 6, 2020 - link
NOT PCIE 4.
Still 3.0, which is plenty.
4 would use too much power.
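Whether 3.0 is "plenty" comes down to per-lane bandwidth, which is easy to put numbers on (a sketch using the standard line rates: 8 GT/s with 128b/130b encoding for PCIe 3.0, 16 GT/s for 4.0):

```python
# Usable per-lane, per-direction bandwidth for PCIe 3.0 and 4.0.
def pcie_lane_gbps(transfer_rate_gts: float) -> float:
    """GB/s per lane per direction, after 128b/130b line-code overhead."""
    return transfer_rate_gts * (128 / 130) / 8  # GT/s -> usable bits -> bytes

gen3 = pcie_lane_gbps(8.0)    # PCIe 3.0
gen4 = pcie_lane_gbps(16.0)   # PCIe 4.0

print(f"PCIe 3.0 x1: ~{gen3:.2f} GB/s; x12 total: ~{12 * gen3:.1f} GB/s")
print(f"PCIe 4.0 x1: ~{gen4:.2f} GB/s")
```

So each 3.0 lane gives roughly 1 GB/s each way, and 4.0 doubles that - which is why 4.0 would ease the 12-lane squeeze, but also why it costs more power to drive.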
Cooe - Monday, January 6, 2020 - link
You are right. Damn, that's a big miss. Designing laptops with just 12x available 3.0 lanes was already super tight with Raven Ridge/Picasso. And that's before you start talking about new tech like WiFi 6 for ex.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Wifi 6 doesn't exist nor would it use more lanes, lol.
Spunjji - Wednesday, January 8, 2020 - link
A single PCIe 3.0 lane still provides 1GB/s bi-directional. 500MB/s read/write should more than suffice for even the most recent WiFi chips. It's still a big limitation for potential Thunderbolt implementations, though, but I'm part of the "couldn't care less" market on that front - USB C does everything I need at a much lower cost.
msroadkill612 - Tuesday, January 7, 2020 - link
Not sure, but i think it is pcie 4 internally, but the accessible ~chipset lanes are pcie 3.
The Hardcard - Monday, January 6, 2020 - link
One advantage of the mobile parts bringing up the rear is that previous Ryzen mobile parts have included features that weren't ready when the desktop parts came out.
Hopefully, AMD is willing to talk about how these new parts compare to other members of the family. For example, one CCX or two?
maybe now, there will be some fully premium AMD laptops. Ryzen 3000 laptops were better, but still not there.
extide - Monday, January 6, 2020 - link
Two CCX, it's right in the article.The Hardcard - Tuesday, January 7, 2020 - link
The charts and graphs pulled my eyes past that short paragraph. I blame society.
Spunjji - Tuesday, January 7, 2020 - link
We do, indeed, live in one of those. Or so I keep hearing.
Hul8 - Tuesday, January 7, 2020 - link
The later release is probably also due to the higher degree of integration these products require, to design laptop chassis, system board and cooling around them, never mind the firmware optimizations. Manufacturing on leading edge nodes should also get more profitable for lower cost parts as time goes on - APUs have up until now been much cheaper than the desktop CPUs.
mczak - Monday, January 6, 2020 - link
"AMD has also adjusted the L3 amount, to 4 MB per CCX, which is half that of the consumer desktop line."That's not correct, it's only a quarter, since consumer desktop line now has 16 MB per CCX.
(Raven Ridge had half that of the desktop line, IMHO it's slightly surprising that the cut this time is that drastic.)
brantron - Monday, January 6, 2020 - link
Between the integrated memory controller and "extra" L3 from unused CPU cores, they should come out closer to their desktop counterparts than before.
AMD probably isn't too concerned with parity at 100% load. My laptop is a lowly dual-core, and I rarely, if ever, see that happen.
Cooe - Monday, January 6, 2020 - link
Ian made an error in the article. That should say "8 MB per CCX" not "4 MB". And the lack of any I/O die (aka, the memory controller & uncore is all right on die) should make up for that reduction by reducing latency, just like using just 1x CCX did w/ Raven Ridge & Picasso.
Cooe - Monday, January 6, 2020 - link
Scratch that. It seems that AMD DID actually cut the L3 by 3/4's (from 16MB to 4MB per CCX). That's really kinda surprising. Must mean that they got an even bigger latency benefit from bringing the IMC & uncore on die (from the desktop's separate 12nm I/O die) than I was originally expecting, such that the die space savings were more worth it than just cutting the L3 in 1/2 (aka doubling the L3 per CCX vs RR/Picasso), as was the case with prior APUs.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Bigger cache is only shiny-useful in certain conditions, also needs watts. The sweet spot is calculated.nandnandnand - Monday, January 6, 2020 - link
Was a Ryzen 9 4900H even alluded to at the presentation? Or are other sites just mentioning it because it is the rumored top of the 45W stack?
SolarBear28 - Tuesday, January 7, 2020 - link
No mention from what I've seen. Perhaps when AMD are done producing the 4800HS 35W exclusive part for Asus they will turn those bins into a 45W 4900H? I have no idea.zamroni - Monday, January 6, 2020 - link
it's time for Intel to spin off its fab just like amd did for global foundry.
AMD indeed waited too long; its fab became an Achilles' heel for years
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Intel needs everything under their dress - or they'll no longer be relevant.
If they spun off their fabs they'd be replaced in under a decade.
id4andrei - Tuesday, January 7, 2020 - link
AMD is using fabs that have benefited from insane amounts of money from the mobile boom. Going forward those fabs will still receive orders from those same sources. Intel's own demands do not generate enough profit for it to invest in its fabs as much as TSMC/Samsung can.dishayu - Monday, January 6, 2020 - link
Using the 4000 series for what is still the Zen 2 architecture rubs me the wrong way... if it was at least Zen 2+ or something, if not Zen 3.
nandnandnand - Monday, January 6, 2020 - link
That's the way they handled the Zen and Zen+ mobile chips. It's time to retire this complaint already.
SolarBear28 - Tuesday, January 7, 2020 - link
It's a valid complaint. Those less informed could incorrectly assume that Ryzen 3000 mobile APUs have Zen 2 because that's what the Ryzen 3000 desktop parts have.
The Hardcard - Tuesday, January 7, 2020 - link
it is not really a valid complaint. The numbering system just tells people that they are getting the newest chips for any particular year. Everyone who cares about cores and the difference between Zen, Zen 2, Zen 3, etc. knows that the newest APUs come after desktop and server and thus every year have an earlier core.
If you care, you know. If you don't know, you don't care.
Fataliity - Tuesday, January 7, 2020 - link
Yeah it's stupid. It's done by marketing because the 4000 series will be sold during the APU's lifetime, so it doesn't look outdated from a naming perspective.
But i agree, wtf
LogitechFan - Monday, January 6, 2020 - link
Can we just ban Cooe, please? I mean, AMD fambois are idiots, but he gives a bad name even to them...
Cooe - Monday, January 6, 2020 - link
Lol, are you allergic to facts & reason or something? But way to attack my points with facts & data there!... Not...
The cognitive dissonance required to continue believing that Intel is dominating mobile CPUs after today is kind of awe-inspiring. And you'd do well to realize that the vast majority of readers/commenters agree with me. It's literally you & just two others that are living in deep, DEEP denial by believing that AMD completely lied out their ass about the chip's performance. Find me a SINGLE post-Ryzen, AMD-provided benchmark graph that was totally outright fraudulent, & then maybe you might have some extremely shaky ground to stand on.
Otherwise "taking things w/ a grain of salt" ≠ "assume all provided benchmarks are total bullshit, despite years of historical precedence".
This isn't the company that conveniently forgot to tell people about a hidden 5HP chiller during a recent live performance demo, or arbitrarily deciding what does & doesn't count as "real world performance", remember?
The_Assimilator - Tuesday, January 7, 2020 - link
If you're willing to believe AMD's pre-baked marketing slides out of hand, you're just as gullible as those willing to believe Intel's slides. Regardless of AMD's massive progress in the CPU space, mobile has remained their Achilles' heel and until they show us the pudding, there's no proof.
And Cinebench and anything from 3DMark is synthetic trash that has zero bearing on the real world and is only used by overclockers and e-peen-wavers.
Xyler94 - Tuesday, January 7, 2020 - link
Stop sounding like Intel if you are trying to say "stop believing AMD's slides".
Maxon Cinema 4D is an actual piece of software for rendering. Cinebench R20 is based off of Maxon's software, and gives a fair, repeatable show of rendering performance. While I think they should use Blender more, it's also unfair to claim "it's trash data".
Also, fun fact: Before Zen continuously beat Intel in it, Intel had 0 issues with showing how well their CPUs performed using Cinebench. Intel had no problem using benchmarks to give performance data, but suddenly, when AMD's winning more and more benchmarks... Intel is concerned over "real world"?
Would you rather Intel-sponsored reviews and benchmarks be used? How is that any better? A lot of Intel's latest benchmarks on their slides have been using benchmarks they sponsored or helped create... doesn't that rub you the wrong way?
And also, I find it funny they recommended benchmarking MS Word.
Spunjji - Tuesday, January 7, 2020 - link
Who's believing them out of hand? He specifically pointed out that taking them with a grain of salt isn't the same as disregarding them entirely, as LogitechFan and co are. He also asked if anyone had any post-Ryzen charts from AMD that misrepresented their products - as far as I'm aware no such thing exists.
Mobile has indeed remained their Achilles' heel, but they've already made tremendous progress in that space. More has been needed, and I agree that their relative silence on power efficiency isn't encouraging.
Cinebench and 3DMark are useful for apples-to-apples comparisons. You're right that they don't represent most workloads well, but no single benchmark does. What they do do is allow you to compare generational improvements. This is basic stuff that's repeated at the start of pretty much every technical review, so it's a bit weird that you're pushing the "synthetic trash" angle here - especially as CineBench isn't synthetic at all.
I feel like this comment section would be markedly improved if people just stopped taking the precise opposite side in every discussion to someone they think is a "fanboy".
Spunjji - Wednesday, January 8, 2020 - link
This post seems as good a reason as any to request a ban on LogitechFan. The shill clearly feels outmatched and should be retired.
Hul8 - Wednesday, January 8, 2020 - link
If all the fanboys were removed, this place would be mostly deserted.
Not only do they post a lot, and repeat what they've said multiple times per discussion, and in comment sections of multiple articles, they also incite other fanboys (and reasonable people) to respond and argue with them... :-)
Hul8 - Wednesday, January 8, 2020 - link
It's almost like they're the tech media equivalent of Russian troll factories... Hmmm...
Spunjji - Thursday, January 9, 2020 - link
To think that political discourse is now influenced by the same forces that brought us the raging flame wars over the Red Ring of Death and Bumpgate... D:
vladx - Thursday, January 9, 2020 - link
Yep he's an iAMD tard, but censorship is not a solution. His idiocy will make every person with half a brain ignore his nonsense anyways
Spunjji - Thursday, January 9, 2020 - link
Relying on ableist slurs to make your point is often a good sign that you're overestimating your own intelligence. Agreeing with an obvious troll tends to seal that deal.
eastcoast_pete - Monday, January 6, 2020 - link
Looking forward to the first reviews! I am especially curious about head-to-head comparisons of the 45 W 6C/12T i7 and the 8C/16T Ryzen parts; AMD went for more cores, but less cache (Intel's i7 have 12MB). Let the battle begin. Hope to see a bit of a price war here soon, as I am due for a new laptop.
isthisavailable - Monday, January 6, 2020 - link
rip Ice Lake 2019end - 2020start
Silma - Tuesday, January 7, 2020 - link
Priced similarly to Intel does not cut it. AMD should either undercut its rival significantly or offer significantly higher performance.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
"Priced similarly to Intel does not cut it." - When it's beating it soundly, yes it does.
You go buy your Penitum 6, nobody will care. This is the highest-end laptop chip.
You want the shiny, you pay the shiny. (It's STILL less than Intel's gouging)
GeoffreyA - Tuesday, January 7, 2020 - link
Wonder what the desktop APUs will be like. Darn, and I just got a 2200G a few months ago. Oh, well :)
Spunjji - Tuesday, January 7, 2020 - link
Depends on whether you're after CPU or GPU performance.
CPU performance should be a rout, but between the reduced CU count for the Vega GPU and the inability to use LPDDR4X on desktop, I think the GPU side of things might be closer than you'd expect.
Fataliity - Tuesday, January 7, 2020 - link
I don't think there will be desktop APUs anytime soon. The product stack doesn't leave an area where the APU will make a lot of money, unless it's an 8C at $200+. 7nm's getting cheaper but it's still expensive
Jugotta Bichokink - Tuesday, January 7, 2020 - link
APU isn't about making money on the high-end. You are misconceiving something.
Spunjji - Wednesday, January 8, 2020 - link
It would be odd for them not to. They have had trouble competing in the business desktop space due to their lack of an IGP in the standard Ryzen chips, and these chips can address that market very well indeed.
Hul8 - Thursday, January 9, 2020 - link
Well then the solution would be:
Consumers served by the existing 3000 series Ryzen CPUs as well as a few low-end 4000 series APUs (up to 4c/8t, maybe 6c/6t).
Pro series gets all the same, plus the high-end APUs (up to 8c/16t). Or decide that the APU graphics will never be enough for serious use, and have the higher end models include a basic 2 CU iGPU for 2D duties only.
I don't think the 6 and 8 core APUs would ever be as inexpensive as the current ones. Once a consumer is looking at spending $300+ on an APU, they probably wouldn't be satisfied with the iGPU performance.
Spunjji - Friday, January 10, 2020 - link
I'm not sure on the price side of things. If the die size estimate of 150mm^2 is accurate, that's almost exactly 3/4 the size of Raven Ridge/Picasso. We know that 7nm is a more costly node than 14/12nm, but I wouldn't have thought it would be to such an extent that a much smaller chip would wind up being significantly more expensive at retail.
GeoffreyA - Wednesday, January 8, 2020 - link
The increased CPU performance would certainly be appreciated: Zen 2's IPC gains + more cores.
SolarBear28 - Tuesday, January 7, 2020 - link
Based on Ian's round table interview with Dr. Lisa Su herself I don't think the Renoir desktop APUs will be available anytime soon.
Spunjji - Thursday, January 9, 2020 - link
I saw that too. Damn shame, but inevitable when supply is constrained and they have a whole new market to break back into.
nobodyblog - Tuesday, January 7, 2020 - link
%90 16 threads advantage in Cinebench AT 1st Run. If it is Really 15W, expect the next run to be 4% slower.
On Paper, even on paper, AMD won't be as efficient as Intel, for both higher Idle power consumption AND higher Display power use, which is not Intel's 1W display panel. Intel's project Athena ASIDE. which are more optimized, just to mention.
It is very probably not so good in power use, you see 2 times more EFFICIENT won't translate to 2 times more battery life. Intel's Laptops were beating AMD ones greatly in likes of Microsoft Surface Laptop which had both AMD and Intel, 2 times MORE Efficient, you say, without being that optimized.. No 1w display which matters.... No tiger lake which comes soon... Yeah, I expect Intel be faster & ALWAYS BEST QUALITY...
Thanks!
sseyler - Tuesday, January 7, 2020 - link
ROFL. The quality of this troll is remarkably awful—at least put a little effort into it.
Sub31 - Friday, January 10, 2020 - link
More capitalized words than a Diary of a Wimpy Kid book.
SolarBear28 - Tuesday, January 7, 2020 - link
We don't know anything about Renoir's idle power consumption other than that it has been improved over Picasso. Also, why would you expect AMD's Cinebench performance to degrade on a second run more than Intel's? Intel (more than AMD) have a history of pushing mobile CPUs to many times their TDP for short bursts.
No doubt Intel's optimizations within project Athena will play a role. But it's way too early to make the kind of assumptions you are making. Also, Tiger Lake will probably not be available until September/October at the earliest. OEMs are still launching Comet Lake products with availability of some not until March.
nobodyblog - Wednesday, January 8, 2020 - link
In past, cinebench benchmarks showed when AMD is High, it will degrade much more than Intel. As much as half in 3rd run, for example... They exist on web...
AMD says it is limited & doesn't provide sustainable performance at 15W, It is surely so, Even Intel's 8 core 9th gen H i9 is faster than AMD's toppest both 45W & 16 Threads, so, AMD's isn't cooler...
Thanks!
Korguz - Wednesday, January 8, 2020 - link
nobodyblog " They exist on web... " care to share the source for this so we can be sure we are looking at the same thing ??
Spunjji - Wednesday, January 8, 2020 - link
Absolute bollocks. How much the system's performance degrades by depends entirely on the cooling system used and its performance parameters. Some Intel laptops have a steady decline from run to run, some drop after the first run and stay in the basement - and some don't degrade much at all. The exact same is true for AMD notebooks, but they tend to be used in lower-quality designs with inferior cooling solutions (mostly looking at HP and Lenovo here).
The rest of your post is gibberish.
Zizo007 - Thursday, January 9, 2020 - link
LMFAO Go see how Tomshardware forums are flooded with issues of Intel mobile high end CPUs not reaching their turbo frequency, they are locked at 4GHz while advertising 4.5GHz. Why would AMD's 3rd Cinebench run be half degraded? Your post makes 0 sense. Post us proofs of your claim with temperatures, otherwise stop wasting bandwidth with your BS trash. Where do you see the i9 surpassing the 4800H? The 4800H is faster than the 9750H by 12%, and slightly faster, by 2%, than the i9 9980HK 8 Core 5GHz. The i9 is way more expensive and is meant to compete with AMD high end CPUs like the 4900H and above. AMD's 4800HS is 4.2GHz at 35W, which tells us that AMD could make a mobile chip with higher clocks at 45W, such as 4.7GHz, and name it 4900H; that will surpass Intel's i9 10th gen.
Zizo007 - Thursday, January 9, 2020 - link
"Always better quality" LMFAO Intel fanboi
liquid_c - Tuesday, January 7, 2020 - link
Reading the comments section on an AMD article has been cancer inducing for a while, even on AnandTech (sadly).
Rarely have I seen such fervor and hate as what these so-called fans can spill out. What’s really painful is that they accuse others of the things they do / believe.
Lord of the Bored - Tuesday, January 7, 2020 - link
'S an amazing number of folks that seem to have registered an account just to get their flame on today. CES is such an exciting time for the fanchildren.
silverblue - Tuesday, January 7, 2020 - link
I'm not sure they actually care that their comments are public for all to see.
Holliday75 - Wednesday, January 8, 2020 - link
These are the times I kinda wish people had to put their name and face out there for all to see. People tend to be a little more civilized when they have to put their name on it.
Spunjji - Tuesday, January 7, 2020 - link
Agreed. It's still a little better than WCCFtech, though. You'll see the same troll post 15-20 "unique" comments on one article, plus hundreds of bitchy replies :|
SolarBear28 - Tuesday, January 7, 2020 - link
Yeah, Wccftech comments in general are an absolute horror show. At least most here seem to stay on topic.
Xyler94 - Wednesday, January 8, 2020 - link
WCCFTech seems to be 50% random stuff, 40% fanboy screeches, and 10% about the article in question
Zizo007 - Thursday, January 9, 2020 - link
I stopped using WCCFtech because of their idiotic and useless comments; I prefer Tomshardware and AnandTech. Whenever WCCFtech puts up a new article, you instantly see tons of useless and stupid comments spamming.
Spunjji - Thursday, January 9, 2020 - link
Exactly that. These days it feels less like a tech site and more like a recruiting / trolling ground for the worst parts of the internet. Even the worst comments here pale by comparison - probably because it's harder to do instantaneous responses and impossible to post images here.
V1tru - Tuesday, January 7, 2020 - link
Pair that fancy ROG 14" laptop with discrete GPU like 2070S and price it decently: ultrawin
yankeeDDL - Tuesday, January 7, 2020 - link
Do you know why the Zephyrus G14 will use NVidia discrete graphics?
Will there be issues when switching between iGPU and dGPU? Will both Nvidia and AMD drivers need to be installed?
The iGPU of the 4800H should perform close to the 1050 (discrete, non-mobile): I understand that the 2060 is noticeably more powerful, but why will no "H" CPU be available in a laptop without a discrete graphics card?
neblogai - Tuesday, January 7, 2020 - link
iGPU of 4800H will only perform similar to current Vega 11, because the Renoir APU has only 8 Vega CUs, albeit at higher clocks. So, comparable to MX230 or maybe MX250 if coupled with DDR4-3200 or quad-channel LPDDR4, but nowhere close to a 1050.
Spunjji - Tuesday, January 7, 2020 - link
There are already a bunch of laptops with AMD APUs and Nvidia GPUs - the 3750H and 1660 Ti seem to be a popular combination.
AFAIK: Yes, you need both AMD and Nvidia drivers. No, there aren't any more issues than there are with an Nvidia dGPU and an Intel iGPU - i.e. it's mostly fine, but every once in a while the Nvidia driver forgets to power up the dGPU fully, or forgets to power it down after you're done.
yankeeDDL - Thursday, January 9, 2020 - link
OK, thank you
Fataliity - Tuesday, January 7, 2020 - link
I think they imagine, if you're buying a 45W part, you're looking for power, not battery life. So might as well go all out. I'm sure there will be one sooner or later without a dGPU, but none at release.
And I think they chose nVidia over their Radeon cards because they still win at performance per watt, and probably provide easier thermals to cool with the bigger chip.
Fataliity - Tuesday, January 7, 2020 - link
Plus, the 15W variants can be configured into 25W variants, with around the same performance and more CUs.
They said they don't think there will be 25W configurations, but makers could introduce a BIOS flag.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
It's a bigger deal than a BIOS flag, lol.
Hul8 - Friday, January 10, 2020 - link
Obviously it matters in terms of the overall laptop design (cooling and thermal insulation, or component placement for comfort), but for the APU itself, it's just that the firmware (included in the BIOS/UEFI) tells it to run at whatever cTDP setting is required - a range of 10 - 25W is available for the 4000 U-series.
yankeeDDL - Thursday, January 9, 2020 - link
It makes sense. I suppose there's no real counterpart from AMD to nVidia dGPU on mobile. Yet.
Gonemad - Tuesday, January 7, 2020 - link
This generation should bury water-cooling, and 15W should allow even passive cooling in thin designs.
I'd risk someone has the gumption to slap one of these in a phone.
Intel has to be squirming in their chairs right now.
Zizo007 - Thursday, January 9, 2020 - link
You need to be under 10W for passive cooling in a small and crowded compartment such as a laptop. Phones have 5W TDP. 15W is perfect for a laptop; I find 45W a lot, and it causes overheating, like my 4820HQ reaching 90C.
Sub31 - Friday, January 10, 2020 - link
My Firepro W7100 reaches 98C under regular use - doesn't even ramp the fan over 70%. Got the thing recycled - miracle it's still alive
yeeeeman - Tuesday, January 7, 2020 - link
These are very nice products from AMD and I am pretty sure they will sell and raise AMD's image in the notebook market.
Still, we need to see where they stand with these products, because their handicap on mobile was much bigger than in desktop, and mobile is a different market where efficiency is paramount.
H series will sell well with gamers, though maybe not as the definitive choice (lower max boost speeds mean lower gaming perf), but we need to see reviews.
U series also is interesting, but here it all depends on whether they can get the idle power consumption under control. Thin and light laptops are called that for a reason. People buy them to use during travel, and battery life is paramount. Not threads, not even absolute performance, but battery life and small things like sleep battery drain, sleep on/off speed - things that AMD is not usually great with (not even Intel, but they did improve massively with Ice Lake).
SolarBear28 - Tuesday, January 7, 2020 - link
Wow, a well reasoned and balanced comment! Someday in the distant future I dream these will outnumber the argumentative troll posts... oh who am I kidding lol
Zizo007 - Thursday, January 9, 2020 - link
Intel high end mobile CPUs are capped at 4GHz and do not reach 4.5GHz; users at Tomshardware are reporting this issue. Let's see if AMD manages to reach 4.2GHz as they claim; if they do, they will also be faster in single threaded loads than the i9 10th gen.
5080 - Tuesday, January 7, 2020 - link
Now it's time for Microsoft to announce an upgraded Surface that sports the new Ryzen 7 4800U!
Spunjji - Tuesday, January 7, 2020 - link
It's a real bugger that their product cycle aligns so poorly with AMD's products...
That said, they're still selling a Surface Book 2 with a GTX 1060 in it. Maybe they've been waiting to build the Surface Book 3 with a 4800U and an RX 5600M in it... we can dream, eh?
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Why a surface? You really love that touchscreen? Lol
Spunjji - Wednesday, January 8, 2020 - link
Honestly? I mostly like the Surface Book for the display. There are not many devices out there with such a high quality 3:2 panel, let alone devices that I could potentially use for gaming, for photo editing with a stylus, *and* as a touch-screen content consumption tablet.
Gondalf - Tuesday, January 7, 2020 - link
Yes it is strange, why Microsoft not waiting for this??
Hey!! it is clocked faster than actual Ryzen Mobile, it have two times the cores, faaaaar more IPC and a faster GPU. In average is 3X actual Ryzen Mobile, faster than a Ryzen desktop dropped at 15W and faster than Epyc module......... with only one node step.
More or less is done on 5nm but AMD don't know this. Too good to be true?? :)
Spunjji - Wednesday, January 8, 2020 - link
Reading an attempt at a straw man distraction written by somebody who couldn't tell straw from his own pubic hair is a real treat.
quadibloc - Tuesday, January 7, 2020 - link
As Intel currently has a 1.1 GHz 6 core part at 15 watts to compete with a 1.8 GHz 8 core part, AMD has, at least for the moment, become the only choice in laptops. That's where Intel was ahead, so it's an existential threat in my opinion. Of course, in a few months, Intel will have 10nm+ out, but now they will have more work to do catching up.
Gondalf - Tuesday, January 7, 2020 - link
Are you sure amd is capable to sustain this speed for more than few minutes before dropping at 0.8 Ghz 8 cores??. You know what is their strategy, their old slides are pretty clear.
The other strange thing is that suddenly Zen 2 have gained more than 10% in single core performance :), pretty funny he?? To be noticed they have now only 1MB of L3 per core, this is not a good sign for real world performance.
Looking at claimed results, this core should be inside the Epyc....but it is not.
My suspect there is a lot of marketing at work and a generous reference design that cool a lot of heat.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Gondalf you are dangerously simple.
THIS IS A CPU. IT DOES NOT INCLUDE THE COOLING ENVELOPE OF A BUILT LAPTOP.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
"Looking at claimed results, this core should be inside the Epyc....but it is not."
Man, you are just... I'm going to be nice and say nothing. You are nothing!
SolarBear28 - Tuesday, January 7, 2020 - link
Surely the base clock can be maintained at 1.8 GHz while keeping power at 15W. That is, after all, what the base clock is supposed to mean.
Zen 2 have gained 10% compared to what?
Total cache per core is around 1.5MB. Intel's 6 core i7 has 2 MB per core. Total cache is about the same. I don't see a problem.
Same TDP as Picasso with twice the performance per watt running Cinebench, why would this have heat issues?
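The cache arithmetic in this comment can be checked in a few lines (a rough sketch using the figures cited in the thread; the exact L2/L3 split per core is an assumption for illustration):

```python
# Rough cache comparison as discussed above (thread figures, not official specs):
# Renoir: 8 MB shared L3 plus an assumed 512 KB L2 per core;
# 6-core i7: "2 MB per core" as stated in the comment.
renoir_cores = 8
renoir_l3_mb = 8.0
renoir_l2_mb = 0.5 * renoir_cores          # 512 KB L2 per core
renoir_total = renoir_l3_mb + renoir_l2_mb
renoir_per_core = renoir_total / renoir_cores

intel_cores = 6
intel_total_mb = 2.0 * intel_cores         # 2 MB per core

print(f"Renoir: {renoir_total} MB total, {renoir_per_core} MB/core")
print(f"6-core i7: {intel_total_mb} MB total")
# Both land at 12 MB total - "total cache is about the same", ~1.5 MB/core for Renoir.
```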
Spunjji - Wednesday, January 8, 2020 - link
"Looking at claimed results, this core should be inside the Epyc....but it is not"
It is. It's Zen 2. It released in desktop Ryzen and Epyc first, and it's bludgeoning Intel in both areas. Do try to keep up.
5080 - Tuesday, January 7, 2020 - link
Yes, it is indeed. Especially since Intel is still dealing with all the security holes and patches for it.samal90 - Tuesday, January 7, 2020 - link
Hopefully someone does a laptop with a 4800H chip without a dedicated GPU.
Gondalf - Tuesday, January 7, 2020 - link
Are you sure?? with only 8 MB of L3 instead 32MB the performance will be idiotic outside their fake slides. There isn't enough L3 to meet the L2 needs.
SolarBear28 - Tuesday, January 7, 2020 - link
Why do you think Zen 2 needs mountains of cache to perform well? The 8 core AMD has just as much total cache as the 6 core i7. Do you label everything you don't like as fake? Is it possible to believe that Cinebench doesn't tell us everything about a processor, while also believing that AMD didn't fake their results? Did I use enough question marks?
Korguz - Wednesday, January 8, 2020 - link
solarbear28.. why ?? because gondalf thinks the majority of zen 2s ipc gain is from its cache sizes.. if that was the case.. wouldnt intel of increased its caches as well ? he is an intel troll.. and will do what it takes to make intel look good.. and amd look bad. plain and simple... he just cant handle intel not being the best anymore...
Holliday75 - Wednesday, January 8, 2020 - link
I think you guys are forgetting one thing here. Monolithic die.
Nicon0s - Thursday, January 9, 2020 - link
It's a mobile chip, not a desktop CPU, and it uses a monolithic die, which means there's no latency penalty for having multiple chiplets. The 8MB cache won't stop these 45W chips from reaching their full potential.
enzotiger - Tuesday, January 7, 2020 - link
Why hasn't AMD launched desktop Renoir? I think that is the only segment Intel still has a lead in, and AMD should be able to take it easily. Maybe this segment is too small and AMD doesn't care.
Gondalf - Tuesday, January 7, 2020 - link
To have what? Low performance?? You need of clock speed and large cache to meet workloads. Renoir have nothing of both.
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Extra-large cache is not useful to all workloads. False. Debunked.
Gondalf - Tuesday, January 7, 2020 - link
All nope but many yes, expecially a low latency uniform access L3 like that of Zen 2
Korguz - Wednesday, January 8, 2020 - link
gondalf.. ok fine. then explain WHY intel didnt increase their cache sizes more then they have ?? lets see what bs you can come up with to explain that...
Holliday75 - Wednesday, January 8, 2020 - link
IO is on die now. The whole point of increasing cache sizes was to keep needed data close to the cores to avoid the latency penalty. Not needed for this monolithic die.
Korguz - Wednesday, January 8, 2020 - link
Holliday75 gondalf doesnt care.. he is certain zen2 gets most of its performance from the caches..
Jugotta Bichokink - Tuesday, January 7, 2020 - link
Low performance = Intel now.
Go on, pay more for the privilege, fangirl.
No one will miss you in the real world!
Gondalf - Tuesday, January 7, 2020 - link
It's easy post good number running at 25W + chipset power. They does the same with Ryzen Mobile 3000. But reviews said the truth.
At a core level right now Intel is well ahead. Zen 2 is a little obsolete after Sunny Cove
SolarBear28 - Tuesday, January 7, 2020 - link
Zen 2 and Sunny Cove are within 5-10% of each other on IPC and frequency, so neither is obsolete.
Korguz - Wednesday, January 8, 2020 - link
umm dont you mean ryzen mobile 4000 ?? or are you trying to prove your point in a way that favors your beloved intel ?
Korguz - Wednesday, January 8, 2020 - link
gondalf.. shut up already.. INTEL is the one that NEEDS clock speed now... not amd. and cache sizes does not increase ipc as much as you keep trying to claim...
Spunjji - Wednesday, January 8, 2020 - link
Ah, I see the idea here. Everything that AMD's CPUs need to perform well is what Intel's CPUs have now; when Intel have different things, then those will be the things AMD need to perform well, but none of the things AMD ever has will ever allow their CPUs to perform well and Intel never need to learn anything from them.
Gondalf logic!
Spunjji - Thursday, January 9, 2020 - link
Best guess: they're capacity constrained and mobile is a bigger target for them - they need to start getting design wins ASAP.
There may also be other factors. The desktop APUs are probably a lesser bin. If they're getting good yields of chips that make the grade for mobile, they may not yet have enough "inferior" chips around to do a proper launch for the desktop APU. In that case we should expect to see it later when they've built up enough of them, a bit like the 5600 XT.
eastcoast_pete - Tuesday, January 7, 2020 - link
I am a bit amazed at the heated passion for or against the Ryzen 4000 mobile APUs. Personally, I think it's great that we finally have real competition in the ultraportable and performance laptop space, and look forward to the first head-to-head reviews. Since I neither own shares nor work for AMD or Intel, all I care about is who can sell me the biggest bang for my bucks. If that ends up being AMD, even better, as it keeps the competition alive.
SolarBear28 - Tuesday, January 7, 2020 - link
+1
Spunjji - Wednesday, January 8, 2020 - link
+1,000
Hul8 - Friday, January 10, 2020 - link
I have to wonder if some people have gambled and sold stock short...
msroadkill612 - Tuesday, January 7, 2020 - link
There is more to a laptop than processor. Significantly, there has been a deluge of classy AMD based models.
No more OEM crap configuration models and foot dragging to oblige Intel. Intel's stuff is all over Asus's front lawn e.g.
They have had it up to here w/ intel's years of reiterated & rebroken promises on 10nm (which they have invested heavily in and relied on for their roadmaps).
The final straw has been their egocentric hubris in robbing supply of more humble CPUs, in a bid to win the unwinnable battle of cores, using ever bigger & lower yield chips.
msroadkill612 - Tuesday, January 7, 2020 - link
mosesman cribs it well imo:
"Summary
Better than expected.
The Ryzen 4000 mobile 7nm will ramp in 1Q20 (1 quarter ahead of our view), with specs including 8-cores that beat competitors single thread performance and blow them away in all other metrics, including performance per watt. There will be over 100 laptop designs for 2020 in all categories (Thin & Light, gaming/creator, pro) and AMD is set to replicate desktop success in 2019 in laptops. Intel has limited options, given their 14nm power and die size issues, and 10nm being a broken node at the moment, in our opinion."
eastcoast_pete - Tuesday, January 7, 2020 - link
Thanks Ian! Question: do I see this correctly? One could, in theory, run a 4800U chip just as hard and fast as the 4800H, provided the cooling solution can handle the higher TDP? If so, I would love to see some vendors offering this variant. Being able to clock down to 1800 MHz and its low power envelope and yet turbo up to the same 4.2 GHz max as the 4800H would be very attractive!
On a related note, did you hear whether SmartShift is indeed ready for prime time? The ability to switch seamlessly between iGPU and dGPU is essential to make a performance laptop a real daily driver, at least for my situation. A dGPU just eats too much battery when unplugged.
SolarBear28 - Tuesday, January 7, 2020 - link
AMD states that the U series chips can be configured to run in 25W mode rather than the standard 15W (if the OEM supports it and provides enough cooling) for better sustained performance. Obviously this still wouldn't match the sustained performance of a 45W H series part.
I would think there are at least some binning differences between the U series and H series that allow the H series to run better at high frequencies (or allow the U series to run more efficiently at lower frequencies), however I don't know if it's anything more than that.
Spunjji - Wednesday, January 8, 2020 - link
Usually there's a bit of a trade-off between being able to operate well at low voltages and being able to operate well with higher clocks and higher voltages - so it'd make sense that a chip binned for 15W operation might not run as well at 45W as a "genuine" 45W chip, even though they're the same design.
eastcoast_pete - Wednesday, January 8, 2020 - link
I would normally agree; however, the specs list the 4800U with a top turbo and all other specs (minus the one CU for graphics in the H) identical to the 4800H. That, and the in fact lower top speed of the GPU in H chips, leads me to believe that the H line APUs are actually lower-binned U chips. I hope AT or other channels can play with a 4800U and try pushing the thermal envelope; unfortunately, that'd void any warranty on the laptop.
Spunjji - Thursday, January 9, 2020 - link
They will definitely be lower binned chips - I'm not sure I was clear, so to reiterate, sometimes the "lower" bins tolerate voltage better than the "higher" ones, so even though they don't run as efficiently as the best bins they can run a little faster when given more juice.
This is just speculation, though - I don't know enough about the properties of the 7nm process :)
SolarBear28 - Tuesday, January 7, 2020 - link
I'm a little confused why the 4700U isn't multi-threaded. It doesn't use much power does it? But it doubles multi-threaded performance. I understand if AMD wants to gimp the low end parts, but it seems strange to do on a Ryzen 7 chip.
Hul8 - Tuesday, January 7, 2020 - link
I think you really are confused. Simultaneous multi-threading doesn't double the compute power. The two threads sharing the same physical core can only run concurrently when they happen to require different parts of the core. Other times they end up waiting for each other. The net effect is that SMT increases performance by around 25-30%.
8 cores with SMT will be equivalent to around 10 real cores.
This is also the reason why an 8c/8t (i7-9700K) is considered largely equivalent to a 6c/12t (~7.5 physical cores' worth of computational power), not accounting for differences in IPC, frequency or overclockability.
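Hul8's back-of-the-envelope math can be sketched as follows (the ~25% SMT uplift is the commenter's rule of thumb, not a measured figure, and real gains vary widely by workload):

```python
# "Effective cores" arithmetic for SMT, using the ~25% uplift assumed above.
SMT_UPLIFT = 0.25  # assumed throughput gain from the second thread per core

def effective_cores(physical_cores: int, smt_enabled: bool) -> float:
    """Approximate throughput, measured in units of one physical core."""
    return physical_cores * (1 + SMT_UPLIFT) if smt_enabled else float(physical_cores)

print(effective_cores(8, True))   # 8c/16t -> 10.0, as stated above
print(effective_cores(8, False))  # 8c/8t  -> 8.0 (i7-9700K style)
print(effective_cores(6, True))   # 6c/12t -> 7.5
```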
Factor in that doing more work - you have double the number of threads and get +25% more work done - will consume more power. Running such a CPU at full tilt will require one or more of:
- lower frequencies
- lower voltages
- better power delivery and cooling; or
- better binned silicon (so you can run the same frequencies at lower voltage and stay within the required power usage).
I would postulate that 4800U is better binned, so can stay within the power and thermal envelope even with double the threads, without affecting frequencies too much.
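The voltage/frequency options listed above follow from the textbook dynamic-power approximation P ≈ C·V²·f. A minimal sketch of how binning pays off (the scaling factors are illustrative, not Renoir data):

```python
# Dynamic CPU power scales roughly as P = C * V^2 * f, so power relative
# to a baseline depends quadratically on voltage and linearly on frequency.
def relative_power(v_scale: float, f_scale: float) -> float:
    """Power relative to baseline when voltage and frequency are scaled."""
    return (v_scale ** 2) * f_scale

# A better-binned die stable at 95% voltage, same frequency: ~10% power saved.
# Dropping frequency by 5% as well: ~14% saved.
print(relative_power(0.95, 1.0))   # ~0.90
print(relative_power(0.95, 0.95))  # ~0.86
```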
SolarBear28 - Tuesday, January 7, 2020 - link
Interesting, so that is why the 4700U has a higher base frequency than the 4800U despite likely being a lower binned part.
Hul8 - Wednesday, January 8, 2020 - link
Of course, it could also be that they're *not* binned any differently, or not differently enough that it would make much of a difference; the worst quality APU silicon with all 8 CPU cores functional would probably be stockpiled for future desktop products, or used now for the lower core count models (just with 2 - 4 cores fused off).
serendip - Tuesday, January 7, 2020 - link
Surface Pro 8: 8C/16T APU, 15W, LPDDR4x, fanless - one can dream. I'm quietly excited about these new APUs, but performance numbers take a back seat to energy efficiency for mobile parts. AMD will have trounced Intel if they can come up with a part for the Surface Pro tablet.
wilsonkf - Wednesday, January 8, 2020 - link
It is absolutely not possible to cool 15W of heat fanless in a laptop.
neblogai - Thursday, January 9, 2020 - link
It is possible, and has been done, even in ultraportables - look at the Acer Alpha and similar: 37Wh battery, runtime at full load of 112 minutes = ~20W power use, cooled fanless. And this is in a very limited 12" tablet formfactor.
https://www.notebookcheck.net/Acer-Aspire-Switch-1...
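neblogai's ~20W figure checks out as average power = battery energy / runtime:

```python
# Sanity-check the figure above: a 37 Wh battery drained in 112 minutes
# at full load implies roughly 20 W of average system power dissipation.
battery_wh = 37.0
runtime_h = 112.0 / 60.0

avg_power_w = battery_wh / runtime_h
print(avg_power_w)  # ~19.8 W, i.e. roughly the "20W power use" quoted above
```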
mattkiss - Tuesday, January 7, 2020 - link
The article mentions support for LPDDR4X memory, but for the 4800H, I don't see it mentioned on AMD's site:
https://www.amd.com/en/products/apu/amd-ryzen-7-48...
Perhaps Ian can clarify this.
neblogai - Wednesday, January 8, 2020 - link
Maybe AMD doesn't see the use of it on the 4800H, especially if Renoir's LPDDR4X support is dual channel (lower bandwidth than DDR4), not quad channel. Although that is exactly what I would want (13" ultraportable H-series with LPDDR4X if quad channel).
www.pakmegaplace.com - Wednesday, January 8, 2020 - link
AMD has been crushing Intel in all other categories but wasn't able to in mobile processors; specifically, single-thread performance and power efficiency were behind Intel's. Now that both of these issues have been resolved, according to AMD, this should be a great thing for customers, as they'll have a choice between AMD and Intel laptops.
Zizo007 - Wednesday, January 8, 2020 - link
There is no way Intel can match AMD even in another year; AMD are on 7nm EUV while Intel are still on 14nm on desktop and 10nm in mobile. Also, AMD ST performance is equal to or slightly better than Intel's 10nm, while MT performance is more than double. AMD integrated graphics are also way superior to Intel's.
Intel's mobile high end CPUs have a big issue with boost clocks: they are capped at 4GHz instead of the advertised 4.5GHz; it's all over the place in Tomshardware forums. If AMD can really hit its 4.2GHz boost then they will significantly surpass Intel's 10nm in ST performance and by more than 250% in MT.
Gondalf - Wednesday, January 8, 2020 - link
The ST performance on AMD's slides is fake. We already know Sunny Cove badly beats Zen 2 per GHz on desktop with fast DRAMs. Bet they tested the worst Intel laptop available against their generous and well cooled reference design.
Zizo007 - Wednesday, January 8, 2020 - link
How is it fake? Because you decided so? So what you are telling me is that Intel is not fake?
What an Intel fanboi. Full of them at AnandTech sadly.
Zizo007 - Wednesday, January 8, 2020 - link
Where are you reading that Sunny Cove is beating Zen 2 on desktop? There aren't any benchmarks about it yet, or any engineering sample. Even then, by the time Sunny Cove is released, AMD will already have released Zen 3 in 2021.
Gondalf - Wednesday, January 8, 2020 - link
Hey, you are on AnandTech. There is a full article about the Sunny Cove core of mobile Ice Lake (for sale now). A good session of SPEC (15W stable) showed the clear per-GHz superiority of Intel's Sunny Cove with mobile memory over Zen 2 for desktop with fast DRAMs and gigantic L3.
Sunny Cove has been out since the middle of last year; now it's time for Willow Cove in the middle of this year, just to stay well ahead of Zen 3 by a quarter.
Korguz - Wednesday, January 8, 2020 - link
yea ok sure gondalf.. you also still believe what intel says dont you ??
Zizo007 - Thursday, January 9, 2020 - link
You can't compare mobile vs desktop, that doesn't make any sense. What did you smoke? Clock for clock, Zen 2 beats Sunny Cove, as Zen 2 has better IPC than any current Intel CPU. Intel's 5GHz 9900KS is comparable to a Zen 2 at 4GHz in ST performance.
Zizo007 - Wednesday, January 8, 2020 - link
I meant Ryzen 5000 in 2021.
wilsonkf - Wednesday, January 8, 2020 - link
I would safely bet AMD laptops won't have Thunderbolt ports, if I were an Intel fanboy.
Hul8 - Wednesday, January 8, 2020 - link
Well obviously, since "Thunderbolt" is an Intel trademark - no one else can use the name. There are *compatible* solutions available for Ryzen, though, like this Gigabyte one:
"Threadripper: Perfect Thunderbolt Compatibility on the Designare TRX40 Tested"
https://www.youtube.com/watch?v=59svND1ZtXI
a YouTube video by Level1Techs, 2019-12-24.
Alistair - Wednesday, January 8, 2020 - link
AMD just announced a 2.1GHz 6 core 15W laptop CPU to compete against Intel's 1.1GHz laptop CPU. This is a big deal.
Gondalf - Wednesday, January 8, 2020 - link
Likely you mean 25W?? I think yes. I don't see competition here; 8 cores simply don't fit in 15W at these clock speeds. In fact they say "up to" about clock speeds, turbo included. My bet: we will never see an 8 core Ryzen Mobile 4000 in a thin laptop. What was announced right now is of the 25W class.
The funny thing is AMD doesn't have a fast-clocked 4 core / 8 thread part, a pretty suicidal thing in mobile.
Holliday75 - Wednesday, January 8, 2020 - link
You keep this up, we are going to have to take you to the vet and put you down.
Zizo007 - Wednesday, January 8, 2020 - link
The 8 core is a 15W chip @1.8GHz with 4.2GHz turbo. Of course it won't be 15W under turbo, but it's worse for Intel, just like their 300W desktop CPUs: the 9900KS will use up to 280W under turbo, 350W when overclocked, which is just insane. The 6 core from AMD is 2.1GHz @15W with a turbo boost of 4GHz.
Intel's turbo boost is throttled to 4GHz instead of the advertised 4.5GHz; Tomshardware forums are flooded with Intel throttling issues on mobile. If AMD really reaches 4.2GHz turbo then it will be faster in ST than the best Intel mobile CPU and way faster in MT.
Zizo007 - Wednesday, January 8, 2020 - link
AMD made 8 core thin laptops a reality, with decent performance.
Gondalf - Wednesday, January 8, 2020 - link
I think the better strategy is to lower the SoC die size as much as they can, seriously crank up the clock speed, and try to grab more and more market share from Intel in a situation of tight 7nm supply. This is the wrong SoC for AMD's needs in the current timeframe.
Korguz - Wednesday, January 8, 2020 - link
new flash gondalf.. intels 10nm chips are in even shorter supply.. cant go above 4 cores.. and are clocked lower then their 14nm counterparts..cant handle your beloved intel loosing on practically all front can you ? keep trying to spread more bs you dont have a chance of proving..
Spunjji - Monday, January 13, 2020 - link
1) The SoC die size is pretty small already. 80+% yields will do fine.
2) The clock speed won't go higher without a different process or a different architecture. They're not in a position to radically change either.
3) I trust their perspective on what SoC they need better than yours.
Korguz - Wednesday, January 8, 2020 - link
gondalf.. post links to your bs.. or shut up.. most of what you say.. is bs.. AND it looks like you are comparing sunny cove to zen+ trying to pass it off as zen 2... the ONLY thing that is fake.. is you
BerenApJiriki - Wednesday, January 8, 2020 - link
For what I use my work laptop for - a full desktop replacement driving three screens - these new processors look awesome. Sure, we are not getting full fat 8/16, but this is a genuine desktop replacement for the mainstream user. There is a big difference between 100+ designs and my ability to order one, though... I hope we see good supply, but like I hope most buyers will do, I will set a budget, look at the models that are actually available for that budget, and then look at good real world reviews of those machines - plus, if at all possible, get actually hands on with the one that looks best before buying. We all know that OEM limitations on memory speed and the TDP will make far more difference than +/- 1-10% theoretical performance in an ideal scenario. Right now these CPUs look awesome; let's see them in the wild and then compare!
Zizo007 - Thursday, January 9, 2020 - link
If the 4800H can really reach 4.2GHz, then it will be faster than the i9 10th gen in single threaded loads and way faster in multithreaded loads. Many users on Tomshardware are reporting that their mobile high end Intel CPUs are capped at 4GHz and not reaching the advertised 4.5GHz. Let's see if AMD can reach their turbo speeds when benchmarks appear this year.
Spunjji - Thursday, January 9, 2020 - link
You've repeated that claim a few times, but I can't find the reports you mention. Can you link us to an example/s?
Zizo007 - Thursday, January 9, 2020 - link
https://www.reddit.com/r/intel/comments/c6d8hk/i7_...
My thread on Tomshardware was about the 8750H not reaching 4.1GHz.
What I mean is that those Intel 45W chips never reach their 4.5GHz turbo speed: temps hit 98C and the CPU starts throttling. My 4820HQ reaches 98C during gaming with the fan at max, Grizzly Kryonaut thermal paste, and a coolpad.
In other words, if AMD doesn't throttle and holds 4.2GHz, it would beat Intel's i9 10th gen.
My only AMD laptop was a Turion X2 and temps weren't that high. With 7nm it's possible to hit 4.2GHz turbo without throttling, I hope.
Korguz - Thursday, January 9, 2020 - link
i find it funny.. that in order to get the notebook to work as it was intended... one has to spend hours and hours tweaking and researching so it will.. maybe that cpu just shouldnt have been put in a notebook like that ?
Zizo007 - Saturday, January 11, 2020 - link
Exactly, laptops shouldn't be more than 28W. 45W is too much for a laptop.
Korguz - Monday, January 13, 2020 - link
Zizo007 thats not what i mean.. 28w, 45w, whatever watt... you still shouldnt have to spend hours tweaking and playing with settings to get it to work as it is supposed to.. 45w isnt too much for a notebook, IF the cooling system for it.. is done right. obviously, the cooling for these chips isnt...
deksman2 - Wednesday, January 15, 2020 - link
Or rather, laptop manufacturers should do a better job designing laptop cooling, rather than cutting corners like they usually do. This is especially true for AMD laptops, but Intel suffers in this area too (along with the fact that people had to delid their CPUs to get them performing as they should have).
Spunjji - Monday, January 13, 2020 - link
Thanks for the share. My 17" gaming laptop has a 6700HQ with a maximum boost of 3.5GHz - it will hit 80+C when gaming, and that's with a ~125mV undervolt. Considering that the underlying architecture hasn't changed much from 6th to "10th" gen, it makes sense that they can't actually hit their supposed maximum boost speeds.
Aljon Pobar - Friday, January 10, 2020 - link
Competition is really good for us consumers.
carcakes - Friday, January 10, 2020 - link
"Support for Infinity Fabric Link GPU interconnect technology – With up to 84GB/s per direction low-latency peer-to-peer memory access1, the scalable GPU interconnect technology enables GPU-to-GPU communications up to 5X faster than PCIe® Gen 3 interconnect speeds2" - source: Radeon Pro Vega II Duo
https://www.notebookcheck.net/AMD-Radeon-Pro-Vega-...
&
https://www.amd.com/en/graphics/workstations-radeo...
Even if it's only PCIe 3.0 on mobile systems, that's still plenty quick - but there is already IF2; Infinity Fabric has reached its 2nd gen. Just imagine the kind of power available in the APU alone... for a mobile GPU + IF??
Is it IF2? And even then the die is not chiplet-based - guess what?? Yup... SmartShift.
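For context on the numbers quoted above: AMD's 84GB/s per-direction figure lines up with the "up to 5X" claim when set against PCIe 3.0 x16 (~15.75GB/s per direction). A quick sketch of that arithmetic, using the standard theoretical PCIe figures rather than measurements:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> GB/s per lane per direction.
pcie3_lane_gbs = 8 * (128 / 130) / 8     # ~0.985 GB/s per lane
pcie3_x16_gbs = 16 * pcie3_lane_gbs      # ~15.75 GB/s per direction

if_link_gbs = 84.0                       # AMD's quoted Infinity Fabric Link figure
print(pcie3_x16_gbs)                     # ~15.75
print(if_link_gbs / pcie3_x16_gbs)       # ~5.3x -> consistent with "up to 5X"
```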
carcakes - Friday, January 10, 2020 - link
It means at least LPDDR4X 4ch @2667... and now with SmartShift even more... How is memory access? Which gen is it? Is it all clocked the same?
carcakes - Saturday, January 11, 2020 - link
How does the discrete GPU do with HT off, as compared to SmartShift?
SharonTTurner - Monday, January 13, 2020 - link
..I dunno... ice lake is pretty competitive.
SharonTTurner - Monday, January 13, 2020 - link
nicepeevee - Monday, January 13, 2020 - link
WTVega?
eastcoast_pete - Monday, January 13, 2020 - link
@Ian: I may have missed it among the 380+ comments, but isn't this chip, especially the H line, also the dress rehearsal for the CPU part of the custom APUs that will power the PS5 and the next Xbox? Any rumors from AMD? The APUs must be in production already if Sony and MS want to ship the new consoles in mid/late 2020.
rrinker - Monday, January 13, 2020 - link
Gimme an 8C/16T desktop with integrated graphics, even if it's the lowest of the low, and my next build would use it. I just need basic graphics for a server, but the current desktop line is limited to 4C/8T with an iGPU.
Targon - Thursday, January 16, 2020 - link
One possible error here: the mention of the 8 cores being broken into two 4 core CCXs doesn't apply with Zen 2, since AMD went to 6 and 8 core CCX units for the desktop. This means that for the laptop chips, we may be looking at a single 8 core CCX linked to the GPU instead of two four core CCXs linked to the GPU.
carcakes - Friday, January 17, 2020 - link
We saw that the CPU sped up 12% as well. Now that work shifted to the mobile dGPU, having in mind a 12%+ gap when the APU shared the CPU+iGPU on IF?? Which gen?? If it speeds the mobile dGPU the same... SmartShift... o.k... but... is it 12% for an IF link of almost 100GB/sec?? Lol
sharath.naik - Monday, January 20, 2020 - link
They missed an important trick here: they could allow the 4800U to clock up to 45 watts. It would give those who require CPU performance and occasional gaming with long battery life a good alternative in the market.
Fulljack - Monday, January 20, 2020 - link
I doubt AMD or OEMs would allow that, since increasing TDP that far above spec could be dangerous to the chip, not to mention the inadequate cooling the OEM will offer on their product. Allowing it outright when the system couldn't handle the increased temperature would just be calling for a lawsuit.
A proper desktop APU would have:
1) multi-chip design similar to Ryzen, with separate chips for CPU and GPU
2) Navi-based GPU part, similar to 5500M
3) 4 channels of RAM, preferably DDR5 to feed that GPU. Would be great if the 4 SODIMM slots for it are on the package (4 sides) to reduce latency.
4) COMBINATION of GPUs in case another Navi-based GPU is used in the PCIe x16 slot; for example, adding a 5500 should double the performance.
It would make sense to have the memory controller chip also having all the L3 cache used by both CPU and GPU.
obi210 - Sunday, February 2, 2020 - link
8 Zen 2 cores. APUs just got interesting.