
  • 29a - Tuesday, March 30, 2021 - link

    No iGPU tests?
  • Alistair - Tuesday, March 30, 2021 - link

    Quote from Ars Technica: Rocket Lake-S gets a small but noticeable upgrade to its integrated graphics performance—the 10th-generation Core CPU's UHD 630 graphics gets bumped up to UHD 750. While it is an improvement, it's nothing to write home about—if you were hoping for an equivalent to Intel's Iris Xe graphics in Tiger Lake laptop CPUs (or AMD's Vega 11 in desktop APUs) you'll be sorely disappointed.

    A modest GeForce GTX 1060 is good for a Time Spy Graphics score of roughly 4,000. Intel's flagship i7-1185G7 laptop CPU manages nearly half that at 1572, with AMD's Vega 11 lagging noticeably behind at 1226. Rocket Lake-S' UHD 750 comes in at a yawn-inducing 592—a little less than half the performance of Vega 11 and a little more than one-third the performance of Iris Xe.
  • KaarlisK - Tuesday, March 30, 2021 - link

    Also, notice that the i5 11400 has UHD Graphics 730, which has fewer EUs (24, not 32). So with the cheapest i5 (10400->11400) there may actually be a regression in iGPU performance.
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    Sounds like even on a process as advanced as 14nm+++++++++++++++++++++++++++++++++++++, yields aren't exactly spectacular for this backport.
  • tipoo - Tuesday, March 30, 2021 - link

    Well density definitely isn't.
  • III-V - Tuesday, March 30, 2021 - link

    Why in the world would you come to that conclusion?
  • firewolfsm - Wednesday, March 31, 2021 - link

    Because Intel generally hasn't had to cut the IGP for i5 models in the past. The cut indicates they're producing chips with bad EUs.
  • KaarlisK - Wednesday, March 31, 2021 - link

    In the past, they could offload half-functioning GPUs to Pentiums and Celerons. There are no Rocket Lake i3s even...
  • Alistair - Tuesday, March 30, 2021 - link

    I was bored, so I went and bought the i5-11500 just to test Intel Xe haha. I'll post benchmarks later.
  • Alistair - Tuesday, March 30, 2021 - link

    Ok it gets ~40 fps in Overwatch at 1080p, and ~100fps at 50 percent of 1080p (scaling at higher resolutions is bad with DDR memory). Ouch. Not great. Usable, but not great. This is with very fast memory. DDR4 3600 C16.

    Now I'm going to try Runeterra.
  • macakr - Tuesday, March 30, 2021 - link

    really? that bad? I can get that on a 15w Ryzen 4700u!
  • Slash3 - Tuesday, March 30, 2021 - link

    The 4700U mobile APU has a much stronger iGPU core than that of Rocket Lake.
  • Alistair - Wednesday, March 31, 2021 - link

    Yeah, it is that bad. Generally, if you keep the resolution at 900p or 720p (or 50 percent scaling of 1080p, which is ~768p) the performance is OK. But it falls off dramatically at 1080p. No linear scaling here. Basically it is MUCH worse than the laptop parts. I have DDR4-3600 C16 so was expecting better. Oh well.

    Runeterra was barely playable at 1440p, just a basic card game, but the FPS shoots up dramatically at 1080p or lower, so that's fine. Would be nice to play Hearthstone and Runeterra with integrated graphics one day...
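
    A quick sanity check on that "~768p" figure (a sketch; the assumption that the game's render scale applies to total pixel count rather than to each axis is mine):

```python
import math

# If "50 percent scaling" halves the total pixel count rather than each
# axis, each axis is scaled by sqrt(0.5). (Assumption for illustration.)
native_height = 1080
effective_height = native_height * math.sqrt(0.5)  # ~763.7
print(round(effective_height))  # ~764, close to the "~768p" quoted above
```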
  • Tom Sunday - Thursday, April 8, 2021 - link

    I am getting on in years and would like to finally replace my 13-year-old Dell XPS 730x. It's time, after being forced to replace (3 times) PSUs, motherboards, AIOs, GPUs and RAM. The new Intel i5 11600K holds interest. Will the 'integrated graphics' be good enough for just browsing the net and watching old western or war movies on YouTube, with no gaming at all? How good is the iGPU in this regard? Once I have more money I can hopefully buy a used discrete GPU 'over the table' next year at the local computer show. Will probably have my new system cobbled together by one of the Bangladesh boys at the local strip-center PC shop. So it will be good to sound somewhat intelligent discussing the hardware and not be pushed into whatever is cheap and in stock that day. Thoughts?
  • Spunjji - Friday, April 9, 2021 - link

    The iGPU on Rocket Lake will be fine for those purposes. However, so would the iGPU on the cheaper Comet Lake processors out there - they may be a better (cheap) option if you're going to buy now and upgrade later.

    Another option would be to go for a system based around the AMD Ryzen 5 3600 and re-use an existing GPU, which would also give you the option to upgrade the CPU again to something like a 5800X or even 5900X later. Personally, I'd go with that approach.
  • 0ldman79 - Friday, April 16, 2021 - link

    The integrated GPU is fine for movies and web.

    I've got a Skylake laptop with a GTX 960M; it uses the iGPU until I fire up a game.

    H.264 and H.265 playback is accelerated through the iGPU; it barely draws any power at all for video playback. The screen draws all the power. It'll play back 1080p60 H.264 or H.265 all day long at under 2W. There are no issues using it for the web or anything else on integrated, and it'll even play some games at lower settings, roughly 1/4 of a 750 Ti (960M) in gaming, though the newer chips will be slightly better.
  • Alexvrb - Tuesday, March 30, 2021 - link

    Vega 11 is actually a bit slower than the latest 8 CU Vega found in Renoir/Cezanne. Not enough to catch up to Iris Xe, I don't think... but impressive given the smaller GPU and same power (or better). That's still GCN, too. If they release an APU with a ~10 CU RDNA2 GPU, it should give them a substantial boost... as long as bandwidth doesn't cripple it. Next gen memory should help, but they might also integrate a chunk of Infinity Cache. It has proven effective on larger RDNA2 siblings, giving them good performance with a relatively narrow memory bus.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    Good ole iGPU distraction.

    How about the most important stuff? How about having it appear on the first page?

    • performance per watt

    • performance per decibel

    Apples-to-apples comparison, which means the same CPU cooler in the same case for Intel and AMD.

    That is important, not this obsession over a pointless sort-of GPU.
  • Jezwinni - Saturday, April 3, 2021 - link

    I agree the iGPU is a distraction, but disagree on what you declare the important things to be.

    Personally the performance for the price is the important thing.

    Any extra power draw isn't going to blow up my PSU, make my electricity bills unmanageable, or save the world.

    Why do you consider performance per watt the most important?
  • 0ldman79 - Friday, April 16, 2021 - link

    Performance per watt on iGPU only matters in mobile devices, even then it's barely measurable.

    The iGPU is only going to pull 10W max, normally they peak around half that.
  • robbro9 - Tuesday, March 30, 2021 - link

    Has anyone seen iGPU tests? Tom's did not test them either, apparently. Given the challenges in locating add-in GPUs, integrated graphics should be of high interest to many. I know I just put together a 3400G system, just cause it's about the best you can get graphics-wise without paying scalper pricing. Was curious if these were as good or better?
  • Lookslikeamhere - Tuesday, March 30, 2021 - link

    Phoronix has some
  • ilt24 - Tuesday, March 30, 2021 - link

    Hexus has some...https://hexus.net/tech/reviews/cpu/147440-intel-co...
  • robbro9 - Tuesday, March 30, 2021 - link

    Thanks, those are kinda disappointing. The 3400G I put together does roughly 13K in Night Raid and 1.4K in Time Spy, while the new UHD 750 does 9.5K and 0.7K respectively. I figured it would be closer. Guess it's still king of the hill for desktop integrated... which is kinda sad. I wish AMD would up their integrated game, or Tiger Lake was available for desktop...
  • Slash3 - Tuesday, March 30, 2021 - link

    Tiger Lake is 96EU, RKL-S is only 36 or 24EU. It was always going to be a small bump over Comet Lake.
  • antonkochubey - Tuesday, March 30, 2021 - link

    RKL is 32EU. Exactly a third of Tiger Lake.
  • Slash3 - Tuesday, March 30, 2021 - link

    Whoops, yes. Typo.
    32EU on the i5-11500 and above, 24EU on the i5-11400 parts.
  • Pmaciel - Tuesday, March 30, 2021 - link

    "The Core i9-11900K in our test peaks up to 296 W, showing temperatures of 104ºC"

    "The cooler we’re using on this test is arguably the best air cooling on the market – a 1.8 kilogram full copper ThermalRight Ultra Extreme, paired with a 170 CFM high static pressure fan from Silverstone."

    Not even the much-derided AMD FX-9590 got this far
  • blppt - Tuesday, March 30, 2021 - link

    To be fair, the 9590 was such a POS that it was a blast furnace AND wasn't really competitive in real life usage.

    At least this cpu is competitive, performance wise. Everything else is laughable---or would be if AMD wasn't having a nightmare keeping their 59xx series in stock.
  • TheinsanegamerN - Tuesday, March 30, 2021 - link

    Credit where it’s due, bulldozer was easier to cool
  • blppt - Tuesday, March 30, 2021 - link

    I disagree. I had a 9590 (which shipped WITH a small AIO cooler!) and the thing was shaky at best for stability, easily topping 90c at stock settings.

    Not the mobo fault either, I had the top end ASUS CHVF-Z 990FX, which was such a mature chipset it practically had grey hairs.
  • TheinsanegamerN - Wednesday, March 31, 2021 - link

    The 9000 series all had stability issues. Backing off 1 clock bin or tinkering with voltage would usually fix them.

    Bulldozer didn't have the thermal density issues modern CPUs have. If you had the cooling, it would work. Bulldozer's issue was that the sheer amount of heat being generated would overwhelm many CPU coolers of the time, which were built around the more traditional ~100W power draw of Intel i7s and the ~125-140W of Phenoms. The 200W+ that Bulldozer was pulling was new territory.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    Certain motherboard makers played loose with the VRMs. AsRock in particular was known for its 9000-series-certified boards frying. MSI was also bad. Only a few boards were suited to the 9000 series and any enthusiast would have skipped the 9000 series in favor of one of the lower-leakage chips, which could be overclocked to the same 4.7 GHz. 5 GHz with Piledriver was not stable, requiring too much voltage. ASUS tried to hide that by under-reporting the voltage used in its flagship board. 4.4 GHz was optimal, 4.5 was okay, and 4.7 was as far as one wanted to go for frequent use. That's with the lower-leakage 'E' parts.

    "The Stilt" said AMD would have sent the 9000 series to the crusher had it not come up with an after-the-fact lower standard for leakage. So, Hruska gets his take spectacularly wrong in his Rocket Lake article. The 9000 series was not aimed at 'the enthusiast faithful'. Those people knew better than to buy a 9000 series chip, even though there were a few astroturfers trying to get people to buy them — like one guy who claimed his was running at 5.1 GHz 24/7.

    It was aimed at people who could be tricked by the 5 GHz number. It was the most cynical cash grab possible. Not only did AMD offer only 4 FPU cores (important for gaming), it offered a CPU that was priced into the stratosphere while having un-fixable single-core performance.

    Piledriver's fatal flaw was its abysmal single-thread performance, not its power consumption. It could have been okay enough with the lower-leakage standard (and a more strict socket standard as Zen 1 had). But, reportedly, the 32nm SOI wasn't very good for some time (Bulldozer and the first generation of Piledriver), so AMD let the AM3+ spec be pretty loose (although not as loose as FM).

    Overclocking Piledriver even to 5 GHz wasn't enough to give it decent single-thread performance.

    I do have to agree that the 9590 was the single worst consumer CPU product ever released. It even edges out the Pentium III that wasn't stable — since that one was actually pulled from the market. Not only was the 9590 100% cynical exploitation of consumer ignorance, it was really bad technologically. Figures that Hruska would praise it.

    (If, though, one lived in Iceland with a solar array backed by an iron-nickel battery complex, the 9590 would have been okay for playing Deserts of Kharak, provided one didn't buy it at its original price.)
  • blppt - Thursday, April 1, 2021 - link

    "Those people knew better than to buy a 9000 series chip, even though there were a few astroturfers trying to get people to buy them — like one guy who claimed his was running at 5.1 GHz 24/7."

    What is especially sad here is that even IF he managed to pump the 250-300W into that 9590 to run at 5.1 (all cores), it was probably still slower than a 4790K at stock speeds.
  • Oxford Guy - Saturday, April 3, 2021 - link

    In single core, certainly. However, 2011 is stamped onto the spreaders of Piledriver and it hit the market in 2012. The 4790K hit the market in Q2 2014.

    In 2014, the only FX to consider was the 8320E. Not only was it cheap (at least at MicroCenter), it could run in any AM3+ board without killing it — and could be overclocked better than a 9000 series with anything below nitrogen, due to its much superior leakage.

    The 8320E was the only FX worth anyone’s time. Paired with a UD3P board it could do 4.4 GHz readily and could manage 4.7 with a fast fan angled at the VRM sink. Total cost was very low for the CPU and board from MicroCenter, which is why I recommended that setup to the tightest budget people. But, the bad single core was a problem for frametime consistency.

    AMD should have been publicly tarred and feathered by the tech press for the 9590. All the light mockery wasn’t enough.
  • Spunjji - Friday, April 9, 2021 - link

    Broadly agreed, but I'd note that the 6300 was also reasonable if you were on a painfully low budget. I suggested it to a friend (his alternative was a Sandy Bridge i3) and it lasted him until a year back as his main gaming system. It's now moved on to another friend, who still uses it for games. Those chips have aged surprisingly well, all things considered, though it is probably holding his RX 470 back a little bit.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    • The 9590 posted the highest results in the game Deserts of Kharak, in a dual 980 Ti setup at only 1080 or 1440. And, SLI setups showed competitive 4K scores for many games back then.

    • The overclocker 'The Stilt' said the 9000 series is not the chip to judge the design by because it has the worst leakage characteristics and would have been sent to the crusher had AMD not decided to create a lower standard after the fact. Instead, the chips that should be used to represent Piledriver are the 'E' series. They have the lowest leakage and can manage the same 4.7 GHz the 9590 uses with much more reasonable (although still non-competitive) demands. The 9000 series was really AMD's gift to Intel, by making the bad ancient Piledriver design look much worse.

    • AMD was a small cash-strapped company, thanks to Intel's monopoly abuses. When AMD was leading the x86 industry Intel kept it from getting the profit. So, Piledriver, although very bad in a number of ways, will never be as bad as Rocket Lake. The 9000 series is the only exception, though, since it was a purely cynical cash grab by AMD, using '5 GHz' to sucker people.
  • blppt - Thursday, April 1, 2021 - link

    "The 9590 posted the highest results in the game Deserts of Kharak, in a dual 980 Ti setup at only 1080 or 1440. And, SLI setups showed competitive 4K scores for many games back then."

    As I stated, in the (exceedingly rare) case where a game or app can saturate all 8 cores, when the 9590 was in its prime, it could be competitive.

    That almost never happened, especially in games. About the only 2 I can think of offhand that could do that in the 9590's prime were GTA5 and Company of Heroes 2. And even then, you were using 150+ more watts to get the same or slightly better performance than Intel's high-end quad cores. Along with the required AIO water cooling and required high-end mobo with a beastly VRM setup. As far as I know, only 3 pricey mobos were approved for the 9590: my CHVF-Z, one Gigabyte board, and an ASRock.

    9590 was one of the worst cpus ever. Probably the single worst (special edition) cpu. I had one for years.

    This Rocket Lake, while disappointing, hot, and power-hungry, is consistently competitive in every game versus its direct competitors. The 9590 cannot come close to saying that.
  • Oxford Guy - Saturday, April 3, 2021 - link

    I cite Deserts of Kharak because it's the only game I've seen put the FX ahead of Intel at below 4K.

    Not only would the game need to be able to leverage 8 integer cores without needing more than 4 FPU cores, it would have to be able to saturate a narrow deep pipeline and not rely heavily on single thread IPC. It should also scale with clock and not need the best RAM and L3 performance. RTS is probably the best genre for the Piledriver design.
  • Gondalf - Tuesday, March 30, 2021 - link

    The AMD FX-9590 didn't have AVX-512. Very high performance has a cost.
    Try to imagine Zen 3 with AVX-512; it could not be a champion in low power consumption at all.

    If you do not like high power draw, simply disable AVX-512.
  • schujj07 - Tuesday, March 30, 2021 - link

    It would have the exact same power draw under AVX-512 as AVX2. The ~142W draw is the socket maximum. The only way to increase power draw to the CPU socket is to change sockets.
  • maroon1 - Tuesday, March 30, 2021 - link

    The only way to get the same power draw with AVX-512 is to lower clock speed a lot, which affects performance
  • schujj07 - Tuesday, March 30, 2021 - link

    That doesn't change the fact that Ryzen is socket limited for power draw. While lowering clocks affects performance, AVX512 could still be faster at same power draw on Ryzen.
  • whatthe123 - Wednesday, March 31, 2021 - link

    Zen 3 isn't socket limited. All you have to do is enable PBO and you can manually set the package limit to whatever you want. I can set my 5900x power limit to whatever I want, though the boost gains aren't worth the extra heat.
  • Qasar - Wednesday, March 31, 2021 - link

    um yes it is, 142 watts is as much as it can use : " Notably, AMD's decision to stick with the AM4 socket still constrains its maximum power consumption to 142W, which means that it could not increase power consumption for the new flagship models. "
    from here : https://www.tomshardware.com/reviews/amd-ryzen-5-5...
  • TheinsanegamerN - Wednesday, March 31, 2021 - link

    Hrm um yeah, no, you're wrong.

    Gamers nexus measured over 190 watts on a 2700x, which is socket AM4:

    https://www.gamersnexus.net/hwreviews/3287-amd-r7-...
  • 29a - Wednesday, March 31, 2021 - link

    That's overclocked; non-overclocked wattage is 142W. Nice try.
  • SaturnusDK - Wednesday, March 31, 2021 - link

    AM4 and whatever Intel calls the current iteration of the 1150/1151/1200 socket have the exact same technical power limit. Well, almost. It's 142W vs 144W. Usually written as 125W (+15%).
    You can safely draw double that wattage through the socket on both platforms, though. The interesting thing is that 11th gen apparently throws all sense and caution to the wind in an attempt to stay competitive, to the point that they're willing to accept an obscene RMA percentage on the sales.
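
    For reference, the arithmetic behind those two quoted limits (a quick sketch; the 1.35x PPT multiplier is AMD's usual figure for 105 W TDP parts, and the Intel number simply follows the "+15%" stated above):

```python
# Checking the quoted socket limits (assumptions: AMD PPT = TDP x 1.35 for
# 105 W parts; the Intel figure follows the "125W (+15%)" stated above).
amd_ppt = 105 * 1.35    # 141.75 -> the commonly quoted 142 W AM4 limit
intel_pl = 125 * 1.15   # 143.75 -> rounds to the 144 W figure above
print(amd_ppt, intel_pl)
```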
  • whatthe123 - Wednesday, March 31, 2021 - link

    Tom's literally contradicts itself in that article by running the 5900X with PBO at 172 watts. The socket is not the limit; the BIOS-imposed PPT is the limit.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    What cooler was used? I bet it was stronger than the Noctua used here for AMD.
  • Hulk - Tuesday, March 30, 2021 - link

    I have also been looking for iGPU tests.

    It's strange. It's like it doesn't exist.
  • eastcoast_pete - Tuesday, March 30, 2021 - link

    Predictable results. I don't believe Intel back-ported to 14 nm because their 10 nm couldn't reach high frequencies; they back-ported because their yields at 10 nm aren't high enough and they had manufacturing capacity available at 14 nm. That made the expense of back-porting the design worthwhile.
    Regarding Rocket Lake, the most interesting CPUs in this lineup are the non-K i5s, especially the ones that still have the 32 EUs enabled. Any chance you (Ian) can put one or two 11500s or 11600s through their paces? I would really like to also see how "35 W" the T models are. One of those, plus a decent, WiFi-enabled B560 mobo for $100-130, could serve HTPC and office duties.
  • Otritus - Tuesday, March 30, 2021 - link

    Rocket Lake is an 8-core CPU based on the Cypress (Sunny) Cove microarchitecture. Tiger Lake H is an 8-core CPU based on the Willow (Sunny+) Cove microarchitecture. Both have 32 Xe EUs. 10SF and 10ESF yield well (Intel is shipping much larger server processors just fine). The problem is 10SF seems to max out around 5GHz, which is the upper bound of the 11700K. The slight clock bump of the 11900K lets Intel claim the fastest gaming CPU, which would not have been possible on Tiger Lake H. 14nm having excess capacity was simply the cherry on top.
  • goatfajitas - Tuesday, March 30, 2021 - link

    I don't think anything Intel has done here can be called a "cherry on top"; if anything, we will look back on this as a hot mess (pun intended). :P
  • AntonErtl - Wednesday, March 31, 2021 - link

    Intel shipping larger chips in their current 10nm processes does not disprove yield problems. If there is a flaw in a core on a 40-core die, just disable that core (and another one) and sell it as a 38-core CPU. If there are flaws in three cores, sell it as a 36-core CPU, etc. Of course that's also possible and done for Tiger Lake, but there are also parts of the CPU where you have no such redundancy; the area for those parts is not necessarily larger on the bigger dies, and the huge price of the big dies may make harvesting economically more viable there than on the desktop.

    What makes me believe that either 10nm yield or 10nm capacity is not so great (or capacity is not great because yield is not great) is that the announced Xeon W-13xx CPUs are going to be Rocket Lake, not Tiger Lake. At 80W TDP, I expect that Tiger Lake would outperform Rocket Lake for most multi-threaded and (thanks to the larger cache) some single-threaded workloads, yet they give us Rocket Lake.
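
    The harvesting argument can be made concrete with a textbook Poisson yield model; a sketch with made-up numbers (the die area and defect density below are illustrative assumptions, not Intel figures):

```python
import math

# Toy Poisson defect-yield model for core harvesting: a die is sellable if
# at most `tolerable` defects hit it, each assumed to disable one core.
def sellable_fraction(area_cm2, defects_per_cm2, tolerable):
    lam = area_cm2 * defects_per_cm2  # expected defects per die
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(tolerable + 1))

# Hypothetical 6 cm^2 40-core die at 0.5 defects/cm^2:
print(sellable_fraction(6.0, 0.5, 0))  # ~0.05: perfect 40-core dies are rare
print(sellable_fraction(6.0, 0.5, 4))  # ~0.82: harvested SKUs salvage most
```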
  • Spunjji - Friday, April 9, 2021 - link

    Intel arriving nearly 3 years late with Ice Lake SP and only managing "over 200,000" in the first 3 months isn't "just fine", it's pretty indicative that they're still struggling.

    We have no indication of ESF yields yet as there are no ESF products shipping yet.

    Rocket Lake was ported when Intel couldn't get clock speeds *or* yields out of their 10nm process. If their yields and capacity for 10SF were as good as you're implying, we wouldn't still be waiting for Tiger Lake H to actually hit the market so long after Tiger Lake launched.
  • YazX_ - Tuesday, March 30, 2021 - link

    4 years ago, AMD was broke, fighting for survival, targeting the poor, and Intel was the top dog spitting the same s.hit for 10 years. Fast forward to now: AMD is the top dog and Intel cannot even catch up.

    Regarding the review: yah, as usual a s.hit Intel CPU that draws a lot of power and is still priced higher than AMD while offering less.
  • haukionkannel - Tuesday, March 30, 2021 - link

    And Intel will be selling these in much, much greater numbers than AMD can sell their own...
    Intel just needs to exist to beat AMD in market share...
  • Qasar - Tuesday, March 30, 2021 - link

    haukionkannel yep, only because it's on the shelf. if/when ryzen 5000 supply gets better, expect that to change. once that happens intel won't be selling that well. no one i know is even looking at intel right now, all waiting for zen 3.
  • SkyBill40 - Tuesday, March 30, 2021 - link

    You mean Zen 4?
  • Qasar - Tuesday, March 30, 2021 - link

    nope, zen 3. they are waiting for the ryzen 5000 series to be in stock, and once it is, they will upgrade. some were looking at a 5600X or 5800X, but now might move up a tier vs what they would have picked up if they were in stock from day one.
  • Tomatotech - Tuesday, March 30, 2021 - link

    Is there a problem with your keyboard? It doesn’t seem able to type the word ‘shit’ properly. Seems a common problem with American keyboards.
  • Holliday75 - Tuesday, March 30, 2021 - link

    I have an American keyboard.

    Shit.

    Works for me.
  • ImSteevin - Tuesday, March 30, 2021 - link

    AMD got used to fighting hard with reduced resources and Intel got used to being comfy at the top. Bought AMD at $13, always believed in the real MVP.
  • SaturnusDK - Wednesday, March 31, 2021 - link

    I bought AMD shares when they hit the $2 mark. I usually do that when any tech stock hits $2 and I have some spare cash, and then keep it for a minimum of a year. In 2016 it was AMD. Last year it was Kodak.
  • JayNor - Tuesday, March 30, 2021 - link

    "AMD is the top dog and intel cannot even catch up...."
    Intel is already sampling 16 core 24 thread Alder Lake chips ... pcie5, ddr5, new cores. They showed a desktop running it at CES. When will AMD catch up with these features?
  • SkyBill40 - Tuesday, March 30, 2021 - link

    Intel couldn't get 10nm on desktop, yet we're supposed to believe that they're suddenly going to pull a magic rabbit out of a hat tomorrow? Hardly. By the time Intel gets around to having a worthwhile process on something other than 14nm, AMD will be on 5nm. They're almost there as is.

    16/24? Why bother with that when AMD has 16/32 NOW? While it may not have PCI-E5 or DDR5, it doesn't need it but will likely have it soon enough. AMD catch up? Come on, man. AMD is in FRONT and has been for a while now. It's all about Intel getting it together and trying to close the gap they themselves created due to complacency, mismanagement, and underestimating their opponent.
  • Qasar - Tuesday, March 30, 2021 - link

    jaynor, i think that will be Zen 4. for the most part, it's intel that has caught up to AMD with its features. and um, if you haven't noticed, even with a release bios, microcode etc, it looks like rocket lake is still the dud AT showed it was turning out to be a couple of weeks ago.
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    > Intel is already sampling 16 core 24 thread

    LOL. 16 cores and 32 threads of full fat Zen cores is ALWAYS better than 8 cores and 16 threads of full fat Core cores and 8 cores and 8 threads of garbage tier Atom cores.
  • flgt - Tuesday, March 30, 2021 - link

    Yeah, that seems like a mobile first design which will be “good enough” for corporate desktops. Seems like Intel is giving up on the desktop enthusiast market.
  • 1_rick - Tuesday, March 30, 2021 - link

    Half of those 16 cores are Atoms.
  • shabby - Wednesday, March 31, 2021 - link

    Atom on desktop... whoever thought of that should be fired.
  • GeoffreyA - Wednesday, March 31, 2021 - link

    In its original incarnation, Atom was utter rubbish, but the microarchitecture has improved a lot since then (Bonnell > Goldmont > Goldmont Plus > Tremont > Gracemont). I've got a funny feeling that this design, taken further, could become their main one in the future. Similar to the Pentium M becoming Core.
  • mitox0815 - Tuesday, April 13, 2021 - link

    The Pentium M had a major IPC advantage to begin with - it was a full-fat core based off the P6, after all. The Atom derivatives don't have that; they were compromised designs from the get-go.
  • Spunjji - Friday, April 9, 2021 - link

    When will AMD catch up with an unreleased product? Some time after it's released and it makes sense to catch up, presumably... 🤡
  • mitox0815 - Tuesday, April 13, 2021 - link

    8 of those are Atom cores...gahd dingit Intel, give us 16 full-sized cores on mainstream! Spare me the cop outs. Granted, finding a way for that to NOT draw 400W+ on its own first would be nice...
  • Oxford Guy - Wednesday, March 31, 2021 - link

    And consumers always pay the price for quasi-monopolization.

    We get overpriced quads from Intel for forever.

    Then, we get overpriced 5000 series from AMD.

    rinse, repeat

    Having adequate competition is supposed to fix the problems of capitalism. Monopolization is not supposed to occur. But, when it does... it concentrates wealth rapidly in the hands of a few. Everyone else gets to pay much more for far less. They have the 'choice' of that or nothing.
  • Qasar - Wednesday, March 31, 2021 - link

    "Then, we get overpriced 5000 series from AMD." FYI, the prices are the 5000 series are partly do to the current situation, and demand. cant really blame AMD for stores setting the prices they charge.

    you seem to be one one angry person oxford guy....
  • Oxford Guy - Thursday, April 1, 2021 - link

    Ok ELIZA. : )
  • Spunjji - Friday, April 9, 2021 - link

    Intel seem to think AMD's prices are fair 😬
  • vanish1 - Tuesday, March 30, 2021 - link

    The best thing about Intel CPUs is you don't need a dGPU to make them work, nor do you have to limit your selection to APUs from AMD.

    If you're building a computer right now and you don't already have a GPU, then there is zero value in any AMD CPU currently.
  • BushLin - Tuesday, March 30, 2021 - link

    So it's the only option for someone who builds their own gaming PCs but doesn't have a GPU from the last ~6 years to offer better than crappy dGPU performance...
  • vanish1 - Tuesday, March 30, 2021 - link

    ^No idea what you're trying to say
  • 29a - Wednesday, March 31, 2021 - link

    He's trying to say if you've got a 6 year old dGPU it will outperform Intel's iGPU.
  • BushLin - Wednesday, March 31, 2021 - link

    Thanks for that, yes, well deduced. Can't fix typos, substitute dGPU for iGPU and hopefully makes sense.
    Also, I'd rather pay over the odds for an old, bottom tier RX 550 than game on a poor iGPU and that's from someone who only buys Nvidia.
  • vanish1 - Thursday, April 1, 2021 - link

    Right, that makes sense (sarcasm); so you want to buy into and support the currently overpriced GPU market with the purchase of future E-waste, game on said old GPU for a year, then spend even more money on another GPU. The money wasted is better spent on other parts of your PC, or saved for when the time comes to buy the desired GPU.
  • BushLin - Thursday, April 1, 2021 - link

    There are way better options than an RX 550, like a GTX 960 or whatever you can scavenge; I used an extreme example to demonstrate just how bad the iGPU option you're championing is if you have any intention of gaming.
    Only talked about performance (or lack of) so far; there's also the issue of drivers: AMD GPU drivers are bad enough for me to pay a premium for Nvidia, but Intel's iGPU drivers are even worse for gaming. Intel drivers are usually well developed, but the iGPU drivers are an afterthought beyond basic functionality. Perhaps this will change on future products.
  • vanish1 - Friday, April 2, 2021 - link

    You continue to miss the point. PC gaming has become an expensive hobby reserved for those willing to pay the premium for a dGPU or with skin in the game already (the latter are a different group than my original point; this is about PEOPLE BUILDING A NEW PC RIGHT NOW). Smart money puts that cash elsewhere (a better CPU, mobo, case, etc.), picks up another hobby in the meantime, then invests in a GPU when the appropriate time comes. Dumb money wastes cash and time on old E-waste GPUs because it has nothing better to do. 11600k + $100-$200 extra dollars to play with, yes please.
  • Qasar - Friday, April 2, 2021 - link

    " PC gaming has become an expensive hobby only saved for those willing to pay the premium for a dGPU " if that is your view, then im sure sony or microsoft have a product that fits your price point.
    " Smart money puts that elsewhere, on a better CPU, mobo, case, etc., picks up another hobby in the meantime, then invests in a GPU when the appropriate time comes. " a smarter person would just save ALL of their money, and buy this comp, when all of the prices drop back down to normal levels after all the demand from what is currently going on in the world, settles down. and probably save more then the point you are trying to make. to buy a whole comp, or even parts of one, is not where the " smart money " is, as the whole industry is seeing inflated prices cause of whats going on in the world with covid19.
    i would love to see you play a modern recent game, on that OH so powerful iGPU, on anything greater then 720p with the graphics option set to any thing other then mid range or lower
    " 11600k + $100-$200 extra dollars to play with, yes please. " more like 5600X + practically any vid card thats $100 or less, and being able to play games at more then 720P at medium or less, graphics settings :-)
  • vanish1 - Friday, April 2, 2021 - link

    Are you seriously an idiot? I'm asking in all honesty. Because you keep moving the goal posts to keep fluffing your argument that holds no water.

    Consoles? We're talking about PCs, nice try though. Let me know when I can shove an Xbox into a mobo as a permanent GPU; otherwise you're just wasting more money kicking the can down the road.

    You also do realize that GPUs are the only part of the industry that's overpriced? You can literally buy everything else at MSRP or less, and things like RAM and SSDs are going to be cheaper NOW than a year later.

    I also never said use the iGPU to game, because gaming on an iGPU, basic dGPU, or APU will be a crappy experience on modern titles.

    But see, this is where you continue to miss the point: all I said is if you want to BUILD A PC. I never once mentioned GAME ON A PC or ALREADY OWN A GPU. You gloss over this every time because it doesn't fit the incorrect narrative you're trying to portray.
  • Qasar - Friday, April 2, 2021 - link

    no but i am sure you are, as YOU are the one that keeps moving the goal posts, not me. YOU said PC gaming was an expensive hobby, so i suggested a console, so you can save money vs a comp, as this seems to be your WHOLE POINT, to save money.

    " I also never said use the iGPU to game, because gaming on a iGPU, basic dGPU, or APU will be a crappy experience on modern titles. " no, but you INSINUATED that you did, so who is the idiot ? and to go buy a 11600k and NOT use it for gaming, as YOU IMPLIED (cause if you are not going to game with it and need the cores, the 5600X is clearly the better choice, as its multi threaded performance is above the 11600k), is, well, whats that word you keep crying about, oh yea, E waste. if that is the case and you dont intend on gaming, then getting a MUCH cheaper cpu with your beloved igp would be a better option.
    as i said in my other post, you are now resorting to name calling, which further shows you are wrong, and your whole point has been proved wrong by giving other options. so, run along little child, and when you can talk without resorting to name calling, then come back
  • vanish1 - Monday, April 5, 2021 - link

    please stop, you keep being wrong.

    why would anyone buy a console if they intend to build a PC or PC game? Do you understand what saving money means? It means not spending it.

    Once again, I never said gaming on a PC, I said build a PC. You keep assuming incorrectly. As such, see original post.
  • 1_rick - Tuesday, March 30, 2021 - link

    Ridiculous. Bottom-tier dGPUs are $50-60, even on Newegg. Sure, they're worthless for gaming, but they'll be fine for office work and basic web browsing.
  • vanish1 - Wednesday, March 31, 2021 - link

    Okay, so spend $60 on overpriced E-waste that you will have to eventually replace anyway, when that money could have been put into a higher tier CPU, saved towards your actual GPU, or spent on other parts of the PC build.

    Who wants to spend $60 on a GPU just to make their CPU work? It's ridiculous.
  • Qasar - Wednesday, March 31, 2021 - link

    who says you have to throw it out ? you COULD keep it for emergencies, put it in another comp, or, um i dunno, sell/give it to a friend who could use a vid card for whatever reason.

    you say intel is the only option/best option, but you obviously havent considered anything else.
  • vanish1 - Wednesday, March 31, 2021 - link

    The fanboys that exist here crack me up. Constant complaining about the GPU market, overpriced and out of stock, yet you're willing to add fuel to that fire just to have an AMD CPU grace your presence; the hypocrisy is outstanding. I never said throw it out, it just ends up being E-waste in the end, but your mindset is the issue with the disposable culture we live in. Beyond that, I don't want to go through the hassle of buying and selling multiple cards; I'll buy one when it's time, plug it into my system, and be done. Put it into another computer? So build another computer on top of the one you're already building; not a lot of sense there. Give it to a friend? Why would you waste your friend's time with a GT 710? Sounds more like trying to pass the buck.
  • 29a - Wednesday, March 31, 2021 - link

    Did you really just call other people fanboys?
  • Qasar - Wednesday, March 31, 2021 - link

    that's what i thought, looks like there is a new intel fanboy on here :-) maybe he is upset cause rocket lake is, well, pathetic ( going by GN's review )
  • vanish1 - Thursday, April 1, 2021 - link

    I mean, when people like yourself and 29a can't comment on the point I'm making and instead try to dunk on me for calling out Intel shills when I see them, it clearly shows who is right (me) and who is wrong (both of you).
  • BushLin - Thursday, April 1, 2021 - link

    Your argument is to PC gaming enthusiasts that they should enjoy the performance they had in their gaming rig over a decade ago but on modern titles because there is a GPU shortage. If you truly cared about ewaste, why not just continue using your old rig rather than buy a dead end motherboard to have a worse experience?
  • Qasar - Thursday, April 1, 2021 - link

    " calling out Intel shills when I see them " ahh so you are calling out your self then ?
    the point you are making, is more like your own OPINION then any thing else. while YOU may not see the point in getting a cheap vid card so you can at least use the over all better cpu ( zen 3) then this dud, others may be fine with it. a few people i work with, are currently waiting for zen 3 to be in stock, and will upgrade to it, they have seen the reviews of rocket lake, and have no issues waiting, cause they know they will still be getting the better cpu.
    " then there is zero value in any AMD CPU currently. " again YOUR opinion.
  • vanish1 - Friday, April 2, 2021 - link

    You're cherry-picking words from my overall statement; nice try, Intel shill. IF YOU ARE BUILDING A NEW PC RIGHT NOW THERE IS ZERO VALUE IN AMD CPUS. Some people actually care about spending their money frugally, not having to spend hundreds of dollars on a GPU they don't want just to make their CPU work.

    Beyond that, if E-peen matters so much to you and money is no object, then why waste your cash in the first place on an E-waste GPU? Isn't it even more baller to buy a GPU when it's most expensive? If you care so much about having the BEST cpu that does the BEST in every benchmark, how dare you grace its presence with such a lowly dGPU as an RX 550 or GTX 970? Hmmm, conflict of interest; picks and chooses what part of the GPU crisis to support instead of skipping it completely.

    Sorry, you two, some of us live in the real world where money matters more than E-peen and video gaming addictions.
  • Qasar - Friday, April 2, 2021 - link

    and you are trying to make an argument that so far only YOU seem to be making, on here and on other sites. YOU are the intel shill here, not me, nice try. if i was an intel shill, would i be calling rocket lake a dud, or Zen 3 the better cpu ? " IF YOU ARE BUILDING A NEW PC RIGHT NOW THERE IS ZERO VALUE IN AMD CPUS " and it's YOUR opinion, plain and simple. while YOU see no value in them, i'd guess others see A LOT more value in them than in rocket lake.
    " not having to spend hundreds of dollars on a GPU they dont want just to make their CPU work. " ahh, so now 50 bucks is hundreds of dollars ? and it's doubtful someone buying the top, let's say, 2 rocket lake cpus is going to then use the IGP to play games that are recent. come on, get real.
    with the way you keep crying about a 50 buck gpu being ewaste, you must have a comp that is quite old, so you can flex your own epeen and not contribute e waste yourself.

    the fact that you are now resorting to insults and name calling proves nothing more than the fact you are probably just a child. have a good day, and i hope your old comp doesnt die on you, so you dont have to contribute to e waste yourself.
  • BushLin - Friday, April 2, 2021 - link

    vanish1: OMG ewaste! OMG be frugal with money...
    Also vanish1: advocates buying a 200W CPU on a 6-year-old 14nm process and a new motherboard which won't support future 10nm/7nm CPUs, delivering about half the performance per watt of current AMD CPUs and needing custom water cooling or datacentre-loud fans just to do that.
    You're either an Intel shareholder, somehow trying to justify your bad purchase to yourself, or just... you know, dumb.
  • vanish1 - Friday, April 2, 2021 - link

    @qasar go home bud, you lost the argument days ago.

    @bushlin ahhh you, the shill, continue to spread FUD. You know nanometer size isn't a 1:1 standard across the industry, right? https://www.youtube.com/watch?v=ROS008Av4E4 Linus is asking you to sit down now please.

    Beyond that, most people actually use their computer for longer than a product cycle and couldn't care less what cpu upgrade path exists. If you genuinely think 5 years from now a 6 or 8 core processor will be out of date, then you seriously need to get your head checked. PC industry standards move independently of each category. You think SK Hynix cares about Intel and AMD's CPU upgrade paths when they're ready to push DDR5 onto the world? (the answer is no).

    11600k + mobo + money in my pocket = winning
    5600x + mobo - overpriced dGPU to make it work = losing
  • BushLin - Friday, April 2, 2021 - link

    " You know nanometer size isnt a 1:1 standard across the industry right?"
    Yes, obviously since I talked about *future* 10nm/7nm CPUs which only applies to Intel's metric as TSMC are knocking out 7nm Zen3 and 5nm Apple SoCs.

    "11600k + mobo + money in my pocket = winning"
    You have an office PC for a couple of years; by the time GPU shortages end, you've got a terrible-value, inefficient platform to slot a GPU into. By then, the same money you spent today would buy you a 5nm/3nm CPU with better IPC, DDR5, and an even bigger gulf in performance per watt compared to your space-heater, factory-overclocked build.
  • Qasar - Friday, April 2, 2021 - link

    sounds more like you lost the argument, as i have suggested counterpoints to your BS claims about ewaste and such. bottom line is, from what it sounds like, you only prefer the intel cpus cause of the iGPU, which is fine, but there ARE other options, it's just you sound too cheap to consider them. a discrete vid card can and could be used in pretty much any comp, so it's not a waste as you claim.

    11600k + mobo + money in my pocket = winning
    5600x + mobo - overpriced dGPU to make it work = losing
    this is YOUR opinion, nothing more.
  • 29a - Friday, April 2, 2021 - link

    I just bought a brand new AMD processor.
  • vanish1 - Monday, April 5, 2021 - link

    https://youtu.be/KPgmeNstLa8?t=722

    Where the clip starts, the first guy has a Ryzen system sitting unused, using his phone instead.
    The next one after is the same situation, no GPU, but an Intel build, and look, it's running.

    step back fade away J for the win at the buzzer.....swooshhhhhh
  • vanish1 - Monday, April 5, 2021 - link

    Seriously; outside of work, do not support the GPU industry currently in any form, shape, or manner.
  • Fulljack - Wednesday, March 31, 2021 - link

    yeah, just bought an AMD Ryzen 7 4750G with much faster Vega 8 graphics than the paltry Xe-LP 32 EU, which is barely enough for 720p gaming.
  • vanish1 - Wednesday, March 31, 2021 - link

    Ryzen 4000 APUs are not available for purchase through retail, only OEMs
  • rUmX - Wednesday, March 31, 2021 - link

    You're fucking stupid.
  • jospoortvliet - Thursday, April 1, 2021 - link

    They are available in about a week. https://www.anandtech.com/show/9793/best-cpus
  • vanish1 - Thursday, April 1, 2021 - link

    Woof, a Zen 2 based APU that currently costs $637 on Newegg, ouch.

    Also, you're missing the point. Instead of overspending and wasting money to game, put the cash towards other parts of the system, then focus on gaming when GPU prices return to normal.
  • Prosthetic Head - Tuesday, March 30, 2021 - link

    The past called, they want their processors back!

    But seriously, it is sad to see back-ports onto older processes with (relatively) awful performance/Watt. Talking of which, can anyone point me to a recent power/performance analysis of current CPUs?
  • Prosthetic Head - Tuesday, March 30, 2021 - link

    e.g. sum up the area under these traces from the Handbrake test to see the total energy used to do the same job: https://images.anandtech.com/doci/16495/Power-HB.p...
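
    A sketch of that calculation (the samples below are placeholders; real values would come from the logged trace in the linked chart):

```python
import numpy as np

# Integrate the package-power trace over the run to get total energy
# for the same Handbrake job. Sample data here is hypothetical.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])          # seconds
watts = np.array([45.0, 210.0, 205.0, 198.0, 50.0])  # package power samples

energy_joules = np.trapz(watts, t)  # area under the power-vs-time curve
print(f"{energy_joules / 1000:.1f} kJ for the run")
```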
  • Bigos - Tuesday, March 30, 2021 - link

    Thanks for Factorio test results. I am looking forward to the Bench DB being filled.

    Could you share more about the save you are using for the test? Is it a big factory (a "mega base") or something smaller? Is it mostly bot or belt focused? Are trains being used?
  • wr3zzz - Tuesday, March 30, 2021 - link

    Handbrake seems to scale better with additional cores on Rocket Lake than on Zen3. Why is that?
  • 29a - Tuesday, March 30, 2021 - link

    I had a Zen+ CPU and Handbrake had trouble utilizing all of the cores
  • GeoffreyA - Tuesday, March 30, 2021 - link

    It could be due to x264 limiting the number of threads: when vertical resolution divided by thread count drops below a certain threshold---I think round about 30 or 40---quality begins to suffer.
  • GeoffreyA - Wednesday, March 31, 2021 - link

    I tested this now on FFmpeg but it should be the same on Handbrake because the x264/5 libraries are doing the actual encoding.

    I only have a 4C/4T CPU but used the "-threads" switch to request more. On x264, regardless of resolution, once more than 16 threads are asked for, it logs a warning that it's not recommended but goes ahead and uses the requested count, up to 128. I assume that running at default settings, like AT is probably doing with Handbrake, will let x264 cut off at 16 by itself. If someone could confirm this with a 32-thread CPU, that would be nice. As for x265, I gave it a try as well and the encoder refuses to go on if more than 16 threads are requested, saying the range must be between 0 and X265_MAX_FRAME_THREADS.

    In short, I reckon both these codecs are cutting off at 16 threads on default settings. If Ian or someone else could test how much extra is gained by manually putting in the count on a 32T CPU, that would be interesting.
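
    For anyone wanting to repeat the experiment, something along these lines should do it (a sketch; it assumes an FFmpeg build with libx264 on PATH, and "in.mkv"/"out.mkv" are placeholder file names):

```python
import subprocess

# Ask libx264 for more threads than its default cap; x264 warns above 16
# but complies, while x265 refuses more than 16 (as described above).
threads = 32
subprocess.run([
    "ffmpeg", "-i", "in.mkv",
    "-c:v", "libx264",
    "-threads", str(threads),  # override the encoder's default thread count
    "out.mkv",
], check=True)
```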
  • scott_htpc - Tuesday, March 30, 2021 - link

    Splat. Backporting doesn't really work & dead-end platform.

    What I'd really like to read is a detailed narrative of Intel's blunders over the last 5-10 years. To me, it probably makes a case study in failed leadership & hubris, but I would really like to read an authoritative, detailed account. I'm curious why the risks of their decisions were not enough to push them onto a better path forward.
  • Prosthetic Head - Tuesday, March 30, 2021 - link

    Yes, some sort of post mortem on Intel development over the last few years would be interesting. Once they abandoned the Pentium 4 madness, they did a good job with Core, Core2 and then the early stages of the 'i' series. Because AMD were by that point down their own dead end, they had essentially no competition for about a decade. The tempting easy explanation is that as a de facto monopoly for desktop and laptop CPUs, they only innovated enough to keep the upgrade cycle ticking over, then when AMD made a rapid comeback they got caught with their pants down and some genuine technical difficulties in fab tech.... But the reality could be a lot more complex and interesting than that.
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    > But the reality could be a lot more complex and interesting than that.

    The reality is Conroe was a once-in-a-lifetime IPC improvement, literally 90% better (or nearly double the performance!) clock-for-clock than the ill-fated Pentium 4 (see here: https://www.reddit.com/r/intel/comments/m7ocxj/pen...). They are not going to get that again unless Gelsinger clones himself across Intel's entire leadership team. Now, they may get something Zen-like in the ~50% range, but nothing Conroe-like unless ALL the stars align after a decade of complacency.
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    https://www.reddit.com/r/intel/comments/m7ocxj/pen...
  • 29a - Tuesday, March 30, 2021 - link

    Keep in mind that the P4 was a piece of shit built for marketing high clock speeds and was easily beaten by the Athlon 64 running 1GHz slower, so getting that much IPC wasn't as hard as usual.
  • GeoffreyA - Wednesday, March 31, 2021 - link

    "Keep in mind that P4 was a piece of"

    Not to defend the P4, but Northwood wasn't half bad in the Athlon XP's time, beating it quite a lot. It was Prescott that mucked it all up.
  • TheinsanegamerN - Wednesday, March 31, 2021 - link

    TBF, the only reason it wasn't half bad is AMD's willingness to just abandon the XP. I mean, only 2.23 GHz? 3 GHz OCs were not hard to do with their mobile lineup, and those obliterated anything Intel would have until Conroe. IF they had released 2.4, 2.6, and 2.8 GHz Athlon XPs, Intel would have been losing every benchmark against them.
  • GeoffreyA - Friday, April 2, 2021 - link

    Oh yes, the XP had the higher IPC and would have given Intel a sound drubbing if its clocks were only higher. Thankfully, the Athlon 64 came and turned the tables round. I remember in those days my heart was set on the 3200+ Barton but I ended up with a K8 budget system of sorts.
  • mitox0815 - Tuesday, April 13, 2021 - link

    "Just abandon"...those clocks you dream of might have been possible on certain CPUs, but definitely noton a broader line-up. The XPs ran hot enough as it was, screwing more out of them would've made no sense. THAT they tried with the 9590...and failed miserably. Not to mention people could OC the Northwoods too, beyond 3.6 or 3.7 Ghz in fact...negating that point entirely. As was said...Northwood, especially the FSB800 ones with HT were the top dogs until the A64 came around and showed them the door. Prescott was...ambitious, to put it nicely.
  • TheinsanegamerN - Wednesday, March 31, 2021 - link

    Netburst was built for both high clock speeds and predictable workloads, such as video editing, where it did quite well. Obviously it royally sucked for unpredictable workloads like gaming, but you could see where Intel was heading with the idea.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    'you could see where intel was heading with the idea'

    Creating the phrase 'MHz myth' in the public consciousness.
  • GeoffreyA - Friday, April 2, 2021 - link

    "MHz myth in the public consciousness"

    And it largely worked, even in the K8 era with the non-enthusiast public. Only when Core 2 Duo dropped to lower clocks was it accepted overnight that, yes, lower clocks are now all right because Intel says so.
  • Prosthetic Head - Tuesday, March 30, 2021 - link

    Your point still stands; however, the P4 was also a VERY low bar to measure IPC improvements against.
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    Well, Bulldozer was too and look what AMD did with Ryzen...
  • Oxford Guy - Saturday, April 3, 2021 - link

    AMD had a long time. 2011 is stamped onto the spreader of Piledriver and that was only a small incremental change from Bulldozer, which is even older.
  • Oxford Guy - Saturday, April 3, 2021 - link

    And, Bulldozer had worse IPC than Phenom. So, AMD had basically tech eternity to improve on the IPC of what it was offering. It made Zen 1 seem a lot more revolutionary.
  • GeoffreyA - Saturday, April 3, 2021 - link

    "It made Zen 1 seem a lot more revolutionary"

    You're right; and if one compares against Haswell or Skylake, one will see that the Intel and AMD designs are broadly the same from a bird's-eye point of view, except for AMD's split scheduler inherited from the Athlon. I think that goes to show there's pretty much only one way to make an efficient x86 CPU (notice that departures are disastrous: Netburst/Bulldozer). Having said that, I'm glad AMD went through the BD era: it taught them a great deal. It also forced them to start from scratch, which took their design further than revising K10 would have done.
  • mitox0815 - Tuesday, April 13, 2021 - link

    Discounting the possibility of great design ideas just because past attempts failed to varying degrees is a bit premature, methinks. But it does seem odd that it's constantly P6-esque design philosophies - VERY broadly speaking here - that take the prize in the end when it comes to x86.
  • blppt - Tuesday, March 30, 2021 - link

    Even Jim Keller, the genius who designed the original x64 AMD chip, AND bailed out AMD with the excellent Zen, didn't last very long at Intel.

    Might be an indicator of how messed up things are there.
  • BushLin - Tuesday, March 30, 2021 - link

    It's still possible that a yet to be released Jim Keller designed Intel CPU finally delivers a meaningful performance uplift in the next few years... I wouldn't bet on it but it isn't impossible either.
  • philehidiot - Tuesday, March 30, 2021 - link

    Indeed, it's a generation out. It's called "Intel Dynamic Breakfast Response". It goes "ding" when your bacon is ready for turning, rather than BSOD.
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    Raja Koduri is a terrible human being and has wasted money on party buses and booze while “managing” his side of the house at Intel. I think Jim Keller knew the corporation was a big pander fest of bureaucracy and was smart to leave when he did. The chiplet idea he brought to the table, while not innovation since AMD already was first to market, will help them to stay in the game which wouldn’t have happened if he hadn’t contributed it.
  • Oxford Guy - Saturday, April 3, 2021 - link

    Oh? Firstly, I doubt he was the exec at AMD who invented the FX 9000 series scam. Secondly, AMD didn’t beat Nvidia for performance per watt but the Fury X coming with an AIO was a great improvement in performance per decibel — an important metric that is frequently undervalued by the tech press.

    What he deserves the most credit for, though, is making GPUs that made miners happy. Fact is that AMD is a corporation not a charity. And, not only is it happy to sell its entire stock to miners it is pleased to compete against PC gamers by propping up the console scam.
  • mitox0815 - Tuesday, April 13, 2021 - link

    The first to the x86 market, yes. Chiplets - or modules, however you wanna call them - are MUCH much older than that. Just as AMD64 wasn't the stroke of genius it's made out to be by AMD diehards...they just repeated the trick Intel pulled off with their jump to 32 bit on the 386. Not even multicores were AMDs invention...I think both multicore CPUs and chiplet designs were done by IBM before.

    The same goes for Intel though, really. Or Microsoft. Or Apple. Or most other big players. Adopting ideas and pushing them with your market weight seems to be much more of a success story than actually innovating on your own...innovation pressure is always on the underdogs, after all.
  • KAlmquist - Wednesday, April 7, 2021 - link

    The tick-tock model was designed to limit the impact of failures. For example, Broadwell was delayed because Intel couldn't get 14nm working, but that didn't matter too much because Broadwell was the same architecture as Haswell, just on a smaller node. By the time the Skylake design was completed, Intel had fixed the issues with 14nm and Skylake was released on schedule.

    What happened next indicates that people at Intel were still following the tick-tock model but had not internalized the reasoning that led Intel to adopt the tick-tock model in the first place. When Intel missed its target for 14nm, that meant it was likely that 10nm would be delayed as well. Intel did nothing. When the target date for 10nm came and went, Intel did nothing. When the target date for Sunny Cove arrived and it couldn't be released because the 10nm process wasn't there, Intel did nothing. Four years later, Intel has finally ported it to 14nm.

    If Intel had been following the philosophy behind tick-tock, they would have released Rocket Lake in 2017 or 2018 to compete with Zen 1. They would have designed a new architecture prior to the release of Zen 3. The only reason they'd be trying to pit a Sunny Cove variant against Zen 3 would be if their effort to design a new architecture failed.
  • Khenglish - Tuesday, March 30, 2021 - link

    I've said it before but I'll say it again. Software render Crysis by setting the benchmark to use the GPU, but disable the GPU driver in the device manager. This will cause Crysis to use the built-in Windows 10 software renderer, which is much newer and should be more optimized than the Crysis software renderer. It may even use AVX and AVX2, which Crysis certainly is too old for.
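    For anyone wondering what that fallback actually is: the "built-in Windows 10 software renderer" is WARP, Microsoft's software rasterizer, and Direct3D apps land on it when no usable GPU driver is present. As a minimal sketch of my own (not from the article; assumes a Windows SDK toolchain), this is how a program would request WARP explicitly:

        // Illustrative sketch: requesting WARP, the Windows software
        // rasterizer, explicitly through Direct3D 11. Build: cl /EHsc warp.cpp
        #include <d3d11.h>
        #include <cstdio>
        #pragma comment(lib, "d3d11.lib")

        int main() {
            ID3D11Device* device = nullptr;
            ID3D11DeviceContext* context = nullptr;
            D3D_FEATURE_LEVEL level = {};

            // D3D_DRIVER_TYPE_WARP asks for the software rasterizer directly;
            // disabling the GPU driver just forces apps onto this same path.
            HRESULT hr = D3D11CreateDevice(
                nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                nullptr, 0,               // default feature levels
                D3D11_SDK_VERSION, &device, &level, &context);

            if (SUCCEEDED(hr)) {
                printf("WARP device created, feature level 0x%x\n",
                       (unsigned)level);
                context->Release();
                device->Release();
            } else {
                printf("D3D11CreateDevice failed: 0x%08x\n", (unsigned)hr);
            }
            return 0;
        }

    WARP JIT-compiles its rasterization pipeline and can use modern vector extensions, so it should, at least in principle, do far better than a 2007-era software renderer.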
  • Adonisds - Tuesday, March 30, 2021 - link

    Great! Keep doing those Dolphin emulator tests. I wish there were even more emulator tests
  • dsplover - Tuesday, March 30, 2021 - link

    Fast forward to 2022, when Intel strikes back. 14nm is just old stock; each reiteration runs hotter than the last.
  • eva02langley - Tuesday, March 30, 2021 - link

    Anyone defending RL is either paid by Intel, or a sad tool doing Intel's bidding for free.
  • lmcd - Tuesday, March 30, 2021 - link

    The i5 looks competitive performance-wise and performance-per-dollar-wise. Given how expensive B550 boards are, it looks like a solid midrange option when power consumption isn't a factor.
  • TheinsanegamerN - Wednesday, March 31, 2021 - link

    The i5 11600 and i5 11400 are both good deals relative to AMD.
  • Slash3 - Tuesday, March 30, 2021 - link

    Your RDR2 "8k" benchmark graphs are still mis-mis-marked as 4K, again.
  • hansip87 - Tuesday, March 30, 2021 - link

    Was disappointed that my z490g board can do Rocket Lake, but seeing the results, maybe a 10850K will be the better upgrade from my 11400F when my upgrade syndrome kicks in.
  • hansip87 - Tuesday, March 30, 2021 - link

    Can't*
  • 29a - Tuesday, March 30, 2021 - link

    Now is a good time to buy a 10000 series chip, they're super cheap.
  • rolfaalto - Tuesday, March 30, 2021 - link

    Why is the i9 significantly slower at AVX-512 than the i7? My main interest is getting the fastest single-core AVX available, leaving all the parallel stuff to the GPUs.
  • schujj07 - Tuesday, March 30, 2021 - link

    Unless your software can specifically use AVX512 you are better with the Ryzen 5000 series.
  • SystemsBuilder - Tuesday, March 30, 2021 - link

    Honestly, I think they just mislabeled the 11900K and 11700K in the first 3D Particle Movement v2.1 test results, OR it's thermal throttling, because the two are architecturally identical (the 11900K just runs higher frequencies).
    But the whole topic of AVX-512 is interesting. It looks like Intel did not release Cypress Cove AVX-512 architecture details like it did for Sunny Cove last year (and the Skylake-X cores before that). But if Cypress Cove is close enough to Sunny Cove (and it should be), they have crippled AVX-512 quite severely; to confirm that, we need to see the official Intel slides on the core design. I am specifically talking about how core port 5 is crippled in Sunny Cove (compared to what Skylake-X and Cascade Lake-X have) and does not include an FMA, essentially cutting the throughput in half for FP32 and FP64 workloads. Port 5 does have an ALU, so integer workloads should run at 100% compared to Skylake-X. I'd really like to see the Cypress Cove AVX-512 back-end design at the port level from Intel to understand this better, though.
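    One way to check the port-5 question without Intel's slides is to just measure it. The sketch below is my own rough illustration (not anything from the article): it times many independent 512-bit FMA chains, so sustained throughput rather than latency is the limit. On a core with FMA units on ports 0+1 and 5 it should land near 2 FMAs/cycle; near 1 FMA/cycle if port 5 has no FMA. It assumes an AVX-512 capable CPU and a compiler flag like -mavx512f.

        // Rough FMA-throughput probe (illustrative sketch, g++ -O2 -mavx512f).
        #include <immintrin.h>
        #include <chrono>
        #include <cstdio>

        int main() {
            const long iters = 100000000;
            // Eight independent accumulator chains hide the ~4-cycle FMA
            // latency, so the FMA units' throughput becomes the bottleneck.
            __m512 acc[8];
            for (int j = 0; j < 8; ++j) acc[j] = _mm512_set1_ps(1.0f + j);
            const __m512 a = _mm512_set1_ps(0.999999f); // keeps values bounded
            const __m512 b = _mm512_set1_ps(1.0f);      // (no overflow/denormals)

            auto t0 = std::chrono::steady_clock::now();
            for (long i = 0; i < iters; ++i)
                for (int j = 0; j < 8; ++j)
                    acc[j] = _mm512_fmadd_ps(a, acc[j], b);
            auto t1 = std::chrono::steady_clock::now();

            double secs = std::chrono::duration<double>(t1 - t0).count();
            printf("%.2f billion 512-bit FMAs/sec\n", iters * 8.0 / secs / 1e9);

            // Sum all chains so the optimizer can't discard the work.
            __m512 total = acc[0];
            for (int j = 1; j < 8; ++j) total = _mm512_add_ps(total, acc[j]);
            float sink[16];
            _mm512_storeu_ps(sink, total);
            printf("checksum: %f\n", sink[0]);
            return 0;
        }

    Dividing the result by the core's AVX-512 clock gives FMAs per cycle, which is exactly the number in question.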
  • GeoffreyA - Tuesday, March 30, 2021 - link

    I believe they added an FMA to Port 5, and combining that one with the FMA on Port 0 creates a single AVX-512 "port" - or rather, the high- and low-order bits are dispatched to ports 5 and 0.

    https://www.anandtech.com/show/14514/examining-int...

    https://en.wikichip.org/w/images/thumb/2/2d/sunny_...
  • GeoffreyA - Tuesday, March 30, 2021 - link

    I'm not fully sure, but I think Cypress is just Willow, which in turn was just Sunny, with changes to the cache. Truly, the Coves are getting cryptic, and intentionally so.
  • SystemsBuilder - Tuesday, March 30, 2021 - link

    The official Intel slide posted on this page: https://www.anandtech.com/show/14514/examining-int... (specifically this slide: https://images.anandtech.com/doci/14514/BackEnd.jp... ) shows what I'm talking about. The 2nd FMA is missing on port 5 but the ALU is there - 50% of the FP compute power of the Skylake-X architecture. The other slide, on WikiChip, contradicts the official Intel slide, OR it only applies to the server side with full Sunny Cove enabled (usually the consumer client-side version is a cut-down).
    In any case, these slides are not Cypress Cove, so the question remains: what have they done to the AVX-512 arrangement on ports 0+1 and port 5?
  • GeoffreyA - Tuesday, March 30, 2021 - link

    You're right. I didn't actually look at the Intel slide but was basing it more on the Wikichip diagram and Ian's text. Will be interesting if we can find that information on Cypress C.
  • JayNor - Tuesday, March 30, 2021 - link

    "My main interest is getting the fastest single-core AVX available..."

    Rumors within the last week say there will be Emerald Rapids HEDT chips next year. Not sure about Ice Lake Server workstation chips. If either of these provides dual AVX-512 FMAs, they might be worth the wait.
  • SystemsBuilder - Tuesday, March 30, 2021 - link

    I'm thinking Sapphire Rapids, which is due to arrive in late 2021 (very best case; more likely 2022), is the one to hold out for. Built on the better-performing 10nm FinFET process, it will add PCIe 5.0 and DDR5, and further improve on AVX-512 with BF16 and AMX (Advanced Matrix Extensions): https://fuse.wikichip.org/news/3600/the-x86-advanc...
    Now, if you read about this, you realize what a step up for (FP and int) compute that is. Massive!
  • scan80269 - Tuesday, March 30, 2021 - link

    It looks like Rocket Lake is the first desktop-class x86 processor to support hardware-accelerated AV1 video decode, similar to Tiger Lake for mobile. Interesting how this power-hungry processor family can deliver good energy efficiency when it comes to watching 4K HDR movies/videos. OTT platform providers need to offer more content encoded in AV1, though.
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    Meh... This is a fixed-function IP block for a very specific task so it is going to be low power. At this point, for most people, HEVC support is what actually matters. HEVC already offers a 50% improvement in bitrate efficiency over H.264, and AV1 only claims the same thing. Royalties or not, because HEVC was first to the game, it became the industry standard for UHD/4K Blu-ray. Timing is everything and AV1 missed the boat on that by about five years. So with the industry locked into HEVC, AV1 is going to have an incredibly hard time getting uptake outside of online streaming, which is a whole other ball of wax. And even then, as a content creator, you can use HEVC royalty free anyway if you are livestreaming on YouTube.
  • GeoffreyA - Tuesday, March 30, 2021 - link

    While AV1's quality is excellent, surpassing that of HEVC, its encoding speed is impractical for most people,* and I'm doubtful whether it's going to get much better. If people (and pirates, yes) can't use it easily, its spread will be limited. The only advantage it has, which I can vouch for anecdotally, is superior quality to HEVC. But even this advantage will be short lived, once VVC enters the fray in the form of x266. I've got no idea how x266 will perform, but from testing the Fraunhofer encoder, saw that VVC and AV1 are in the same class, VVC being slightly ahead, sharper, and faster.

    * libaom, the reference encoder. Intel's SVT-AV1 is faster but has terrible quality.
  • mitox0815 - Tuesday, April 13, 2021 - link

    Hard to see that as a selling point...niche functionality doesn't really shine when paired with noticeable performance and VAST efficiency disadvantages
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    Yawn... wake me up in 9 months when Alder Lake is out. Honestly, what a snoozefest.
  • TristanSDX - Tuesday, March 30, 2021 - link

    They should backport Golden Cove rather than Sunny Cove. Maybe with 12th gen (the Golden Cove core) they'll release the i9, i7 and i5 on 10nm, while the i3, Pentium and Celeron stay on 14nm.
  • PaulHoule - Tuesday, March 30, 2021 - link

    Intel is making a big branding mistake when they come out with "i9"; they should just admit the state they are in and make it "i1" or "i-2" instead.
  • BushLin - Tuesday, March 30, 2021 - link

    They won't fool the geeks but the majority of the market just sees confusing product names on a Dell dropdown option and will pick the bigger number if they can afford it. Sad but true.
  • mitox0815 - Tuesday, April 13, 2021 - link

    We proudly introduce the Intel Core iGotNothin
  • Bagheera - Tuesday, March 30, 2021 - link

    we all knew RKL was gonna be a steaming pile when it was announced, but Intel fanboys kept believing in miracles.

    moving the goalpost 101:
    - "Rocket Lake will trash Zen 3!" (fail)
    - "Rocket Lake will be so much cheaper!" (epic fail)
    - "Rocket Lake will have stock at least..." (yes because nobody is buying them lol)
  • schujj07 - Tuesday, March 30, 2021 - link

    "The i7 benchmarks are invalid because they are early"
    Well they were quite indicative of what we are seeing from the 11900k.
  • arashi - Wednesday, March 31, 2021 - link

    Rocket Lake will do 5.6!!!
  • WaltC - Tuesday, March 30, 2021 - link

    Intel has become a company run by old men, for old men. Just interesting to see that the former CPU emperor has no clothes... all that time, all that time ahead of AMD, and Intel had nothing at all in the oven. Not one bloody thing! Absolutely amazing--it points out with drama what's wrong when major hardware markets are dominated by monopolists like Intel.
  • mitox0815 - Tuesday, April 13, 2021 - link

    The development doesn't seem so special when you think about how extremely high-tech this market is...freshly entering it without special circumstances is nearly impossible. The entry barriers are crazy. So increasing market consolidation seems natural.
  • TrueJessNarmo - Tuesday, March 30, 2021 - link

    The 5600X abnormally topping the gaming charts.

    On Gamers Nexus' charts, on the other hand, it was consistently below the 5800X/5900X/5950X in gaming, even though they clearly have a strong bias towards 6-core budget CPUs in their recommendations.

    Completely opposite picture. Why is that?
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    It may come down to the memory settings. Ian strictly follows JEDEC timings, which all reviewers should.
  • TrueJessNarmo - Tuesday, March 30, 2021 - link

    The stock 5600X has lower clocks than the 5800X/5900X/5950X. It should be slower. Any other result is abnormal.

    Besides, Gamers Nexus arguably has the most trusted and most accurate benchmarks for gaming, CPUs, and GPUs. You can read up on their test methodology here:

    https://www.gamersnexus.net/guides/3577-cpu-test-m...

    I suspect it has something to do with subtly insufficient power delivery or cooling, or both. Or perhaps they just have a really great 5600X sample and really terrible 5800X/5900X samples.
  • BushLin - Tuesday, March 30, 2021 - link

    They absolutely shouldn't, if their audience is mostly buying decent RAM and enabling XMP. I would argue that people reading AnandTech deserve to see what a platform can reasonably deliver, rather than castrated pre-built systems. DDR4-3600 isn't expensive or unstable for modern platforms.
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    I realize this, but AnandTech is in the interesting position of appealing to both enthusiasts and professionals, so using JEDEC is an effort to give a nod to the professional side of the audience... presumably, at least. Nonetheless, I agree: I have always used XMP/DOCP settings since day 1 of Ryzen as much as possible, so it would be nice to see the performance with faster RAM settings.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    'Ian strictly follows JEDEC timings which all reviewers should.'

    BS. Motherboard makers even put out lists of supported XMP RAM sticks.

    They try to have their cake and eat it too, to justify their kneecapping of Zen 1 and 2.

    • Extremely high CFM coolers, with no decibel level information

    • Allowing boards to bypass the supposed Intel standard for turbo

    No, sorry... they're trying to make Intel look and sound good because they want to retain early access. It's part of the devil's bargain the tech press has to follow. Either pay the piper by posting a lot of awful spin and tricky tactics or be excluded and lose all those clicks to others.
  • Qasar - Wednesday, March 31, 2021 - link

    Here is a thought: if you think AT is doing such a crappy job with their testing methodology, go make your OWN review site and review products how you think it should be done. Problem solved.
  • Oxford Guy - Thursday, April 1, 2021 - link

    It’s not a particularly original or useful thought. Feedback from customers is valued by most businesses.
  • Qasar - Thursday, April 1, 2021 - link

    Not when it sounds more like whining and complaining.
  • Oxford Guy - Saturday, April 3, 2021 - link

    Since you are interested in playing Mr. Censor I can give you some advice. Instead of campaigning to have this site degraded to be like Ars and Slashdot — echo chambers of post hiding and clique voting, there are more than enough sites like that where you can find that kind of entertainment.

    I doubt that this site is going to change the comments system into one of those echo chambers for you. But, I can’t stop you from continuing your peevish inept censorship campaign.
  • Qasar - Saturday, April 3, 2021 - link

    Nor will it ever change, no matter how much you whine and complain about everything. Just to give you something else to whine and complain about: what's your point? :-)
  • marsdeat - Tuesday, March 30, 2021 - link

    Slight error on page 1: "it really has to go against the 12-core Ryzen 9 5900X, where it loses out by 50% on cores but has a chance to at least draw level on single thread performance."

    No, it loses out by 33% on cores, or the 5900X has 50% more cores. The 11900K doesn't lose by 50% on cores.
  • factual - Tuesday, March 30, 2021 - link

    At this point in time, the best bang for buck CPU is 10700K (At least in Canada). There's no point in wasting money on 11th gen Intel CPUs, you are better off paying the premium for the Ryzen 5000 instead of buying 11th gen Intel.
  • Fulljack - Wednesday, March 31, 2021 - link

    Or the 10700F if you already have a GPU. It's much cheaper, even against the older 3700X, in my country.
  • JayNor - Tuesday, March 30, 2021 - link

    While ABT may be hard for Intel to explain, the description chart in this article indicates that a good cooler + enabled ABT might provide an interesting benchmark result, especially if this is effectively what AMD enables in their default configuration.
  • Byte - Tuesday, March 30, 2021 - link

    Rocket Lake is akin to Nvidia's Turing: a hard-pass generation of chips. Unless you really, really can't wait a year. But then again, maybe you can actually buy one.
  • jeremyshaw - Tuesday, March 30, 2021 - link

    Turing was at least consistently faster than their predecessors in the consumer space (and could still claim #1 gaming against its contemporaries). Just being #1, in and of itself, is reason enough to exist. Turing also didn't move the needle in price/perf, but it didn't regress, either. Rocket Lake does not have any real performance angle to hold onto, outside of the narrow AVX512 market (is it even a market?), and its price/perf is worse in many aspects (and that's just MSRP to MSRP, nevermind street prices).
  • AdamK47 - Tuesday, March 30, 2021 - link

    A 2080 Ti?

    What's going on over there at AnandTech?
  • Slash3 - Tuesday, March 30, 2021 - link

    That 2080 Ti is almost brand new to AT, too. Game tests until recently had been done with a GTX 1080. It's not really their wheelhouse at this point and I think they're more than aware. It's ok, the gap has long since been filled.
  • Billy Tallis - Tuesday, March 30, 2021 - link

    Ian needs three matched GPUs to keep multiple testbeds running in parallel, otherwise it would take far too long to run a reasonable number of CPUs through this kind of test suite. Sourcing three GPUs like that is a lot harder than getting an individual review sample.
  • shabby - Wednesday, March 31, 2021 - link

    Plenty of gpus on eBay 😂
  • Oxford Guy - Wednesday, March 31, 2021 - link

    There is no good explanation for not testing AMD with the same wind tunnel copper cooler as used on the Intel.

    And both should be tested with a popular cheap cooler like the EVO. The budget has to include cooler cost. And if AMD has parts that perform just as well with an EVO, that is a difference in availability, not only in terms of parts people have lying around but also in terms of being able to easily buy one in a computer store.

    Don't have time to do all the tests? OK. Choose the most compute-demanding game, the most power-consuming AVX2 bench, whatever your standard AVX-512 bench is, and one non-AVX power-demanding real-world bench. That's four tests, which should be easily done.
  • Qasar - Wednesday, March 31, 2021 - link

    " There is no good explanation for not testing AMD with the same wind tunnel copper cooler as used on the Intel. " yes there is, cant mount it to the board as it doesnt fit, cause there is no mounting hardware.
  • Oxford Guy - Thursday, April 1, 2021 - link

    Then pick a different cooler for an apples to apples comparison.

    That’s obvious.
  • Qasar - Thursday, April 1, 2021 - link

    And if you don't happen to have one on hand, and don't have the time to go and get one?

    Even that should have been obvious to you.
  • Oxford Guy - Saturday, April 3, 2021 - link

    What’s obvious is that this is an old professional organization. That means very basic things like apples-apples cooler tests are expected.
  • Makste - Tuesday, April 6, 2021 - link

    Agreed
  • Oxford Guy - Wednesday, March 31, 2021 - link

    Those 2080 Tis are running for extremely high prices these days.

    Maybe they should switch to something people can get for an affordable price, like a $250 Radeon 7750 or GeForce 1030.
  • BushLin - Wednesday, March 31, 2021 - link

    So... Create a GPU bottleneck in a CPU review? I can see it now: Every CPU scored the same in our gaming benchmarks... The $100 CPU looks fantastic value!
  • Oxford Guy - Saturday, April 3, 2021 - link

    It’s clearly sarcasm. The Turing stuff, which was a poor value even before this latest mining fiasco, is very expensive at its top end — putting it quite outside the budget of a lot of people — that is, if they could even get their hands on one in the first place.
  • Qasar - Wednesday, March 31, 2021 - link

    Maybe you should stop whining and just leave if AT makes you this unhappy and angry, Oxford Guy.
  • Oxford Guy - Thursday, April 1, 2021 - link

    When this becomes your personal website then you can decide who is to be censored and who is not. Until then, keep your comments relevant.
  • Qasar - Thursday, April 1, 2021 - link

    right after you do
  • Oxford Guy - Saturday, April 3, 2021 - link

    And look up the tu quoque fallacy.
  • Qasar - Saturday, April 3, 2021 - link

    Whining and complaining is still the same, no matter how you look at it. Again, if this site makes you that unhappy and angry, due to the way they test and review products, then why do you keep coming here?
  • zamroni - Tuesday, March 30, 2021 - link

    It should be called Rocket Lame.
    It runs hot like a rocket, too.
  • SkyBill40 - Tuesday, March 30, 2021 - link

    Tech Jesus said it best and most bluntly: A waste of silicon.
  • SystemsBuilder - Tuesday, March 30, 2021 - link

    Ian,
    your writing about how "hard" AVX-512 is to program was fine in the first article and maybe even in the second article you wrote, but you keep on repeating the exact same sentence (paragraph) on how hard AVX-512 is to program and keep on quoting Jim Keller: "there are only a couple dozen or so people who understand how to extract the best performance ...".
    That was a while ago, and I can assure you there are plenty of people who know how to do this now. It's assembly, and any CS/CE graduate worth their salt, with linear algebra (vector calculus also helps), advanced computer architecture and a serious parallel programming class, would know how to do it with some work.
    AVX-512 is not mysterious or strange; it's just vector math + vectorization of normal scalar operations on configurable 512-bit vectors. Yes, you do need to vectorize your algorithms from the ground up, because you cannot rely on compilers to vectorize your old sequential scalar algorithms for you (they'll make some attempts, but will disappoint), and you do need to write some code either in assembly directly or using intrinsics, as well as understand the pipelining and scheduling of the AVX-512 instructions and their dependencies (there are tools to help you with this too), BUT it's not harder than that. It's not magic. It's just normal, solid computer science work. Can you please change the narrative on AVX-512 hardness, because I think it is just misleading in 2021, AVX-512 having been available in mainstream CPUs since Skylake-X was released. Thx.
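    To make "vectorize your algorithms and use intrinsics" concrete, here is a minimal sketch (my own illustration, assuming an AVX512F-capable CPU and a compiler flag like -mavx512f) of a dot product: one FMA per 16 floats, with a masked load handling the tail instead of a scalar clean-up loop:

        // Illustrative AVX-512 dot product (g++ -O2 -mavx512f): one FMA per
        // 16 floats, masked loads for the tail instead of a scalar loop.
        #include <immintrin.h>
        #include <cstdio>
        #include <vector>

        float dot_avx512(const float* x, const float* y, size_t n) {
            __m512 acc = _mm512_setzero_ps();
            size_t i = 0;
            for (; i + 16 <= n; i += 16)
                acc = _mm512_fmadd_ps(_mm512_loadu_ps(x + i),
                                      _mm512_loadu_ps(y + i), acc);
            if (i < n) {  // tail: only the live lanes load, the rest stay zero
                __mmask16 m = (__mmask16)((1u << (n - i)) - 1);
                acc = _mm512_fmadd_ps(_mm512_maskz_loadu_ps(m, x + i),
                                      _mm512_maskz_loadu_ps(m, y + i), acc);
            }
            return _mm512_reduce_add_ps(acc);  // horizontal sum of 16 lanes
        }

        int main() {
            std::vector<float> x(1003, 1.0f), y(1003, 2.0f);
            printf("dot = %.1f (expect 2006.0)\n",
                   dot_avx512(x.data(), y.data(), x.size()));
            return 0;
        }

    Nothing exotic: the only AVX-512-specific ideas here are the 16-wide registers and the lane mask.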
  • Hifihedgehog - Tuesday, March 30, 2021 - link

    Haha. No. Ian is absolutely right, and big names in the industry (like Linus Torvalds) are mostly in agreement on this too: AVX-512 hides the warts of the underlying performance discrepancies of their hardware when doing general everyday compute.
  • SystemsBuilder - Wednesday, March 31, 2021 - link

    And you are a Computer Science graduate? What Linus T. is saying is that AVX-512 is a power hog, and he is right about that. Linus T. is not saying that only "a couple dozen or so people" are able to program it. Power requirements and programming difficulty are 2 different things.
    On the second point, I 100% stand by the claim that any decent Computer Science/Engineering graduate should be able to program AVX-512 effectively (overcoming the difficulty, not the power requirements).
    Also, I do program AVX-512, and I 100% stand by what I said. You just need to know what you are doing and vectorize your algorithms. If you use the good old sequential algorithms you will not achieve anything with AVX-512, but if you vectorize your classical algorithms you will achieve >100% benefits in many inner loops in so-called mainstream programming. AVX-512 can give you a 2x uplift if you know how to utilize both FMA units on ports 0+1 and 5, and it's not hard.
    Lastly, with decent negative AVX-512 offsets in the BIOS, you can bring the power utilization down to OK levels AND still get 2x improvements in the inner loops (because of the vectorized algorithmic improvement).
  • Hifihedgehog - Wednesday, March 31, 2021 - link

    > and you are a Computer Science graduate?

    No, I am a Computer Engineering graduate. Sorry, but you are grasping at straws. Plus you are overcomplicating the obvious to try to be an Intel apologist. Just see this and this. Read it and weep. Intel flopped big time this release:

    https://i.imgur.com/HZVC03T.png

    https://i.imgflip.com/53vqce.jpg
  • SystemsBuilder - Wednesday, March 31, 2021 - link

    So, fellow CS/CE grad, I'm not arguing that AVX-512 is a power hog (it is) or that the AVX-512 offsets slow down the rest of the CPU (they do). I am arguing that the premise that AVX-512 is so incredibly hard that only a "couple dozen or so people" can do it is wrong today - Skylake-X with AVX-512 launched in 2017, for heaven's sake. Surely I can't be the only CS/CE guy who has figured it out by now. I mean, really? When Ian wrote what Keller said (and keeps on writing it), that AVX-512 is sooo hard that only a few guys on the planet can do it well, my reaction was "let's see about that". I mean come on, guys, really!
  • SystemsBuilder - Wednesday, March 31, 2021 - link

    More specifically, Linus is concerned that because you need to use negative offsets to keep power utilization down when engaging AVX-512, it slows down everything else going on, i.e. the overall CPU impact of AVX-512's power requirements. The new core designs (maybe already Cypress Cove, but Sapphire Rapids definitely!) will allow AVX-512 workloads to run at one frequency (with lower negative offsets than, for instance, Skylake-X) and non-AVX-512 workloads at a different frequency on various cores, and keep within the power budget. This is ideal.
  • arashi - Wednesday, March 31, 2021 - link

    This belongs in r/ConfidentlyIncorrect and r/IAmVerySmart, anyone who thinks coding for AVX512 PROPERLY is doable by "any CS/CE major graduate worth their salt" would be laughed out of the industry.
  • Hifihedgehog - Wednesday, March 31, 2021 - link

    Exactly. The real reason for the nonsensical wall of text is that SystemsBuilder is trying desperately to overexplain things to put lipstick on a pig. And he repeats himself, too, like I am listening to an automated bot caught in a recursive loop, which is quite funny actually.
  • SystemsBuilder - Wednesday, March 31, 2021 - link

    So, you are a CE major: have you actually tried to program in AVX-512? If not, try to do a matrix-by-matrix multiplication of 16x16 FP32 matrices, for instance, and come back. You'll notice an incredible performance increase. It's not lipstick on a pig; it actually is very powerful, especially for computing through large volumes of related data, SIMD-style.
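    For reference, the exercise I mean looks roughly like this - a hedged sketch of my own (the broadcast+FMA formulation is just one way to write it), leaning on the fact that a 16-float row is exactly one zmm register:

        // Illustrative 16x16 FP32 matrix multiply with AVX-512 intrinsics
        // (g++ -O2 -mavx512f). C, A, B are row-major 16x16 float arrays.
        #include <immintrin.h>
        #include <cstdio>

        void matmul16(float* C, const float* A, const float* B) {
            __m512 b[16];
            for (int k = 0; k < 16; ++k)
                b[k] = _mm512_loadu_ps(B + 16 * k);   // row k of B
            for (int i = 0; i < 16; ++i) {
                __m512 c = _mm512_setzero_ps();
                for (int k = 0; k < 16; ++k)          // c += A[i][k] * B[k][:]
                    c = _mm512_fmadd_ps(_mm512_set1_ps(A[16 * i + k]), b[k], c);
                _mm512_storeu_ps(C + 16 * i, c);
            }
        }

        int main() {
            float A[256], B[256], C[256];
            for (int i = 0; i < 256; ++i) { A[i] = 1.0f; B[i] = 2.0f; }
            matmul16(C, A, B);
            printf("C[0][0] = %.1f (expect 32.0)\n", C[0]);
            return 0;
        }

    Each output row is just 16 broadcast+FMA steps with no horizontal reductions, which is why this maps so cleanly onto the hardware.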
  • Meteor2 - Saturday, April 17, 2021 - link

    Disappointing response. You throw insults but no rebuttals.

    Methinks SB has a point.
  • SystemsBuilder - Wednesday, March 31, 2021 - link

    Really? And you are a CS graduate? Have you tried?
  • MS - Tuesday, March 30, 2021 - link

    What the hell is that supposed to mean, that you can't get the frequency at 10nm and therefore you have to stick with the 14nm node? That's pure nonsense. AMD is at 7nm and they are getting the target frequencies. Maybe stop spreading the Kool-Aid and call a spade a spade....
  • schujj07 - Wednesday, March 31, 2021 - link

    Intel 10nm is not TSMC 7nm.
  • watzupken - Wednesday, March 31, 2021 - link

    "What the he'll is that supposed to mean that you can't you can't get the frequency at 10 nm and therefore you have to stick with the 14 nm node? That's pure nonsense, AND is at 7 nm and they are getting the target frequencies. Maybe stop spreading the Coolaid and call a spade a spade...."

    I am not sure how true this is, but the clockspeeds of the early versions of 10nm were abysmal. If you look at the first gen of 10nm chips from Intel, Cannon Lake, not just is the clockspeed low, but the specs are bad. Second gen 10nm, Ice Lake, shows a similar trend of very low clockspeeds. I am using an i5 Ice Lake U that is advertised with a base clock of 1GHz. It is only with 10nm SuperFin (third gen) that you start seeing higher clockspeeds. Also, yield with early 10nm is certainly an issue, or they would not have had to push out Rocket Lake @ 14nm while laptops and servers/workstations (only recently) are on 10nm. I suspect Intel is pushing their 10nm down the same path as their current 14nm: feed it with more power and push clockspeed as high as possible. I will not be surprised if Alder Lake brings better performance with a max of 8 big cores, but power consumption at load may only see marginal improvements. Light loads may not expose the power inefficiency because the small cores will pick up the work.
  • boozed - Tuesday, March 30, 2021 - link

    There's some weirdness going on in at least one, possibly two of the FFXV 95th percentile graphs
  • watzupken - Wednesday, March 31, 2021 - link

    I feel I have to give Intel credit for moving forward with a 14nm Rocket Lake, instead of hanging around like they did for the last 5 years with the same Skylake chip boosted with steroids. But evidently, 14nm is becoming a burden to their progress. I know Intel supporters will claim that 14nm is capable of competing with 7nm. On the surface, yes. But at the cost of massive power draw and heat output, with regression in performance compared to the previous i9 in some cases. I would say the i5 is still a chip worth considering, but not the i7 or i9 if your main use case is gaming. At the respective price points, looking just at the price of an Intel i7 or i9 Rocket Lake chip, it appears cheap, but when you consider that you need a hardcore motherboard and cooling to keep the chip chugging at a high all-core clockspeed, the cost actually skyrockets.
    Personally, after looking at a number of reviews of Rocket Lake, it seems to me it's a product that is too little and too late. Plus, if you are going for an i7 or i9, your upgrade path is dead, since there will be no Rocket Lake with a higher core count. At least in the AMD camp, if you settled for a Ryzen 5 or 7, you may still have the option to scoop up a Ryzen 9 if prices come down with the introduction of Zen 4. In the absence of AMD chips at MSRP, I guess I will only recommend a Rocket Lake i5, because of the significant improvement over last gen. Otherwise, I don't think most will lose out much by going for the discounted Comet Lake chips.
  • Hifihedgehog - Wednesday, March 31, 2021 - link

    LOL. Keep dreaming...

    https://i.imgflip.com/53vqce.jpg
  • 529th - Wednesday, March 31, 2021 - link

    No chipset fans for their PCIe 4.0?
  • JMC2000 - Wednesday, March 31, 2021 - link

    Intel 500-series chipsets don't have PCI-E 4.0, only the CPU does.

    https://ark.intel.com/content/www/us/en/ark/produc...
  • yeeeeman - Wednesday, March 31, 2021 - link

    One of the few tech sites that remained professional and didn't use click-baity titles or disrespect Intel.
    Rocket Lake is clearly a stopgap and a product that doesn't make sense, but it is what it is, and as a professional tech writer you should treat it with decency, not write insulting words and call it poop like Hardware Unboxed did, for example.
  • XabanakFanatik - Wednesday, March 31, 2021 - link

    Ok Piednoel
  • Qasar - Wednesday, March 31, 2021 - link

    Go see how well Gamers Nexus liked this CPU.
    Intel deserves ALL the flak they get for this CPU; it's a joke, and a dud.
  • Beaver M. - Wednesday, March 31, 2021 - link

    Acting like GN and HUB do gets them lots of AMD fanboy clicks/views.
  • Beaver M. - Wednesday, March 31, 2021 - link

    I guess living in isolation for a year makes people more and more antisocial and aggressive. And that then in turn sells well.
  • Qasar - Wednesday, March 31, 2021 - link

    So they are AMD fanboys because they told the truth? More like you are the Intel fanboy trying to defend this dud of a CPU. Come on, most reviews say the same thing; some are just more harsh, and rightfully so.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    Tech fans are continually disappointed by the lack of adequate competition in a multitude of tech markets (e.g. GPUs, CPUs, search, leading-edge lithography machines/foundries, etc. etc.)

    We saw, way back in the 80s, what happens when competition is shut down. The Japanese seized the DRAM market by dumping, forcing American DRAM makers out of the market — then promptly raising prices drastically.

    We saw revolutionary products like the Apple Lisa replaced by years of toy-grade machines at very high prices. The Lisa shipped with 1 MB of RAM and the first Mac with just 128K. The Apple IIGS shipped with just 256K, many years after the Lisa. We can thank more than Apple's love of fat margins. We can thank inadequate competition.

    We have seen that play out, time and time again. Now, it's so bad that it's worse than the bread lines of the USSR. At least if one waited in one of those, one might end up with some bread. These days, you have two choices: a line to get a very overpriced product (like the latest iPhone), or you can skip waiting in line because there's nothing to buy (GPUs).
  • Oxford Guy - Wednesday, March 31, 2021 - link

    (The Amiga 1000 only shipped with 256K of RAM, too, as I recall. It was a problem throughout the industry, not something due merely to Apple's margins.)
  • GeoffreyA - Saturday, April 3, 2021 - link

    I agree with your anti-corporation sentiment, but there's little we can do, except sigh. A worldwide boycotting of their products will work wonders but that'll never happen. As long as these rotters are out to make money---Intel, AMD, Google, the rest---it'll go on like this. Who knows, perhaps there's some vital link to entropy in all this, and why everything always goes awry on earth.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    'One of the few tech sites that remained professional and didn't use click baity titles or disrespect intel.'

    This article uses bad spin to try to make Intel's product look better than it is.

    Just one example:

    ‘Intel has stated that in the future it will have cores designed for multiple process nodes at the same time, and so given Rocket Lake’s efficiency at the high frequencies, doesn’t this mean the experiment has failed? I say no, because it teaches Intel a lot in how it designs its silicon’

    The spin also includes the testing: using a really loud high-CFM CPU cooler on the Intel and a different, quieter one on the AMD.

    It's a pile of spin, like the glorified press release stuff trying to turn CEO Pat into some sort of superhero. That stuff sounds like it was written for investors.
  • FirstStrike - Wednesday, March 31, 2021 - link

    Ian, you are missing the SPECint and SPECfp suites.
  • Orkiton - Wednesday, March 31, 2021 - link

    I'm neither an Intel nor an AMD "fanboy", though I have some sympathy for AMD due to unfair, anti-competitive Intel practices in the past, and AMD's merit in emerging from its ashes. That said, best wishes to both in the name of progress, innovation and better value for us, the consumers.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    The rated TDP is 125 W, although we saw 160 W during a regular load, 225 W peaks with an AVX2 rendering load, and 292 W peak power with an AVX-512 compute load.

    It should say ‘Intel’s claimed TDP’ rather than ‘rated’. The latter implies an independent rating standard/body.

    If both processors were found at these prices, then the comparison is a good one – the Ryzen 7 5800X in our testing scored +8% in CPU tests and +1% in gaming tests (1080p Max). The Ryzen is very much the more power-efficient processor, however the Intel has integrated graphics (an argument that disappears with KF at $374).

    Again, no specifics about the power consumption difference.

    On high-end gaming both processor performed the same, the AMD processor was ahead an average of 8% on CPU workloads, and the AMD processor came across as a lot more efficient and easy to cool, while the Intel processor scored a big lead in AVX-512 workloads.

    Again, no specifics about the power consumption difference.

    AGAIN, testing AMD with a weaker cooler, even though the CPU will go faster with the loud fast cooling you’re using on Intel.

    Make it APPLES to APPLES.
  • Makste - Tuesday, April 6, 2021 - link

    I again have to agree with you on this. Especially with the cooler scenario, it is not easy to spot the detail, but you have managed to bring it to the surface. Rocket Lake is not a good upgrade option now that I look at it.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    (Sorry I messed up and forgot quotation marks in the previous post. 1st, 3rd, and 5th paragraphs are quotes from the article.)

    you wrote:
    ‘Rocket Lake on 14nm: The Best of a Bad Situation’

    I fixed it:
    ‘Rocket Lake on 14nm: Intel's Obsolete Node Produces an Inferior CPU’

    ‘Intel is promoting that the new Cypress Cove core offers ‘up to a +19%’ instruction per clock (IPC) generational improvement over the cores used in Comet Lake, which are higher frequency variants of Skylake from 2015.’

    What is the performance per watt? What is the performance per decibel? How do those compare with AMD? Performance includes performance per watt and per decibel, whether Intel likes that or not.

    ‘Designing a mass-production silicon layout requires balancing overall die size with expected yields, expected retail costs, required profit margins, and final product performance. Intel could easily make a 20+ core processor with these Cypress Cove cores, however the die size would be too large to be economical, and perhaps the power consumption when all the cores are loaded would necessitate a severe reduction in frequency to keep the power under control. To that end, Intel finalised its design on eight cores.’

    Translation: Intel wanted to maximize margin by feeding us the ‘overclocked few cores’ design paradigm, the same thing AMD did with Radeon VII. It’s a cynical strategy when one has an inferior design. Just like Radeon VII, these run hot, loud, and underperform. AMD banked on enough people irrationally wanting to buy from ‘team red’ to sell those, while its real focus was on peddling Polaris forever™ + consoles in the GPU space. Plus, AMD sells to miners with designs like that one.

    ‘Intel has stated that in the future it will have cores designed for multiple process nodes at the same time, and so given Rocket Lake’s efficiency at the high frequencies, doesn’t this mean the experiment has failed? I say no, because it teaches Intel a lot in how it designs its silicon’

    This is bad spin. This is not an experimental project. This is product being massed produced to be sold to consumers.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    One thing many are missing, with all the debate about AVX-512, is the AVX-2 performance per watt/decibel problem:

    'The rated TDP is 125 W, although we saw 160 W during a regular load, 225 W peaks with an AVX2 rendering load, and 292 W peak power with an AVX-512 compute load'

    Only 225 watts? How much power does AMD's stuff use with equivalent work completion speed?
  • Hifihedgehog - Thursday, April 1, 2021 - link

    "The spin also includes the testing, using a really loud high-CFM CPU cooler in the Intel and a different quieter one on the AMD."

    Keep whining... You'll eventually tire out.

    https://i.imgur.com/HZVC03T.png

    https://i.imgflip.com/53vqce.jpg
  • Makste - Tuesday, April 6, 2021 - link

    Isn't it too much for you to keep posting the same thing over and over?
  • Oxford Guy - Wednesday, March 31, 2021 - link

    The overclocking support page still doesn’t mention that Intel recently discontinued the overclocking warranty, something that had been available since Sandy Bridge or thereabouts. Why the continued silence on this?

    ‘On the Overclocking Enhancement side of things, this is perhaps where it gets a bit nuanced.’

    How is it an ‘enhancement’ when the chips are already system-melting hot? There isn't much that's nuanced about Intel’s sudden elimination of the overclocking warranty.

    ‘Overall, it’s a performance plus. It makes sense for the users that can also manage the thermals. AMD caught a wind with the feature when it moved to TSMC’s 7nm. I have a feeling that Intel will have to shift to a new manufacturing node to get the best out of ABT’

    It also helps when people use extremely loud very high CFM coolers for their tests. Intel pioneered the giant hidden fridge but deafness-inducing air cooling is another option.

    How much performance will buyers find in the various hearing aids they'll be in the market for? There aren't any good treatments for tinnitus, btw. That's a benefit one gets for life.

    ‘Intel uses one published value for sustained performance, and an unpublished ‘recommended’ value for turbo performance, the latter of which is routinely ignored by motherboard manufacturers.’

    It’s also routinely ignored by Intel since it peddles its deceptive TDP.

    ‘This is showing the full test, and we can see that the higher performance Intel processors do get the job done quicker. However, the AMD Ryzen 7 processor is still the lowest power of them all, and finishes the quickest. By our estimates, the AMD processor is twice as efficient as the Core i9 in this test.’

    Is that with the super-loud very high CFM cooler on the Intel and the smaller weaker Noctua on the AMD? If so, how about a noise comparison? Performance per decibel?

    ‘The cooler we’re using on this test is arguably the best air cooling on the market – a 1.8 kilogram full copper ThermalRight Ultra Extreme, paired with a 170 CFM high static pressure fan from Silverstone.’

    The same publication that kneecapped AMD’s Zen 1 and Zen 2 by refusing to enable XMP for RAM, on the very dubious claim that most enthusiasts don’t enter the BIOS to switch it on. Most people are going to have that big loud cooler? Does Intel bundle it? Does it provide a coupon? Does the manual say you need a cooler from a specific list?
  • BushLin - Wednesday, March 31, 2021 - link

    I won't argue with the rest of your assessment but given these CPUs are essentially factory overclocked close to their limits, the only people who'd benefit from an overclocking warranty are probably a handful of benchmark freaks doing suicide runs on LN2.
  • Oxford Guy - Thursday, April 1, 2021 - link

    That’s why I said the word ‘enhancement’ seems questionable.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    ‘Anyone wanting a new GPU has to actively pay attention to stock levels, or drive to a local store for when a delivery arrives.’

    You forgot the ‘pay the scalper price at retail’ part. MSI, for instance, was the first to raise its prices across the board to Ebay scalper prices and is now threatening to raise them again.

    ‘In a time where we have limited GPUs available, I can very much see users going all out on the CPU/memory side of the equation, perhaps spending a bit extra on the CPU, while they wait for the graphics market to come back into play. After all, who really wants to pay $1300 for an RTX 3070 right now?’

    • That is the worst possible way to deal with planned obsolescence.

    14nm is already obsolete. Now, you’re adding in waiting a very long time to get a GPU, making your already obsolete CPU really obsolete by the time you can get one. If you’re waiting for reasonable GPU prices, you’re looking at, what, more than a year of waiting?

    ‘Intel’s Rocket Lake as a backported processor design has worked’

    No. It’s a failure. The only reasons Intel will be able to sell it is because AMD is production-constrained and because there isn’t enough competition in the x86 space to force AMD to cut the pricing of the 5000 line.

    Intel also cynically hobbled the CPU by starving it of cores to increase profit for itself, banking that people will buy it anyway. It’s the desktop equivalent of Radeon VII. Small die + way too high clock to ‘compensate’ + too-high price = banking on consumer foolishness to sell them (or mining, in the case of AMD). AVX-512 isn’t really going to sell these like mining sold the Radeon VII.

    ‘However, with the GPU market being so terrible, users could jump an extra $100 and get 50% more AMD cores.’

    No mention of power consumption, heat, and noise. Just ‘cores’ and price tag.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    'Intel could easily make a 20+ core processor with these Cypress Cove cores, however the die size would be too large to be economical'

    Citation needed.

    And, economical for Intel or the customer?

    Besides, going from 8 cores to 20+ is using hyperbole to distract from the facts.

    'and perhaps the power consumption when all the cores are loaded would necessitate a severe reduction in frequency to keep the power under control.'

    The few cores + excessive clocks to 'compensate' strategy is a purely cynical one. It always causes inferior performance per watt. It always causes more noise.

    So, Intel is not only trying to feed us its very obsolete 14nm node, it's trying to do it in the most cynical manner it can: by trying to use 8 cores as the equivalent of what it used to peddle exclusively for the desktop market: quads.

    It thinks it can keep its big margins up by segmenting this much, hoping people will be fooled into thinking the bad performance per watt from too-high clocks is just because of 14nm — not because it's cranking too few cores too high to save itself a few bucks.

    Intel could offer more cores and implement, via turbo, a gaming mode that would keep power under control for gaming while maximizing performance. The extra cores would presumably be able to do more work per watt by keeping clocks/voltage more within the optimal range.

    But no... it would rather give people the illusion of a gaming-optimized part ('8 cores ought to be enough for anyone') when it's only optimized for its margin.
  • arashi - Wednesday, March 31, 2021 - link

    Calm down Piednoel, Intel isn't going to hire you as CEO after Pat leaves either way.
  • Qasar - Wednesday, March 31, 2021 - link

    he's just a very angry person for some reason, let him be, maybe he will just get tired of whining, and go somewhere else.
  • Oxford Guy - Thursday, April 1, 2021 - link

    Ad hominem much?
  • AlyxVariant - Wednesday, March 31, 2021 - link

    From 10nm to 14nm...

    Why?... Why, Intel...

    But what about iGPU tests?

    The YouTube channel Sdfx Show proved that at mid/low-range game settings the Iris iGPU can game at a solid 60 FPS.
  • TheinsanegamerN - Wednesday, March 31, 2021 - link

    This isn't an Iris GPU, and it pales in comparison to AMD's Vega.

    "Gaming at a solid 60 FPS": I could load up Shovel Knight on an Atom netbook and game at a "solid 60 FPS". Doesn't mean the netbook is any good. Intel's desktop iGPUs suck: 32 EUs (24 for the i5 10400) vs. the 96 EUs + 64MB cache of Tiger Lake.
  • JimmyZeng - Thursday, April 1, 2021 - link

    Please compare the 5800X to the 11700KF instead of the 11700K; you're AnandTech, don't make such rookie mistakes.
  • Bagheera - Thursday, April 1, 2021 - link

    You know the KF chips still have the iGPU on-die, just disabled, right? One can simply disable the iGPU on the K and it would be the same.
  • Hifihedgehog - Thursday, April 1, 2021 - link

    Shhh... Piednoel will spend a whole evening again writing pages of nonsense here if you egg him on.
  • Bagheera - Saturday, April 10, 2021 - link

    the KF is lower price, so if someone wanted to save some money and don't need the iGPU they can go for that part. but for performance review K and KF are effectively identical. there's nothing wrong with comparing the K against the 5800X.
  • JimmyZeng - Friday, April 2, 2021 - link

    But why? Intel provides KF SKUs at a lower price tag, do not forget that.
  • ozzuneoj86 - Thursday, April 1, 2021 - link

    "Rocket Lake also gets you PCIe 4.0, however users might feel that is a small add-in when AMD has PCIe 4.0, lower power, and better general performance for the same price."

    If a time-traveling tech journalist had told us back in the Bulldozer days that AnandTech would be writing this sentence in 2021 in a nonchalant way (because AMD having better CPUs is the new normal), we wouldn't have believed him.
  • Hrel - Friday, April 2, 2021 - link

    Just in case anyone able to actually effect change reads these comments: I'm not even interested in these, because the computer I built in 2014 has a 14nm processor too... albeit with DDR3 RAM, but come on, DDR4 isn't even much of a real-world difference outside ultra-specific niche scenarios.

    Intel, this is ridiculous, you're going to have been on the SAME NODE for a DECADE HERE!!!!

    For crying out loud, 10nm has been around for longer than Intel's 14nm, this is nuts!
  • James5mith - Saturday, April 3, 2021 - link

    " More and more NAS and routers are coming with one or more 2.5 GbE ports as standard"

    No, they most definitely are not. lol
  • Linustechtips12#6900xt - Monday, April 5, 2021 - link

    gotta say, love the arguments on page 9 lol
  • peevee - Monday, April 5, 2021 - link

    "the latest microcode from Intel should help increase performance and cache latency"

    Do we really want the increase in cache latency? ;) :)
  • 8 Cores is Enough - Wednesday, August 4, 2021 - link

    I just bought the 11900K with a Z590 Gigabyte Aorus Pro AX mobo and a Samsung 980 Pro 500GB SSD. This replaced my 9900K in a Z390 Gigabyte Aorus Master with a 970 Pro 512GB SSD.

    They're both 14nm-node processors with 8c/16t, and both are overclocked: 5GHz all-core for the 9900K, and 5.2GHz all-core with up to 5.5GHz on one core via turbo modes for the 11900K.

    However, the 11900K outperforms the 9900K in every measure. In video encoding, which I do fairly often, it's twice as fast. In fact, the 11900K can convert 3 videos at the same time, each one as fast as my RTX 2070 Super can do 1 video at a time.

    On UserBenchmark.com, my 11900K is the current record holder for fastest 11900K tested. It beats all the 10900Ks, even in the 64-thread server workload metric. It loses to the 5900X and 5950X in this one metric, but clobbers them both in the 1, 2, 4 and 8 core metrics.

    I wish I had a 5900X to test in Wondershare UniConverter. I suspect my 11900K would match it, given the 2X improvement over the 9900K, which was about 1/2 as fast as the 3950X in video conversion.

    I do a lot of video editing as well. Maybe on this workload an AMD 5900x or 5950x would beat the 11900k. It seems plausible so let's presume this and accept Ryzen 9 is most likely still best for video editing.

    But the claim that being stuck on the 14nm node means Intel RKL CPUs perform the same as Haswell, or even close to it, does not make sense to me based on my experience so far going from Coffee Lake Refresh to RKL.

    The Rocket Lake CPUs are like the muscle cars of 1970. They are inefficient beasts that haul buttocks. They exist as a matter of circumstance and we may never see the likes of them again.

    Faster, more efficient CPUs will be built, but the 11th-gen Intel CPUs will be remembered for being the backported abominations they are: thirsty, and fast with the software of 2021, which for the time being still favors single-thread processing.

    If you play Kerbal Space Program then get an 11900k because that game is all about single thread performance and right now the 11900k beats all other CPUs at that.
  • Germanium - Thursday, September 2, 2021 - link

    My experimentation with my Rocket Lake Core i7-11700K on my Asus Z590-A motherboard has shown me that, at least on some samples, AVX-512 can be more efficient and cooler-running than AVX2 at the same clock speed.

    I am running my sample at 4.4GHz for both AVX-512 and AVX2. When running HandBrake, there is nearly a 10-watt saving when running AVX-512 as opposed to AVX2.

    Before anyone says HandBrake does not use AVX-512: that is true out of the box, but there is a settings script I found online to activate AVX-512 in HandBrake, and it does work. It must be entered manually; no copy & paste available.

    With stock voltage settings at 4.2GHz using AVX2, it was drawing over 200 watts. With my settings, I am able to run AVX-512 at 4.4GHz with a peak wattage in HandBrake of 185 watts. That was the absolute peak; it mostly ran between 170 and 180 watts. AVX2 draws about 10 watts more for slightly less performance at the same clock speed.
  • Germanium - Thursday, September 2, 2021 - link

    Forgot to mention that in order to make AVX-512 this efficient, one must set the AVX Guard Band voltage offset at or near 0 to bring the power to acceptable levels. Both the AVX-512 and AVX2 offsets must be lowered. If AVX2 is not lowered by at least the same amount, the AVX-512 setting will have little or no effect.
  • chane - Thursday, January 13, 2022 - link

    I hope my post is considered on topic

    Scenario 1: Without a discrete 1080p-grade graphics card, using on-chip graphics: given the same core count (but below 10 cores) and the same base and turbo frequencies, and loaded with the same Cinebench and/or HandBrake test loads, would a Rocket Lake Xeon W-series processor run hotter, cooler, or about the same as a Rocket Lake Core i-family processor with the same TDP spec?

    Scenario 2: As above but with 1080p grade discrete graphics card.

    Note: The Xeon processor pc will be using 16GB of ECC memory, however much that may impact heat and fan noise.

    Please advise.
    Thanks.
