This is more or less what I thought we were going to see when Intel was claiming a 2.09x cost efficiency improvement in this class of processors vs. the prior generation.
18 cores at $979 feels like the best product in that stack. An $800 14C versus the $750 16C Ryzen seems like a difficult matchup, maybe held up by a slight frequency advantage, but a 10C at $600 doesn't seem like it will hold up very well against a $500 12C Ryzen 9. Threadripper is going to give AMD some seriously impressive pricing power.
Sure, Intel has a single-thread advantage, but I've been enjoying my $450 16-core Threadripper 1950X since last February, when I accidentally won an auction for the board at ~$190.
That single-thread advantage may play out differently here; traditionally Intel HEDT chips haven't been nearly as great in practice as their plain old desktop K processors, but there are features this time around (e.g. Turbo Boost 3.0) that suggest it might be a little better. And we have very little info on what (Zen 2) Threadripper will be like; at least not in the kind of detail you'd need to really predict things like single-core perf.
As far as I know, Intel doesn't have 512-bit AVX registers, so it has to do two passes for 512-bit work just like AMD does. The difference is that AMD processors don't need an AVX offset like Intel processors do.
AMD simply does not have any part of AVX-512 implemented. It can't do it at this stage. Intel CPUs implement different parts of AVX-512 in different market segments. There are AVX-512 instruction sets that make no sense in a consumer CPU. Also, the hardware implementation of AVX-512 requires a lot of die space, so it's not even feasible to have all instructions on all chips.
But the takeaway here is that no, AMD does not do AVX-512 at all and Intel does it selectively.
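Since which AVX-512 subsets are present varies by model, it can be useful to check what a given chip actually reports. A minimal sketch (Linux-specific, parsing /proc/cpuinfo; the flag names follow the kernel's convention, e.g. avx512f, avx512vl):

```python
def isa_flags(cpuinfo_text: str) -> set:
    """Extract the ISA feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def avx512_subsets(flags: set) -> list:
    """List whichever AVX-512 subsets the CPU reports (avx512f, avx512vl, ...)."""
    return sorted(f for f in flags if f.startswith("avx512"))

# On an actual Linux box:
# with open("/proc/cpuinfo") as f:
#     print(avx512_subsets(isa_flags(f.read())))
```

On a Skylake-X / Core X part you'd typically see several subsets (avx512f, avx512cd, avx512bw, avx512dq, avx512vl); on any current AMD chip the list comes back empty.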
In their initial implementation of AVX-512, Intel had two tiers of processors: two AVX-512 units for the higher end and a single AVX-512 unit for the lower end. In both cases one of the units comes from fusing the two AVX2 units, and depending on the model there may or may not be an additional dedicated AVX-512 unit.
So for the lower-end ones with a single AVX-512 unit, the situation is identical to how AMD used to do AVX2 with both SSE units. I assume that's what the OP had in mind. However, as far as I can tell, all these Core X processors *do* have two AVX-512 units, so the only disadvantages will be the lower clocks when they are in use (and the lack of software that uses them).
In any case, I think this is something AMD can easily compensate for with higher core count. The rest is up to price.
Can any of you guys show an advantage for AVX-512 on linear algebra, the simplest and most widely used application? The only cited example showing a boost with AVX-512 is Ian Cutress' own 3D particle movement test, showing a doubtful 300%. Our PIC code's boost was just 10-20%. Take this simple code for a dense matrix solve AX=B with Intel's MKL library, which now supports AVX-512, and run it on any Intel and any AMD multicore:
Program LinearAlgebra
  implicit none
  integer :: i, j, neq, nrhs = 1, lda, ldb, info
  real*8, allocatable :: A(:,:), b(:)
  integer, allocatable :: piv(:)
  integer :: count_0, count_1, count_rate, count_max

  do neq = 1000, 20000, 1000
    lda = neq; ldb = neq
    allocate(A(neq,neq), b(neq), piv(neq))
    call random_number(A)
    call random_number(b)
    call system_clock(count_0, count_rate, count_max)
    CALL dgesv(neq, nrhs, A, lda, piv, b, ldb, info)
    call system_clock(count_1, count_rate, count_max)
    write (*, '(1x,A,i6,A,2x,F8.3,A)') 'nEqu = ', neq, '  ', &
      dble(count_1 - count_0) / count_rate, ' s'
    deallocate(A, b, piv)
  end do
end program
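For anyone without a Fortran toolchain handy, here is a rough Python equivalent of the benchmark above. numpy.linalg.solve dispatches to the same LAPACK *gesv routine; whether the backend is MKL, and whether that build uses AVX-512, depends on how your numpy was compiled, so results are only comparable within one machine/build:

```python
import time
import numpy as np

def time_solve(neq: int) -> float:
    """Time one dense solve of A x = b for an neq x neq random system."""
    rng = np.random.default_rng(0)
    a = rng.random((neq, neq))
    b = rng.random(neq)
    t0 = time.perf_counter()
    x = np.linalg.solve(a, b)  # LAPACK gesv under the hood
    elapsed = time.perf_counter() - t0
    assert np.allclose(a @ x, b)  # sanity-check the solution
    return elapsed

for neq in (1000, 2000, 4000):
    print(f"nEqu = {neq:6d}   {time_solve(neq):8.3f} s")
```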
No, it's not the same as what AMD did. Even the lower-end units that use the two AVX2 units gang them together to do the 512 bits at once, not half on one clock and the second half on the next. Zen 1 had only 128-bit wide units and ran work through them twice to get 256-bit AVX2 done.
Far too many people overestimate how often AVX-512 is used. Outside of benchmarks and very specific use cases, the average user isn't going to use AVX-512 much, if at all. If AMD considered it to be an important factor, they would have implemented it. The fact that Zen 2 is able to beat Intel on an IPC basis in the majority of applications shows that they are correct in that assessment. If you exclude overclocking, they are also pretty close clock-for-clock on the higher-end parts. I expect the 3950X is going to eat the Core i9-10980XE alive. I think Intel is aware of this as well.
New instruction sets naturally suffer from a chicken-and-egg problem. If no CPU supports it, no one writes code for it, and if there is no code, other CPUs won't adopt it.
The same was said for AVX, AVX2, and probably also some of the SSEs. Yet today AVX2 is actually quite important, as it already gets wide use in video decoding from e.g. YouTube. I fully expect the same to happen with AVX-512 eventually; it'll just take a generation or two of CPUs.
I wouldn't say that AMD is "better" in their decision. Their full AVX/AVX2 support honestly came a bit too late, only appearing earlier this year.
It makes sense that Intel does it selectively. Given that AVX 256 causes a huge spike in power draw and heat output I can only imagine what AVX 512 does.
This was only true for I believe one CPU in the original SKL-X lineup, the higher-up models had dedicated AVX512 ports.
The real problem with AVX512 is the downclock. AVX512 can still be incredibly strong, if the workload is a lot of pure math, and the AVX512 units can get busy for a prolonged time. But if you have light math mixed with other stuff, then the downclock can cost too much performance.
We'll have to see if they managed to tune this behavior a bit in this generation.
Just to help me understand: you're saying that a single AVX-512 process brings down the other 17 cores, not just the "control core" for that AVX-512 workload?
That could be a bit of a problem with mixed loads on these high-turbo CPUs, probably much less so for HPC workloads that instruction set was originally aimed at.
And potentially quite an acceptability issue for Ice Lake notebook chips, where an AVX-512-optimized inference workload could cause a "freeze" at the interactive end.
AVX takes a LOT of power, even running on just one core, and much of how the Intel chips boost is due to power budget. So, yes, a single core running AVX can cause the other cores to quit boosting and drop to the default clock, depending on the overall budget, and definitely cause them to quit boosting as high. Multiple cores running AVX workloads would increase the likelihood of the other cores quitting their boost clocks as well.
Only the core running AVX512 load will clock down, the others are unaffected - as long as the power budget is not affected.
But even with only one core slowing down, it takes a couple of milliseconds to clock back up once it's done with AVX-512, and milliseconds are an eternity in the CPU world.
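The tradeoff described upthread can be put into a back-of-envelope model. Assume (illustrative numbers, not measurements) AVX2 code runs at 4.3 GHz processing 4 doubles per op, while AVX-512 code drops the core to 3.6 GHz but processes 8 doubles per op, and any scalar work in the same region also runs at the reduced clock:

```python
def runtime(scalar_ops, vector_ops, freq_ghz, doubles_per_op):
    """Crude time model: scalar ops cost one clock each; vector work is
    divided by the SIMD width. Everything runs at the same (offset) clock."""
    return scalar_ops / freq_ghz + vector_ops / (freq_ghz * doubles_per_op)

# Pure math: AVX-512 wins despite the downclock.
heavy_avx2   = runtime(0, 1000, 4.3, 4)
heavy_avx512 = runtime(0, 1000, 3.6, 8)

# Light math mixed with other stuff: the downclock on the scalar portion
# costs more than the wider vectors save.
mixed_avx2   = runtime(1000, 100, 4.3, 4)
mixed_avx512 = runtime(1000, 100, 3.6, 8)
```

With these (made-up) clocks, the pure-math case is ~40% faster under AVX-512, while the mixed case is ~18% slower, which is exactly the "light math mixed with other stuff" problem described above.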
"Only the core running AVX512 load will clock down, the others are unaffected - as long as the power budget is not affected."
This simply isn't true at all. Take the i9-9900K: it has a turbo of 5 GHz on two cores, but only if the other cores aren't used. As soon as the other cores are used, that turbo drops.
On the AVX offset, Zen 2 isn't immune from the higher power usage with AVX code. In my testing of 3600 and 3700X at stock, both hit the PPT limit of 88W and power throttle so you get lower clocks than if running non-AVX code. Basically AMD have a better limiter than using offset, but it is still a limiter.
You can't compare Zen 2 to Intel chips in this regard. An Intel chip can easily run close to 200 watts on AVX. They have completely different implementations. Also, an AVX heavy workload on my 1950X throttles the 1950X below the 3400 MHz base clock (3349 MHz, and to think everyone is complaining about the Zen2 clocks being slightly off...lol).
Not only is the register width doubled to 512 bits, there are more registers, including the new mask registers. Depending on the algorithm and compiler, going beyond a 2x performance increase is indeed possible if the previous bottleneck in the code falls away. However, such gains are certainly not common.
In comparison to Ryzen 3000, there are of course also more memory channels and PCIe lanes, and in comparison to low-end Threadrippers there is likely going to be a big ST advantage.
Personally, that's why I liked Intel HEDT CPUs in the past (i.e. the first SKL-X): they offered strong MT and strong ST in one package, whereas with most other choices you had to pick, either getting MT (from TR or similar) or ST from Intel consumer CPUs.
On an actual workstation (where I favor the PCIe lanes and memory throughput), in contrast to servers etc, both ST and MT are still very important, as different applications require different things.
We'll have to see how the new TR is positioned in ST load as well.
Threadripper has more PCIe lanes (44 for Intel vs 64 for TR). TR for Zen/Zen+ had quad-channel memory, which is the same as the Intel parts. You have things backwards. Zen 2 is rumored to use a new socket for at least 2 of its parts, if not all of them. So expect the above to be the minimum spec for Zen 2 Threadrippers. We could very well see 8-channel memory and 128 lanes of PCIe similar to EPYC for the Zen 2 refresh (or we could end up with the same quad-channel, 64-lane config).
nevcairiel: "In comparison to Ryzen 3000, there is of course also more memory channel and PCIe lanes, and in comparison to low-end Thread Rippers likely going to be a big ST advantage." Too bad: you need to compare these CPUs to Threadripper, NOT Ryzen 3000.
I compare to what I wish to compare to. What would you like to compare a 12-core or 16-core Ryzen to then?
Also, rumors say that TR will start at 24 cores, possibly to not conflict with the 16-core Ryzen. A 24-core TR and 10-18 core Intel HEDT are different market segments, honestly.
And that would be where you are wrong. Comparing HEDT to mainstream is like comparing a Porsche 911 Turbo to a Boxster. "What would you like to compare a 12-core or 16-core Ryzen to then?" Oh, I don't know... the 9900K/X maybe??
They are not in different markets... they are both HEDT platforms.
The other issue is that when you have twice as many CPU cores in your processor, AVX256 at full speed can perform as well as AVX512 on the processor with fewer cores. Intel recently added AVX256 to a video encoding library that was previously smashing it with AVX512 on Intel, and ThreadRipper suddenly became more than competitive (seen at Phoronix).
Which means again that a developer might as well spend their time optimising for AVX256 which is far more common than AVX512.
Intel chips automatically downclock on an AVX-512 heavy workload, hence the word "offset". Check your facts. If you overclock, you almost certainly have set an AVX-512 offset. If you didn't, your overclock would not be stable for AVX 512 workloads.
Still only 18 cores though. The lowest tier Threadripper 3000 is 24 cores. And that's rumoured to be under $900 so that's 1/3rd more cores for $100 more. There's no way, even if you go fully thermonuclear meltdown OC on the 18 core part, it'll be better value in terms of performance per dollar.
As per Anandtech's own article. Now if they mentioned it was a rumour, then oops, but this looks a lot like an official AMD slide to me, and notice it says "premiering with 24 cores". Not saying there's gonna be more or less, but this slide clearly shows 24 cores is a definite.
You do realize that the 28-core part is actually two 14-core CPUs stapled together in the same package, right? Because of the thermal load you are actually better off getting a dual-CPU motherboard and buying two separate 14-core Intel parts.
Stapling the two CPUs together results in huge thermal issues, and the dual-die part is actually worse in almost every way than getting that two-CPU motherboard and two separate CPUs. And IIRC, buying two separate CPUs and a two-processor motherboard was even cheaper.
Hence the price decrease. When your competitor starts at 24 cores, has up to a 10% higher IPC, and really close clock speeds, you have to price your chip to sell.
SaturnusDK: 16 cores for $750 and 24 cores for under $900? Does not compute, bro. AMD charges a $250 premium for 4 extra cores on top of the 12 cores in the $500 3900X. A 24-core will be $1200 +/-.
AMD's 16-core part is not a Threadripper part. People forget this. I suspect entry-level Threadripper parts will creep up in price given their performance advantage, but we don't yet know with any certainty what the stack will look like. The 8-core Threadripper variant IIRC only carried a $100 price premium over its Ryzen counterpart, however. That is worth noting.
The 10-core Intel still has 48 CPU PCIe lanes + 24 on the chipset. AVX-512. Real turbo boost [let's be honest here, what AMD sells as single-core turbo is nothing but a scam; when it clocks up for one second and drops down to its all-core turbo it has no value, while Intel's turbo performs properly, no such issues]. Real overclocking. And, no less important for me, excellent emulator performance. Now add to it that the 3900X is sold at a premium by scalpers and is always out of stock, and the same thing is going to happen with the 3950X. In this case Intel's price vs AMD's street price looks insanely better.
Threadripper has 64 PCIE lanes right from the CPU, not counting anything from the chipset. You have apparent bias against AMD so I'm tempted not to post, but I'll bite: What you say is false, single and dual core workloads have seen 4.6-4.7 GHz on many 3800X chips for example, some 3800X chips hit 4.7 GHz on all cores (as evidenced by GB4 JSON data.) The 3900X has some issues that needed to be worked out, but the ABBA AGESA release fixed many of those issues, and many users are hitting 4.7 GHz.
Also, clock speed doesn't really matter as AMD trounces Intel in the majority of workloads anyway...hence the reason for the sudden price decrease above.
Mr.Vegas: and let's be honest here, Intel's TDP rating is nothing but a scam. When a CPU is listed as using X watts but under load can use 50-150 watts MORE, what do you call that? "Intels Turbo performs properly, no such issues": providing the cooling is sufficient; maybe that's why Intel doesn't include a cooler with its higher-end parts. "Real Overclocking": again, IF cooling is sufficient. "3900x is sold for premium by scalpers and always out of stock": must be just where you are, as I have seen them in stock regularly at the local comp stores I go to. "In this case Intels price vs AMD street price looks insanely better": not really, as you seem to forget that Intel's CPUs don't come with a cooler, so add $50+ to the price of Intel's CPUs.
16 core consumer zen 2 presumably has the frequency and IPC advantage with equivalent cooling, except for turbo 3.0 vs single core boost. 3950x has higher single core compared to Intel HEDT though.
And the winner is... consumers. Isn't competition a great thing? Still overpriced, though, and 165W seems low, unless that's using Intel's latest fake power ratings.
Also "16-core Ryzen 9 3950X at $749, which will offer 16 PCIe 4.0 lanes" Is it losing 8 lanes? I thought they all had 24. 16 for GPU, 4 for m.2 storage, and 4 for the chip-set bridge.
My bad, it's 16 for PCIe slots. 4 for storage, 4 for chipset. I pretty much always just count the PCIe lanes for slots (because that's what it used to mean)
Intel's TDP rating is at base clock running a complex workload. Many people still don't get it, apparently, and are surprised when they see the power draw at boost clock. Sure, it is less straightforward, but it is not lying. About price: after we see AMD's HEDT offering, we will judge. 16 PCIe lanes was always the case for non-HEDT desktop Ryzen chips. You got
16 PCIe lanes from the CPU directly connected to the GPU and a 4-lane PCIe link connected to the X570 chipset, which adds an additional 8 platform PCIe lanes. You just fell for another trick of AMD marketing. I swear they're getting worse than Intel in that regard.
The Zen 2 Ryzen CPUs (3000 series, but not APUs) have 24 PCI-E 4.0 links. 16 for the GPU, 4 for the chipset link and 4 that are most often used for NVMe drives. X570 has 16x PCI-E, you're thinking of the last-gen Ryzen CPUs.
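For the lane-count arguments in this thread, the per-lane rates are worth keeping in mind: PCIe 3.0 signals at 8 GT/s and PCIe 4.0 at 16 GT/s, both with 128b/130b encoding, so a 4.0 lane carries twice the bandwidth of a 3.0 lane. A quick calculation (lane counts as discussed above):

```python
def lane_gbps(gt_per_s: float) -> float:
    """Usable GB/s per lane, one direction: GT/s * 128/130 encoding / 8 bits per byte."""
    return gt_per_s * (128 / 130) / 8

# Aggregate CPU-lane bandwidth, one direction:
ryzen_3000 = 24 * lane_gbps(16.0)  # 24 CPU lanes of PCIe 4.0 -> ~47 GB/s
cascade_x  = 48 * lane_gbps(8.0)   # 48 CPU lanes of PCIe 3.0 -> ~47 GB/s
```

So a mainstream Zen 2 chip roughly matches the new HEDT Intel parts in aggregate CPU-lane bandwidth (though obviously not in device count), while Threadripper's 64 lanes of PCIe 4.0 are in another league entirely.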
LOL If history serves, "competition" may be short-lived as Intel will undercut (see "predatory pricing") its competitor and, soon enough, it will enjoy milking its customers again.
From wikipedia on "predatory pricing": "Predatory pricing is considered anti-competitive in many jurisdictions and is illegal under some competition laws. However, it can be difficult to prove that prices dropped because of deliberate predatory pricing, rather than legitimate price competition. In any case, competitors may be driven out of the market before the case is ever heard."
What are you talking about? Price per transistor has dropped a million-fold in the last 50 years. Have you heard of innovation?
Also, Intel did no such thing as drive AMD out with predatory pricing. It used its volume and node advantage to drive out AMD. It has no such advantage over TSMC.
For kids reading this in the future for research: Intel used all kinds of scammy tactics, including bribing Dell and the like to keep stuffing Intel's CPUs in at a major discount just so they could drop AMD CPUs.
You are conveniently ignoring plenty of idiocies on AMD's side, including starving itself of cash with the purchase of ATI… (and betting on pure nonsense called Bulldozer).
Also, Intel has always had massive margins thanks to vertical integration. (People forget that for some years during the Pentium 4 era, AMD was more expensive than Intel. Black Edition says hi.)
Everything you're referencing in your first sentence happened after Intel successfully staved off competition with its predatory pricing schemes.
You couldn't get a Dell system with an AMD chip in it throughout the Athlon 64 / OG Opteron era despite the obvious superiority of the AMD chips, and it was entirely because Intel bribed Dell (and others) to buy from them alone. AMD's subsequent poor decision making has nothing to do with that.
Enthusiasts of that era are also aware that AMD charged more for their chips when they had a definitive performance lead - that isn't news. They'll probably do it again with Zen 3 if Intel haven't caught up by then.
"You are conveniently ignoring plenty of idiocies on AMD side": Intel has had its own, and as Spunjji mentioned, it cost Intel a billion or so in a settlement to AMD; seems you are conveniently ignoring that. "Also Intel had always massive margins thanks to vertical integration": BS, it's also because they were overcharging for their CPUs; look at the prices for the 9900 series vs the 109xx series. "People forget that for some years during Pentium 4 era AMD was more expensive then Intel": yes, because the performance was there, just like Intel has been doing for the last 5 or so years.
Intel was found guilty in court and made to pay AMD for their illegal practices. You can't argue this.
It was predatory pricing - Intel would offer incentives to not offer any AMD alternatives, and as Intel was the market leader with name recognition, even companies as big as Dell were unable to do anything about it - if they dared to offer AMD systems, they would lose the Intel kick-backs and no longer be able to compete on price for Intel systems.
At the time AMD had the better tech, and was slowly clawing market share, but Intel was a household name and no major company could afford to bet their entire business on AMD and give up completely on their Intel lineup.
It's not really competition though, because competition is also supposed to bring forth innovation, which is not really happening either. Different prices, boosted cores, and AMD chips that were planned years ago.
Indeed, only consumers win here. Regarding PCIe: people forget how to count. It's also PCIe 4.0 vs PCIe 3.0, which means that the regular Ryzen chips have more PCIe bandwidth per lane than HEDT Intel chips. An embarrassment for Intel.
This is a solid lineup. X299 owners with A) lower core count CPUs, B) a need for higher clocks, or C) a need for more RAM may find this upgrade path compelling.
Personally, as a 7960x owner I’m less tempted by this upgrade path and will wait to see what Threadripper 3000 offers. If AMD will sell me 64 cores with decent all-core clocks at a price that doesn’t require a second mortgage, that’s going to be hard to beat.
Well, about the RAM situation: the new CPUs do not offer more channels or slots. It's just that Intel has removed the artificial limitation and can now use 32GB DIMMs. Compare this to the Threadrippers, which supported 1TB from the very beginning. Sure, we'll never have 128GB UDIMM DDR4 modules with which to implement it, but at least we are not butt fu...d by the manufacturer for no apparent reason!
BTW, I'm pretty sure that the initial TR 3xxx lineup will not include a 64-core chip. There's no need for it. A 24-core TR3 will wipe the floor with anything Intel has to offer, including their 28-core joke of a room heater.
I haven't seen a Threadripper motherboard claim > 256GB RAM support yet. Are you sure about this?
I disagree about the 28-core i9 - AVX workloads still run best on Intel, and clockspeed still matters. That being said, a 64-core Threadripper would absolutely beat this chip and hence why I am interested. I agree it is unlikely that a 64-core chip will join the ranks of Threadripper 3000 chips at launch, but I hope it will arrive eventually at a price I can justify.
Quote: "AMD has officially stated that the Threadripper CPUs can support up to 1 TB of DRAM, although on close inspection it requires 128GB UDIMMs, which max out at 16GB currently."
It is not the first time this has happened. About a decade ago there were some strange but JEDEC-compliant modules (I think 8GB DDR2 DIMMs, but I may be wrong about that) that would work on AMD processors without any adjustment but not on Intel. Apparently, Intel used to, and probably still does, limit the supported sizes below the actual technical limitations. If you've learned nothing about Intel in the last 5-10 years, that's your problem.
On the other issue - 24 core TR 3000 will offer competitive performance with the 28 core Xeon W at a quarter of the price. A 32 core part will send it into oblivion. I'm pretty sure about that.
There was a bug in the DDR3 memory controllers of Sandy Bridge/Ivy Bridge/Haswell that didn't permit them to work with the largest-capacity unregistered DIMMs. Broadwell, the last major DDR3-only chip from Intel, did fix this issue.
Most of the time, the memory controllers are designed against the JEDEC spec but are only validated with what is currently on the market. Thus going beyond the official max capacity generally works but is not officially supported. The exception to this has been the last few "generations" from Intel, especially on server, where memory capacity limits have been used for market segmentation.
Now that they're almost in the same price category I'd like to see the 10920x vs the 3900x clock for clock (and stock for stock). Can Intel justify the extra $200?
What is "stock" these days? "Stock" behavior on Intel depends a lot on the motherboard, as vendors can define boosting behavior to match their VRM capabilities, which can have a huge impact.
Also, the Intel HEDT lineup of course has other advantages compared to a 3900X, which may not matter to everyone: more PCIe lanes, more memory channels, AVX-512.
EliteRetard: "Now that they're almost in the same price category I'd like to see the 10920x vs the 3900x clock for clock." But you are now comparing a HEDT CPU against a mainstream CPU; wait till Threadripper 3 comes out, and then compare those two CPUs.
Unless you are actually bottle-necked by sequential transfer speeds, and not IOPS or random access speed (which is far more common), then this won't matter for the next 3 years.
If you are in that select group, there are options for you as well. Not every piece of hardware has to fit everyone.
Shhh, let the children wave their sequential speed e-peens around while they drink from the torrent of marketing piss that the SSD manufacturers continue to spray. I bet these same kids also unironically cry about Intel's TDP marketing being misleading.
Shh, some arrogant aholes among us like spending more for less because Intel told them it's good enough; because you can't fully benefit now, you apparently don't need to extend the life of your investment in the future.
Then get a Threadripper board today. A month ago you could buy a mainboard plus a 12-core TR 1920X for $500 combined! And you have 3 NVMe slots. If that's not enough, you can plug M.2 NVMe drives into regular PCIe slots with an adaptor. There are also relatively cheap 4x adaptors that work on the TR because it supports x4 bifurcation. You can have 7 NVMe drives and still have one x16 and two x8 slots available.
My thoughts exactly. Cutting prices by 50% is great, but it also means they've been overcharging their customers ridiculous amounts for the past few years, which is not cool at all. I'd even call it immoral. Yeah, sure, when you have no competition, you can set the price wherever you want it to be. But you could still rein yourself in a little in how much you charge your loyal customers. Now that they've cut prices by 50%, it means they've had zero interest in their customers and 100% interest in filling their own coffers. Also, the value of used Intel HEDT CPUs just halved due to this new pricing announcement. People trying to sell their 1-2 year old $2,000 CPUs might be a bit miffed about this.
The rumor is that the new Cascade HEDT parts cost more to make than Intel is charging with this 50% price cut and Intel is selling these at a loss to try to slow AMD down.
The chiplet design of TR3 and Ryzen 3000 actually makes it cheaper for AMD to make CPUs than Intel can with their monolithic design. The result is that AMD can handle lower pricing than Intel and still make money.
rahvin: "The rumor is that the new Cascade HEDT parts cost more to make than Intel is charging with this 50% price cut and Intel is selling these at a loss to try to slow AMD down." Um, yeah, no. Intel's investors and shareholders would not let that happen, as it would mean they would also be losing money.
Nope, 14 nm is a very mature process for Intel at this stage.
The only sort of 'loss' for Intel is that a wafer of 18-core chips could have been a wafer of 28-core parts that sell for higher margins. Intel does have to be picky about what goes through their fabs, as they are at capacity with far too many products occupying the 14 nm node. The 10 nm delays have really, really hurt them.
"Nope, 14 nm is a very mature process for Intel at this stage." Doesn't matter; there is no way Intel would resort to losing money, like rahvin says, to "slow AMD down". As was said, shareholders and investors wouldn't allow it.
Socket 2066 is a mess, as this is yet *another* PCIe lane config that motherboard makers need to account for. The only way to leverage the extra lanes is gonna be via a new motherboard (with the possible exception of some X299 boards that have launched in the last few months). The new price points are nice, but something Intel should have done a year ago (ditto for the extra PCIe lanes and memory capacity support).
The sad thing is that I would expect performance per clock improvements when the hardware security mitigations come into play.
Looking good! Who would have thought another Intel 14nm respin would be exciting! Pity they aren't going to re-release some cheaper 2066 Xeon-W chips now that Xeon-W has moved to another socket. I've got a nice Xeon-W machine that would have loved a ~$1000 drop-in 18-core upgrade :P
I do not understand one thing: the whining about turbo limits out of the factory (aka Intel recommended) vs motherboard enhancements, and so much noise about the damn TDP, as if K-series or X-series unlocked processors are going to run like puny laptops, which have power limits hardcoded in EC and BIOS, like the Apple Trashbook Pro or thin-and-light cTDP BGA garbage.
Intel got away with it, but the user is getting MAX perf OOTB, or they can customize it. Why sandbag it with bullshit limits and whining?
If a K or X part is being run in an ITX mini box the size of a lunch box with crippled cooling, that user should really be educated, or isn't worthy of owning such processors. I have UV and turbo OC with raised current limits, which allows me to run higher turbo clocks, and I did that manually, which got me a 700 CBR15 score vs 600 OOTB.
It's not about "bullshit limits and whining" - it's about Intel giving us honest estimates of power consumption under load. I think most of us are fine with them drawing more power to hit the top speeds on many cores, but it'd be nice if they gave a useful estimate of what that power draw would be before purchase, instead of them hiding behind the meaningless "165W TDP".
Also, funny point - the problem with the "Trashbook Pro" isn't a hardcoded power limit, it's the lack of one. When the 6 core i9 CPU came along and blew past its spec, it triggered protections elsewhere in power delivery (and cooling). Blaming Apple for Intel's CPU operating way outside spec is a bit of a reach.
Bear in mind that I still think Apple screwed the pooch, both by making their devices unnecessarily thin and by failing to catch that issue in qualification.
TDPs should be the maximum output, to make heatsink selection easier.
Putting the TDP of the chip as some middle of the road TDP and concealing the max TDP only makes it harder for consumers for no gain other than marketing. It's stupid and foolish and Intel should be called out for doing it.
What Intel is doing is what happens when Marketing decides on what technical specifications to reveal. Your average consumer isn't concerned at all if a chip has a higher TDP, so this is only a lie targeting enthusiasts who actually care to know the real TDP.
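For reference, Intel's spec behavior is roughly: short bursts may draw up to a higher limit (PL2), and once a moving average of power catches up to PL1 (the advertised TDP) over a time window (Tau), the chip is clamped back down. A simplified sketch of that scheme, with illustrative numbers only (real firmware uses a more elaborate weighted average, and boards override all of these values):

```python
def power_trace(demand, pl1, pl2, tau, dt=0.1):
    """Simplified PL1/PL2/Tau model: instantaneous draw is capped at PL2, and
    once the exponentially-weighted average power exceeds PL1 (the rated TDP),
    draw is clamped down to PL1."""
    avg = 0.0
    allowed = []
    for p in demand:
        p = min(p, pl2)                # burst cap
        if avg > pl1:
            p = min(p, pl1)            # budget spent: back to the TDP figure
        avg += (p - avg) * (dt / tau)  # moving average over time constant tau
        allowed.append(p)
    return allowed

# Hypothetical 165 W TDP part: 250 W burst limit, 28 s window, 100 s of full load.
trace = power_trace([300.0] * 1000, pl1=165.0, pl2=250.0, tau=28.0)
# Early samples run at the 250 W burst limit; sustained load settles at 165 W.
```

This is exactly why the box figure and the wall-socket figure disagree: both are "correct", just measured in different regimes.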
> Bear in mind that I still think Apple screwed the pooch, both by making their devices unnecessarily thin and by failing to catch that issue in qualification.
You assume Apple does any sort of testing, as opposed to throwing their overpriced s**t over the wall and letting their hordes of drones drown out any complaints with "IT JUST WORKS!!!!!"
I hope their LGA3647 socket W-3175X sees the same price cuts and gets the damn Dominus Extreme to a wider audience. But it's a shame that X299 saw so many processors while Z390 is being discarded again; sad. I wanted to build a PC this fall, wanted Win7 and top-class perf with old and new games. The Z390 Dark was my build with a 2070 Super, but now with Z490 on the horizon Intel doesn't inspire confidence. Though these processors with an X299 Dark at low cost make it superb, given that this time the Mesh at least keeps up with the ring-bus 9700K.
Plenty of time to wait for 3900X and 3950X in Stock and price drop for X570 Aorus Extreme vs X299 or Z390 or Z490.. ;)
Not anymore. Samsung 32GB DDR4-2666 DIMMs (M378A4G43MB1-CTD) have been available via online retailers for a while now for under $150 USD. I've been running 4 @ 3000 MHz CL16 (with minimal effort) in a Z390 motherboard with an 8700K for a few weeks now. There's also an ECC, but still UDIMM, version available for about an extra $100 USD.
Looks like in one year they changed the max supported RAM from 64GB to 128GB, and after AMD matched that, they changed it to 256GB. It takes them one second to change some artificial restriction in the spec.
Since a 64-bit OS can address vastly more than that, it all looks like they want to play this dirty game with consumers for a long time.
Well it's been pretty standard to only validate up to launch, so if no 8x32GB UDIMM kit was available then it's not official but it'll probably work - I saw someone running 4x32GB on a B350 mobo. The standard is supposed to be DDR4, all memory modules should work except for when they don't...
Agreed. Only 'supporting' combinations available and validated at Launch has been standard for decades - which makes sense. Stating support for artificial/unproven future combinations could easily result in lots of confusion and issues.
I strongly disagree. That's what standards are for. If the board does not support a JEDEC-compliant module then the board is not JEDEC-compliant. You are so used to being screwed that you no longer notice.
BTW, since I wrote several posts in this thread and I may come across as AMD fanboy, I'm pretty mad at them for disabling PCIe 4.0 on non-X570 boards. I would assume it works on the first x16 on many mainboards. Let the customers decide if it works for them - it was never promised to them anyway.
kobblestown, but there are also RAM sticks that are not JEDEC spec, but work just fine. "I'm pretty mad at them for disabling PCIe 4.0 on non-X570 boards. I would assume it works on the first x16 on many mainboards." and what would you have done if the board you have isn't capable of running at PCIe 4 speeds??? my guess, you would be screaming at AMD, and calling them liars and such, which is why they are not allowing it on non x570 boards, as it wouldn't be guaranteed, and even board to board could have different results. AMD is just avoiding a big headache by doing this.
and what would you have done if the board you have, isn't capable at running at PCIe 4 speeds ??? my guess, you would be screaming at AMD
You guess too much. And why would you even suggest such a thing? I am a grownup and I should be treated like one. They could, for instance, have a warning in the BIOS option that enables it. I already have warnings that say they void my warranty if I click "Agree". Isn't that the same kind of thing, only more benign since it doesn't do actual damage?
Of course, I understand why they did it. I just don't agree with them doing it. And I'm rightfully mad at them. They get the upper hand and they start behaving like Intel. That's why it's wrong to be any company's fanboy. You just buy the best product at the moment you need it regardless of the company. That's how market economy is supposed to work. I was buying intel for 10 years and now I can finally buy and recommend AMD again. Let's see how long this lasts.
kobblestown, still, warning or not, there would STILL be people screaming at AMD, and complaining that while their board doesn't work @ PCIe 4, they have 2 other friends who have boards that do. that's the point.. they are saving themselves from a PR nightmare, that's all there is to it, and while you feel it is wrong, look at it that way: there are just too many variables involved here to allow it on non x570 boards.
@Korguz, It's the same with overclocking. I don't see your point. No one ever promised PCIe 4.0 support on prev-gen boards. In the same way, no one ever promises specific results from overclocking. Yet, it's acceptable to allow it.
and i don't see your point about not agreeing with amd about not supporting pcie 4 on non x570. overall, it's easier to overclock a cpu, as there is some OC headroom, but with a board, there are more variables involved, hence the term silicon lottery. who knows, maybe the board makers tested some of their boards, and it was too inconsistent, and told amd, and they dropped support...
"a number of fingers will be pointed at AMD as having made this happen" Saying that that is an understatement is an understatement itself. Would there even exist someone not believing that AMD made that happen?
Vs 14C (I guess at same performance) AMD is a bit cheaper, has cheaper boards available and comes with lower TDP. PCIe is roughly comparable, depending on your needs (24x PCIe 4.0 vs 48x PCIe 3.0), while Intel wins in memory capacity and bandwidth. Plus AMD has TR to compete with that.
2 Questions to the OP: 1) Is there a x299 chipset refresh incoming? 2) No matter the answer to question 1, will there be a refresh of mobos with x299? I mean right now it lacks WiFi6, native USB 3.1 Gen 2
It's a great time for those in need of many cores; for those who never use more than 4 cores/8 threads nothing has changed. True, many people can benefit from these monster products, but for a simple gamer it does not have any use at all. And I mean by that also that nothing released in the last 2 years made me decide to buy any of the games which came out. It's kinda all old stuff in a new jacket. And worse, almost all of them force you to go online, which I actually refuse to do. I like strategy and once in a while I jump onto the good old games which were fun and did not annoy me to the edge of insanity. And let's be honest, the RTX hype which Nvidia is trying to make is not going to be a thing for many years to come. So as long as there's hardly anything useful to do with this RTX stuff I stay far away from it. It just shows people are enormously happy when they get a new play-thing without a real function and can brag that they got it. Intel jumps on the same bandwagon now as well, and by that will make loads of people happy, especially those who actually make use of the many cores, like streamers and graphical folks. The only reason why I am going to change my system in the coming 4 years is because it's too limited in terms of available NVMe slots and SATA slots which I want to use. I constantly build new system images and rar and zip tons of files, so you could say I could benefit from these monster CPUs as well. But actually I am not impressed; my old 6700K still does it darn fast, and this work needs much faster drives more than it needs more threads. If one could prove otherwise I would gladly look at the numbers, but I am pretty sure that the premium price will not make up for the tiny speed profit it brings. We are so sucked into the latest hypes that we lose common sense, is my opinion. Sure, both companies make impressive products, but sadly AMD decided to screw customers who want to play games and did not release a lower core model with higher clocks; instead they did the opposite.
Which is totally bonkers; I'd rather have a 4 core going much higher in clock speed than a 16 core monster. But again AMD wants to make the most profit, as Intel does, and they chose to only give higher clocks on the most expensive many-core CPUs. So no gains at all for normal players. I was already stumped that this already-old CPU often beats its replacing siblings in speed, as long as we do not look at multi-core performance. So again no reason to switch till I really have had enough of the limits of this mainboard.
The thing about multiple cores is that nowadays there is relatively low cost to it; they can be turned off when not required. Because of this, it's no longer worth doing single-core CPUs at all. Most AMD dies are quad or even eight core. So... why sell a die as two or even four fast cores if they could sell it as four fast cores *and* four slower ones, unless there are flaws in the smaller one?
The answer is, they don't. At least not until they have a bunch of flawed dies that *have* to be sold like that. Athlons are available, for example - and they have become relatively cheap compared to past dual-cores - but they are released late because there isn't the stock behind it before then.
Unfortunately flawed dies are often flawed in general and may not have any fast cores. *Some* of them might go faster but if there's only a few it may not make sense to make it a separate product.
If AMD were selling more chips you might see more specialist products. But think about what you are saying: you see value in those high-speed cores. So the price isn't likely to be *that* much smaller than that of a CPU with more cores, some of which go at a high speed. It's driven by value.
Your 6700K probably takes up a fair amount of power too. Something like a Ryzen 5 3600 will demolish it for power efficiency under a given load. This is something you may not care about, but others do. Again, it's probably not *that* much better for this purpose to have fewer cores because they can be turned off (indeed, it's often more efficient to "race to completion" and then turn off).
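The "race to completion" point above can be sketched with trivial arithmetic (all power and time figures below are invented for illustration, not measurements of any real CPU):

```python
# Invented power figures for illustration, not measurements of any CPU.
def energy_j(power_w, seconds):
    return power_w * seconds

# Fast chip: 90 W for the 10 s the job takes, then ~5 W idle for the
# remaining 20 s of a 30 s window.
fast = energy_j(90, 10) + energy_j(5, 20)
# Slow chip: 40 W for the full 30 s the same job takes.
slow = energy_j(40, 30)

print(fast, slow)  # 1000 1200 -> "racing to idle" wins on total energy here
```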
twtech - Tuesday, October 1, 2019 - link
This is more or less what I thought we were going to see when Intel was claiming a 2.09x cost efficiency improvement in this class of processors vs. the prior generation.
Drumsticks - Tuesday, October 1, 2019 - link
18 cores at $979 feels like the best product in that stack. An $800 14C versus the $750 16C Ryzen seems like a difficult matchup, maybe held up by a slight frequency advantage, but a 10C at $600 doesn't seem like it will hold up very well against a 12C $500 Ryzen 9. Threadripper is going to give AMD some seriously impressive pricing power.
BillyONeal - Tuesday, October 1, 2019 - link
There are still a number of features Intel has (like vTune and AVX512).
milkywayer - Tuesday, October 1, 2019 - link
Bravo AMD for the continuous kicking of Intel's rear. The 10-core CPU is now down to $600 from $990. Now that's some fruitful competition.
plonk420 - Wednesday, October 2, 2019 - link
sure Intel has a single thread advantage, but i've been enjoying my $450 16 core Threadripper 1950X since last feb when i accidentally won an auction for the board at ~$190
Flunk - Wednesday, October 2, 2019 - link
They did... not so much anymore.
emn13 - Friday, October 4, 2019 - link
That single-thread advantage may play out differently here; traditionally intel HEDT chips haven't been nearly as great as their plain old desktop K processors in practice; but there are features this time around (e.g. turboboost 3.0) that suggest it might be a little better this time. And we have pretty little info on what (zen 2) threadripper will be like; at least not in the kind of details you'd need to really predict stuff like single-core perf. I'd wait and see.
evernessince - Tuesday, October 1, 2019 - link
As far as I know, Intel doesn't have 512 bit AVX registers so it has to do 2 passes for 512 just like AMD does. The difference being, AMD processors don't need an AVX offset like Intel processors do.
hansmuff - Tuesday, October 1, 2019 - link
AMD simply does not have any part of AVX-512 implemented. It can't do it at this stage. Intel CPUs implement different parts of AVX-512 at various market segments. There are AVX-512 sets of instructions that make no sense in a consumer CPU. Also, the hardware implementation of AVX-512 requires a lot of die space, so it's not even feasible to have all instructions on all chips. But the takeaway here is that no, AMD does not do AVX-512 at all and Intel does it selectively.
kobblestown - Wednesday, October 2, 2019 - link
In their initial implementation of AVX512, Intel had two lines of processors: 2 AVX512 units for the higher end and a single AVX512 unit for the lower end. In both cases one of the units comes from the fusion of both AVX2 units, and there is (or isn't, depending on the model) an additional dedicated AVX512 unit. So for the lower end ones with the single AVX512 unit, the situation is identical to how AMD used to do AVX2 with both SSE units. I assume that's what the OP had in mind. However, as far as I can tell, all these Core X processors *do* have two AVX512 units, so the only disadvantage will be the lower clocks when they are in use (and the lack of software that uses them).
In any case, I think this is something AMD can easily compensate for with higher core count. The rest is up to price.
SanX - Wednesday, October 2, 2019 - link
Can any of you guys show an advantage of AVX512 on linear algebra, the simplest and most used application? The only cited example showing a boost with AVX512 is Ian Cutress' own 3D particle movement test, showing a doubtful 300%. Our PIC code boost was just 10-20%. Take this simple code for dense matrix solution AX=B with Intel's MKL library, which now supports AVX512, and run it on any Intel and any AMD multicore:

program LinearAlgebra
  implicit none
  integer :: i, j, neq, nrhs = 1, lda, ldb, info
  real*8, allocatable :: A(:,:), b(:)
  integer, allocatable :: piv(:)
  integer :: count_0, count_1, count_rate, count_max

  do neq = 1000, 20000, 1000
    lda = neq; ldb = neq
    allocate(A(neq,neq), b(neq), piv(neq))
    call random_number(A)
    call random_number(b)
    call system_clock(count_0, count_rate, count_max)
    call dgesv(neq, nrhs, A, lda, piv, b, ldb, info)
    call system_clock(count_1, count_rate, count_max)
    write (*, '(1x,A,i6,A,2x,F8.3,A)') 'nEqu = ', neq, ' ', &
      dble(count_1 - count_0)/count_rate, ' s'
    deallocate(A, b, piv)
  end do
end program
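For anyone without a Fortran toolchain, roughly the same benchmark can be run from NumPy, whose `linalg.solve` dispatches to the same LAPACK dgesv routine; whether AVX-512 kernels get used depends on the BLAS/LAPACK build NumPy links against (e.g. MKL vs OpenBLAS), not on this script. Sizes are trimmed here; raise the range for a meaningful benchmark:

```python
import time
import numpy as np

# Dense solve of A x = b, timed like the Fortran snippet above.
rng = np.random.default_rng(0)
for neq in range(500, 2001, 500):
    A = rng.random((neq, neq))
    b = rng.random(neq)
    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)   # LAPACK dgesv under the hood
    t1 = time.perf_counter()
    print(f"nEqu = {neq:6d}   {t1 - t0:8.3f} s")
```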
SanX - Wednesday, October 2, 2019 - link
This test will also show if dual, quad, six, etc. channel RAM is of any importance for the masses.
peevee - Thursday, October 3, 2019 - link
Damn, Fortran, I almost forgot you!extide - Wednesday, October 2, 2019 - link
No, it's not the same as AMD on the lower end units that use the two AVX2 units. They gang together two 256-bit wide vector units to do the 512 bits at once, not half on one clock and the second half on another. Zen 1 had only a single 128-bit wide unit and ran it twice to get AVX2 256-bit work done.
eek2121 - Wednesday, October 2, 2019 - link
Far too many people overestimate how often AVX-512 is used. Outside of benchmarks and very specific use cases, the average user isn't going to use AVX-512 much, if at all. If AMD considered it to be an important factor, they would have implemented it. The fact that Zen 2 is able to beat Intel on an IPC basis in the majority of applications shows that they are correct in that assessment. If you exclude overclocking they are also pretty close clock-for-clock on the higher end parts. I expect the 3950X is going to eat the Core i9-10980XE alive. I think Intel is aware of this as well.
nevcairiel - Thursday, October 3, 2019 - link
New instruction sets naturally suffer from a chicken and egg problem. If no CPU supports it, no one writes code for it, and if there is no code, other CPUs won't adopt it. The same was said for AVX, AVX2, and probably also some of the SSEs. Yet today AVX2 is actually quite important, as it gets wide use in video decoding from e.g. YouTube already. I fully expect the same to happen with AVX512 eventually; it'll just take a generation or two of CPUs.
I wouldn't say that AMD is "better" in their decision. Their full AVX/AVX2 support honestly came a bit too late, only appearing earlier this year.
evernessince - Sunday, October 6, 2019 - link
It makes sense that Intel does it selectively. Given that AVX 256 causes a huge spike in power draw and heat output, I can only imagine what AVX 512 does.
nevcairiel - Wednesday, October 2, 2019 - link
This was only true for, I believe, one CPU in the original SKL-X lineup; the higher-up models had dedicated AVX512 ports. The real problem with AVX512 is the downclock. AVX512 can still be incredibly strong if the workload is a lot of pure math and the AVX512 units can stay busy for a prolonged time. But if you have light math mixed with other stuff, then the downclock can cost too much performance.
We'll have to see if they managed to tune this behavior a bit in this generation.
abufrejoval - Wednesday, October 2, 2019 - link
Just to help me understand: you're saying that a single AVX512 process brings down the other 17 cores, not just the "control core" for that AVX512 workload? That could be a bit of a problem with mixed loads on these high-turbo CPUs, probably much less so for the HPC workloads that instruction set was originally aimed at.
And potentially quite an acceptability issue for Ice Lake notebook chips, where an AVX512-optimized inference workload would cause a "freeze" at the interaction end.
dgingeri - Wednesday, October 2, 2019 - link
AVX takes a LOT of power, even running on just one core, and much of how the Intel chips boost is due to power budget. So, yes, a single core running AVX can cause the other cores to quit boosting and drop to the default clock, depending on the overall budget, and definitely cause them to quit boosting as high. Multiple cores running AVX workloads would increase the likelihood of the other cores quitting their boost clocks as well.
nevcairiel - Thursday, October 3, 2019 - link
Only the core running AVX512 load will clock down, the others are unaffected - as long as the power budget is not affected. But even with one core slowing down, it takes a couple of milliseconds to clock back up once it's done with AVX512, and milliseconds are an eternity in the CPU world.
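To put rough numbers on the downclock and the reclock delay discussed here (the clock, offset, and delay values below are assumptions for illustration; actual offsets vary by model and BIOS):

```python
# Illustrative numbers only: actual AVX-512 offsets vary by model and
# BIOS, and are configurable on unlocked parts.
base_ghz      = 4.7   # assumed non-AVX clock
avx512_offset = 0.8   # assumed AVX-512 offset, in GHz

avx512_ghz = base_ghz - avx512_offset   # clock while AVX-512 code runs
print(round(avx512_ghz, 1))             # 3.9

# Why "milliseconds are an eternity": cycles forgone while the core
# sits at the lower clock for an assumed ~2 ms after the last AVX-512
# instruction.
reclock_s = 2e-3
cycles_forgone = avx512_offset * 1e9 * reclock_s
print(round(cycles_forgone))            # 1600000 cycles per transition
```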
Oliseo - Thursday, October 3, 2019 - link
"Only the core running AVX512 load will clock down, the others are unaffected - as long as the power budget is not affected."
This simply isn't true at all. Take the i9 9900k: it has a turbo of 5GHz on two cores, but only if the other cores aren't used. As soon as the other cores are used, that turbo drops.
It's exactly the same for AVX as well.
porina - Wednesday, October 2, 2019 - link
On the AVX offset: Zen 2 isn't immune from the higher power usage with AVX code. In my testing of a 3600 and 3700X at stock, both hit the PPT limit of 88W and power throttle, so you get lower clocks than if running non-AVX code. Basically AMD have a better limiter than using an offset, but it is still a limiter.
eek2121 - Wednesday, October 2, 2019 - link
You can't compare Zen 2 to Intel chips in this regard. An Intel chip can easily run close to 200 watts on AVX. They have completely different implementations. Also, an AVX-heavy workload on my 1950X throttles it below the 3400 MHz base clock (3349 MHz, and to think everyone is complaining about the Zen 2 clocks being slightly off...lol).
Bulat Ziganshin - Wednesday, October 2, 2019 - link
> Intel doesn't have 512 bit AVX registers so it has to do 2 passes for 512 just like AMD does
AVX-512 improves performance on Intel cpus up to 2x, and that's all you need to know
eek2121 - Wednesday, October 2, 2019 - link
Not really, no.
Korguz - Wednesday, October 2, 2019 - link
Bulat Ziganshin: "AVX-512 improves performance on Intel cpus up to 2x, and that's all you need to know" prove it
Kevin G - Thursday, October 3, 2019 - link
The performance increase can actually go beyond a 2x increase with AVX-512:
https://www.anandtech.com/show/14664/testing-intel...
Not only is the register width doubled to 512 bits, there are more registers, including new mask registers. Depending on the algorithm and compiler, going beyond a 2x performance increase is indeed possible, depending on where the previous bottleneck was in the code. However, such gains are certainly not common.
Oliseo - Thursday, October 3, 2019 - link
"AVX-512 improves performance on Intel cpus up to 2x, and that's all I know"
nevcairiel - Wednesday, October 2, 2019 - link
In comparison to Ryzen 3000, there are of course also more memory channels and PCIe lanes, and in comparison to low-end Threadrippers likely a big ST advantage. Personally that's why I liked Intel HEDT CPUs in the past (i.e. the first SKL-X): they offered strong MT and strong ST in one package, while with most other choices you had to pick - either MT (from TR or similar) or ST from Intel consumer CPUs.
On an actual workstation (where I favor the PCIe lanes and memory throughput), in contrast to servers etc, both ST and MT are still very important, as different applications require different things.
We'll have to see how the new TR is positioned in ST load as well.
eek2121 - Wednesday, October 2, 2019 - link
Threadripper has more PCIe lanes (44 for Intel vs 64 for TR). TR for Zen/Zen+ had quad channel memory, which is the same as Intel parts. You have things backwards. Zen 2 TR is rumored to use a new socket for at least 2 of its parts, if not all of them. So expect the above to be the minimum spec for Zen 2 Threadrippers. We could very well see 8 channel memory and 128 lanes of PCIe similar to EPYC for the Zen 2 refresh (or we could end up with the same quad-channel, 64-lane config).
nevcairiel - Thursday, October 3, 2019 - link
I obviously compared lanes and memory to Ryzen, not TR, as the first 5 words in my comment clearly state, but thanks for playing. TR was mentioned later for its potential single-thread deficits, but we'll have to see how that turns out in TR 3000.
Irata - Thursday, October 3, 2019 - link
Actually, Ryzen 2 does not have less PCIe bandwidth than Cascade Lake-X - 24x PCIe 4.0 = 48x PCIe 3.0. The fact that Intel's high-end platform is now being compared to AMD's mainstream platform also shows one reason why prices were lowered.
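The lane math here checks out: per-lane PCIe throughput doubles from 3.0 to 4.0 (8 GT/s vs 16 GT/s, both with 128b/130b encoding), so 24 gen-4 lanes carry the same aggregate bandwidth as 48 gen-3 lanes:

```python
# Per-lane throughput: transfer rate (GT/s) x 128b/130b encoding / 8 bits.
def lane_gb_s(gt_s):
    return gt_s * (128 / 130) / 8   # GB/s per lane, each direction

gen3 = lane_gb_s(8)    # PCIe 3.0: ~0.985 GB/s per lane
gen4 = lane_gb_s(16)   # PCIe 4.0: ~1.969 GB/s per lane

print(f"24 x gen4 = {24 * gen4:.2f} GB/s")   # same aggregate as...
print(f"48 x gen3 = {48 * gen3:.2f} GB/s")   # ...this
```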
jakky567 - Wednesday, October 2, 2019 - link
Presumably Threadripper 3000 takes both ST/MT crowns. The clock speeds tend to be quite high and the increased power budget helps. The 9700K and the 9900K are the only chips with sometimes superior single-threaded performance to the 3900X.
You can't maintain those sorts of clocks on a monolithic die.
Korguz - Wednesday, October 2, 2019 - link
nevcairiel: "In comparison to Ryzen 3000, there is of course also more memory channel and PCIe lanes, and in comparison to low-end Thread Rippers likely going to be a big ST advantage."
too bad you need to compare these cpus to threadripper, NOT ryzen 3000.
nevcairiel - Thursday, October 3, 2019 - link
I compare to what I wish to compare to. What would you like to compare a 12-core or 16-core Ryzen to, then? Also, rumors say that TR will start with 24 cores, possibly to not conflict with the 16-core Ryzen. A 24-core TR and 10-18 core Intel HEDT are different market segments, honestly.
Korguz - Thursday, October 3, 2019 - link
and that would be where you are wrong. comparing HEDT to mainstream is like comparing a Porsche 911 Turbo to a Boxster.
"What would you like to compare a 12-core or 16-core Ryzen to then?" oh i don't know... the 9900K/X maybe??
they are not in different markets.. they are both hedt platforms
Sahrin - Wednesday, October 2, 2019 - link
Unfortunately, as we've seen, the AVX512 advantage washes out with Intel's core throttling. Can't cut global frequency 30% and expect to win benchmarks.
psychobriggsy - Wednesday, October 2, 2019 - link
The other issue is that when you have twice as many CPU cores in your processor, AVX256 at full speed can perform as well as AVX512 on the processor with fewer cores. Intel recently added AVX256 support to a video encoding library that was previously smashing it with AVX512 on Intel, and ThreadRipper suddenly became more than competitive (seen at Phoronix). Which means again that a developer might as well spend their time optimising for AVX256, which is far more common than AVX512.
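A back-of-the-envelope peak-FLOPS comparison illustrates this point (the core counts, FMA widths, and especially the clocks below are assumptions for illustration, and real workloads rarely approach peak):

```python
# Peak double-precision FLOPS = cores x doubles-per-vector x FMA units
# x 2 flops per FMA x clock. All inputs are illustrative assumptions.
def peak_gflops(cores, simd_bits, fma_units, ghz):
    return cores * (simd_bits // 64) * fma_units * 2 * ghz

intel_avx512 = peak_gflops(18, 512, 2, 3.8)  # assumed 18C at AVX-512 downclock
amd_avx2     = peak_gflops(32, 256, 2, 4.0)  # assumed 32C TR, AVX2 only

print(intel_avx512, amd_avx2)  # ~2189 vs 2048 GFLOPS: comparable peaks
```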
imaheadcase - Wednesday, October 2, 2019 - link
Um, but who throttles a CPU at full load.. no one.
eek2121 - Wednesday, October 2, 2019 - link
Intel chips automatically downclock on an AVX-512 heavy workload, hence the word "offset". Check your facts. If you overclock, you almost certainly have set an AVX-512 offset. If you didn't, your overclock would not be stable for AVX-512 workloads.
Oliseo - Thursday, October 3, 2019 - link
Physics decides that, not you.
Korguz - Thursday, October 3, 2019 - link
Oliseo, you don't know what an AVX offset is, do you??
jakky567 - Wednesday, October 2, 2019 - link
Possibly, it really depends on how well they bin them. AVX 512 has limited use cases. I have no experience with vTune though.
SaturnusDK - Tuesday, October 1, 2019 - link
Still only 18 cores though. The lowest tier Threadripper 3000 is 24 cores. And that's rumoured to be under $900, so that's 1/3rd more cores for $100 more. There's no way, even if you go fully thermonuclear meltdown OC on the 18 core part, it'll be better value in terms of performance per dollar.
SaturnusDK - Tuesday, October 1, 2019 - link
*for $100 less even
quorm - Tuesday, October 1, 2019 - link
Who's reported that TR3 starts at 16 cores?
quorm - Tuesday, October 1, 2019 - link
Oops, I meant 24 cores.
SaturnusDK - Tuesday, October 1, 2019 - link
AMD
eek2121 - Wednesday, October 2, 2019 - link
This is false.
Xyler94 - Wednesday, October 2, 2019 - link
First Threadripper will be 24 cores in November. But the rest of the stack we don't know.
eek2121 - Wednesday, October 2, 2019 - link
If you believe the rumors, yes.
Korguz - Wednesday, October 2, 2019 - link
you able to provide a source either way, Xyler94 or eek2121?
Xyler94 - Friday, October 4, 2019 - link
https://images.anandtech.com/doci/14895/TR3_Nov.pn...
As per Anandtech's own article. Now if they mentioned it was a rumour, then oops, but this looks a lot like an official AMD slide to me, and notice it says "premiering with 24 cores". Not saying there's gonna be more or less, but this slide clearly shows 24 cores is a definite.
Now of course, I got this news from Anandtech, so if they mentioned it was a rumour, then oops.
Xyler94 - Friday, October 4, 2019 - link
Link to article: https://www.anandtech.com/show/14895/amd-next-gen-...
twtech - Wednesday, October 2, 2019 - link
That's why I was also expecting to see some variant of the 28-core die make an appearance here too, possibly around the $2.5k price point. But the W3275 has actually increased in price over the 3175 to around $4k, so it appears it won't be this generation, at least.
rahvin - Wednesday, October 2, 2019 - link
You do realize that the 28 core part is actually two 14 core CPUs stapled together on the same die, right? Because of the thermal load you are actually better off getting a dual-CPU motherboard and buying two separate 14 core Intel parts. Stapling the two CPUs together results in huge thermal issues, and the dual-die part is actually worse in almost every way than getting that two-CPU motherboard and two separate CPUs. And IIRC buying two separate CPUs and a two-processor motherboard was even cheaper.
extide - Wednesday, October 2, 2019 - link
No, it's not. The 28-core part is a monolithic 28-core die.
eek2121 - Wednesday, October 2, 2019 - link
Hence the price decrease. When your competitor starts at 24 cores, has up to a 10% higher IPC, and really close clock speeds, you have to price your chip to sell.
Mr.Vegas - Wednesday, October 2, 2019 - link
SaturnusDK: 16 cores for 750USD and 24 cores for under 900USD? Does not compute bro.
AMD charges a 250USD premium for 4 extra cores on top of the 12 cores in the 500USD 3900X.
The 24 core will be 1200USD +/-
eek2121 - Wednesday, October 2, 2019 - link
AMD's 16 core part is not a Threadripper part. People forget this. I expect entry-level Threadripper parts to creep up in price given their performance advantage, but we don't yet know with any certainty what the stack will look like. The 8 core Threadripper variant IIRC only carried a $100 price premium over its Ryzen counterpart, however. That is worth noting.
Mr.Vegas - Wednesday, October 2, 2019 - link
The 10 core Intel still has 48 CPU PCIe lanes + 24 on the chipset.
AVX512.
Real Turbo boost [let's be honest here, what AMD sells as single core turbo is nothing but a scam; when it clocks up for one second and drops down to its all-core turbo it has no value. Intel's Turbo performs properly, no such issues].
Real overclocking.
And, no less important for me, excellent emulator performance.
Now add to this that the 3900X is sold at a premium by scalpers and always out of stock; the same thing is going to happen with the 3950X.
In this case Intel's price vs AMD's street price looks insanely better.
eek2121 - Wednesday, October 2, 2019 - link
Threadripper has 64 PCIe lanes right from the CPU, not counting anything from the chipset. You have an apparent bias against AMD so I'm tempted not to post, but I'll bite: what you say is false. Single and dual core workloads have seen 4.6-4.7 GHz on many 3800X chips, for example; some 3800X chips hit 4.7 GHz on all cores (as evidenced by GB4 JSON data). The 3900X has some issues that needed to be worked out, but the ABBA AGESA release fixed many of those, and many users are hitting 4.7 GHz. Also, clock speed doesn't really matter as AMD trounces Intel in the majority of workloads anyway... hence the sudden price decrease above.
Korguz - Wednesday, October 2, 2019 - link
Mr.Vegas
and let's be honest here, Intel's TDP rating is nothing but a scam. when a cpu is listed as using X watts, but under usage can use 50-150 watts MORE, what do you call that? "Intels Turbo performs properly, no such issues" providing the cooling is sufficient - maybe that's why Intel doesn't include a cooler with its higher end parts.
" Real Overclocking " again IF cooling is sufficient.
" 3900x is sold for premium by scalpers and always out of stock " must be just where you are, as i have seen them in stock regularly at the local comp stores i go to.
" In this case Intels price vs AMD street price looks insanely better " not really, as you seem to forget, intels cpus dont come with a cooler, so add $50 + to the price for intels cpus.
jakky567 - Wednesday, October 2, 2019 - link
The 16 core consumer Zen 2 presumably has the frequency and IPC advantage with equivalent cooling, except for Turbo Boost 3.0 vs single core boost. The 3950X has higher single core boost compared to Intel HEDT though.
YB1064 - Wednesday, October 2, 2019 - link
The 165 W TDP claimed seems rather low, if Skylake-X in-use measurements were anything to go by.
psychobriggsy - Wednesday, October 2, 2019 - link
TDP for Intel is at base clocks. You'd best have a water cooler if you want decent long-lived turbo clocks with these processors.
Marlin1975 - Tuesday, October 1, 2019 - link
And the winner is... consumers. Isn't competition a great thing? Still overpriced, and 165W seems low, unless that's using Intel's latest fake power ratings. Also, "16-core Ryzen 9 3950X at $749, which will offer 16 PCIe 4.0 lanes" - is it losing 8 lanes? I thought they all had 24: 16 for GPU, 4 for m.2 storage, and 4 for the chipset bridge.
Ian Cutress - Tuesday, October 1, 2019 - link
My bad, it's 16 for PCIe slots, 4 for storage, 4 for chipset. I pretty much always just count the PCIe lanes for slots (because that's what it used to mean).
peevee - Thursday, October 3, 2019 - link
All X570 motherboards claim PCIe 4 in a 16+4 configuration on slots. I guess one of the M.2 slots is shared.
Eliadbu - Tuesday, October 1, 2019 - link
Intel's TDP rating is at base clock running a complex workload. Many people still don't get it apparently and are surprised when they see the power draw at boost clocks. Sure it is less straightforward, but it is not lying. About price, after we see AMD's HEDT offering we will judge. 16 PCIe lanes always was the case for non-HEDT desktop Ryzen chips. You got
Eliadbu - Tuesday, October 1, 2019 - link
16 PCIe lanes from the CPU directly connected to the GPU and a 4-lane PCIe link connected to the X570 chipset, which adds an additional 8 platform PCIe lanes. You just fell for another trick of AMD marketing. I swear they're getting worse than Intel in that regard.
The Zen 2 Ryzen CPUs (3000 series, but not APUs) have 24 PCIe 4.0 lanes: 16 for the GPU, 4 for the chipset link and 4 that are most often used for NVMe drives. X570 has 16x PCIe; you're thinking of the last-gen Ryzen CPUs.
https://www.anandtech.com/show/14525/amd-zen-2-mic...
Hul8 - Tuesday, October 1, 2019 - link
All 24 lanes are usable as PCIe, if you omit the chipset. (Like the A300 "chipset" in some prebuilts.)
Karmena - Wednesday, October 2, 2019 - link
Are you sure about "complex load"? What is that? Definitely not something that includes AVX.evernessince - Tuesday, October 1, 2019 - link
Intel rates their CPUs at base clock only so yeah, their TDP is extremely misleading.
dysonlu - Tuesday, October 1, 2019 - link
LOL. If history serves, "competition" may be short-lived, as Intel will undercut (see "predatory pricing") its competitor and, soon enough, it will enjoy milking its customers again.
From Wikipedia on "predatory pricing": "Predatory pricing is considered anti-competitive in many jurisdictions and is illegal under some competition laws. However, it can be difficult to prove that prices dropped because of deliberate predatory pricing, rather than legitimate price competition. In any case, competitors may be driven out of the market before the case is ever heard."
azfacea - Wednesday, October 2, 2019 - link
What are you talking about? Price per transistor has dropped a million times in the last 50 years. Have you heard of innovation? Also, Intel did no such thing as drive AMD out with predatory pricing; it used its volume and node advantage to drive out AMD. It has no such advantage over TSMC.
milkywayer - Wednesday, October 2, 2019 - link
Go away, troll.
For kids who're reading this in the future for research: Intel used all kinds of scammy tactics, including bribing Dell and the like to keep stuffing Intel's CPUs in at a major discount just so they could drop AMD CPUs.
Klimax - Wednesday, October 2, 2019 - link
You are conveniently ignoring plenty of idiocies on AMD's side, including starving itself of cash with the purchase of ATI (and betting on pure nonsense called Bulldozer).
Also, Intel has always had massive margins thanks to vertical integration. (People forget that for some years during the Pentium 4 era AMD was more expensive than Intel - Black Edition says hi.)
Spunjji - Wednesday, October 2, 2019 - link
Everything you're referencing in your first sentence happened after Intel successfully staved off competition with its predatory pricing schemes.
You couldn't get a Dell system with an AMD chip in it throughout the Athlon 64 / OG Opteron era despite the obvious superiority of the AMD chips, and it was entirely because Intel bribed Dell (and others) to buy from them alone. AMD's subsequent poor decision making has nothing to do with that.
Enthusiasts of that era are also aware that AMD charged more for their chips when they had a definitive performance lead - that isn't news. They'll probably do it again with Zen 3 if Intel haven't caught up by then.
The_Assimilator - Wednesday, October 2, 2019 - link
You gotta let go of the past, man.
Korguz - Wednesday, October 2, 2019 - link
"You are conveniently ignoring plenty of idiocies on AMD side" - Intel has had its own, and as Spunjji mentioned, it cost Intel a billion or so in a settlement to AMD; seems you are conveniently ignoring that.
"Also Intel had always massive margins thanks to vertical integration" - BS, it's also because they were overcharging for their CPUs. Look at the prices for the 9900 series vs the 109xx series.
"People forget that for some years during Pentium 4 era AMD was more expensive then Intel" - yes, because the performance was there. Just like Intel has been doing for the last 5 or so years.
azfacea - Thursday, October 3, 2019 - link
"Intel used all kind of scammy tactics" - of course they did, because they had fab volume that allowed them to do that. They can't do that to TSMC. That has nothing to do with predatory pricing.
Father Time - Monday, October 21, 2019 - link
Intel was found guilty in court and made to pay AMD for their illegal practices. You can't argue with this.
It was predatory pricing - Intel would offer incentives to not offer any AMD alternatives, and as Intel was the market leader with name recognition, even companies as big as Dell were unable to do anything about it. If they dared to offer AMD systems, they would lose the Intel kick-backs and no longer be able to compete on price for Intel systems.
At the time AMD had the better tech, and was slowly clawing market share, but Intel was a household name and no major company could afford to bet their entire business on AMD and give up completely on their Intel lineup.
imaheadcase - Wednesday, October 2, 2019 - link
It's not really competition though, because competition is also supposed to bring forth innovation, which is not really happening here. Different prices, boost cores, and AMD chips that were planned years ago.
eek2121 - Wednesday, October 2, 2019 - link
Indeed, only consumers win here. Regarding PCIe: people forget how to count. It's also PCIe 4.0 vs PCIe 3.0, which means the regular Ryzen chips have more PCIe bandwidth than HEDT Intel chips. An embarrassment for Intel.
techguymaxc - Tuesday, October 1, 2019 - link
This is a solid lineup. X299 owners with
A) lower core count CPUs,
B) need for higher clocks, or
C) need for more RAM
may find this upgrade path compelling.
Personally, as a 7960x owner I’m less tempted by this upgrade path and will wait to see what Threadripper 3000 offers. If AMD will sell me 64 cores with decent all-core clocks at a price that doesn’t require a second mortgage, that’s going to be hard to beat.
kobblestown - Wednesday, October 2, 2019 - link
Well, about the RAM situation - the new CPUs do not offer more channels or slots. It's just that Intel has removed the artificial limitation and can now use 32GB DIMMs. Compare this to the Threadrippers, which have supported 1TB from the very beginning. Sure, we'll never have the 128GB UDIMM DDR4 modules needed to actually reach it, but at least we are not butt fu...d by the manufacturer for no apparent reason!
BTW, I'm pretty sure that the initial TR 3xxx lineup will not include a 64 core chip. There's no need for it. A 24 core TR3 will wipe the floor with anything Intel has to offer, including their 28 core joke of a room heater.
techguymaxc - Wednesday, October 2, 2019 - link
I haven't seen a Threadripper motherboard claim > 256GB RAM support yet. Are you sure about this?
I disagree about the 28-core i9 - AVX workloads still run best on Intel, and clockspeed still matters. That being said, a 64-core Threadripper would absolutely beat this chip, hence my interest. I agree it is unlikely that a 64-core chip will join the ranks of Threadripper 3000 at launch, but I hope it will arrive eventually at a price I can justify.
techguymaxc - Wednesday, October 2, 2019 - link
In fact, the QVL for most X399 motherboards only mentions 128GB kits at most, same as X299.
kobblestown - Wednesday, October 2, 2019 - link
https://www.anandtech.com/print/11697/the-amd-ryze...
Section: Top Trumps: DRAM and ECC
Quote: "AMD has officially stated that the Threadripper CPUs can support up to 1 TB of DRAM, although on close inspection it requires 128GB UDIMMs, which max out at 16GB currently."
It is not the first time this has happened. About a decade ago there were some strange but JEDEC-compliant modules (I think 8GB DDR2 DIMMs, but I may be wrong about that) that would work on AMD processors without any adjustment but not on Intel. Apparently, Intel used to, and probably still does, limit the supported sizes below the actual technical limitations. If you've learned nothing about Intel in the last 5-10 years, that's your problem.
On the other issue - 24 core TR 3000 will offer competitive performance with the 28 core Xeon W at a quarter of the price. A 32 core part will send it into oblivion. I'm pretty sure about that.
Kevin G - Thursday, October 3, 2019 - link
There was a bug in the DDR3 controllers of Sandy/Ivy/Haswell that didn't permit them to work with the largest-capacity unregistered DIMMs. Broadwell, the last major DDR3-only chip from Intel, did fix this issue.
Most of the time, the memory controllers are designed against the JEDEC spec but are only validated with what is currently on the market. Thus going beyond the official max capacity generally works but is not officially supported. The exception to this has been the last few "generations" from Intel, especially on servers, where memory capacity limits have been used for market segmentation.
EliteRetard - Tuesday, October 1, 2019 - link
Now that they're almost in the same price category, I'd like to see the 10920X vs the 3900X clock for clock (and stock for stock). Can Intel justify the extra $200?
nevcairiel - Wednesday, October 2, 2019 - link
What is "stock" these days? "Stock" behavior on Intel depends a lot on the motherboard, as vendors can define boosting behavior to match their VRM capabilities, which can have a huge impact.
Also, the Intel HEDT lineup of course has other advantages compared to a 3900X, which may not matter to everyone: more PCIe lanes, more memory channels, AVX-512.
Korguz - Wednesday, October 2, 2019 - link
nevcairiel, you're comparing HEDT to mainstream; quite the difference there.
Korguz - Wednesday, October 2, 2019 - link
EliteRetard: "Now that they're almost in the same price category I'd like to see the 10920x vs the 3900x clock for clock" - but you are comparing a HEDT CPU against a mainstream CPU. Wait till Threadripper 3 comes out, and then compare those two.
peevee - Friday, October 4, 2019 - link
No point. The 3900X is quite comparable.
Korguz - Friday, October 4, 2019 - link
No it's not: different platform, different features, different I/O capabilities...
svan1971 - Tuesday, October 1, 2019 - link
HEDT = PCIe 4.0
hubick - Wednesday, October 2, 2019 - link
This. For anyone with an I/O bound workload (me), PCIe 3.0 is now a non-starter.
azfacea - Wednesday, October 2, 2019 - link
Exactly. Building a HEDT system now and expecting to use it for 3 years without PCIe 4 is a total joke.
For workstation SSDs, even a PCIe 4.0 x4 link will be a bottleneck when this comes out next year:
https://www.anandtech.com/show/14728/phison-previe...
nevcairiel - Wednesday, October 2, 2019 - link
Unless you are actually bottlenecked by sequential transfer speeds, rather than IOPS or random access speed (which is far more common), this won't matter for the next 3 years.
If you are in that select group, there are options for you as well. Not every piece of hardware has to fit everyone.
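A quick illustration of the point about random access dominating: for many small reads, time is set by IOPS, not by the headline sequential figure. The drive numbers below are invented for illustration, not measurements of any particular SSD:

```python
# Why random access usually dominates: reading 10,000 small (4 KiB)
# files is limited by IOPS, not by the drive's sequential speed.
files, size_kib = 10_000, 4
seq_gbps = 5.0        # headline sequential speed, GB/s (illustrative)
rand_iops = 15_000    # QD1 4K random reads, a plausible consumer figure

t_seq_limit = files * size_kib * 1024 / (seq_gbps * 1e9)  # if bandwidth-bound
t_iops_limit = files / rand_iops                          # if IOPS-bound
print(f"bandwidth-bound: {t_seq_limit * 1000:.1f} ms")
print(f"IOPS-bound:      {t_iops_limit * 1000:.0f} ms")
```

With these numbers the workload is roughly 80x slower when IOPS-bound, which is why doubling sequential bandwidth with PCIe 4.0 barely moves small-file workloads.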
The_Assimilator - Wednesday, October 2, 2019 - link
Shhh, let the children wave their sequential-speed e-peens around while they drink from the torrent of marketing piss that the SSD manufacturers continue to spray. I bet these same kids also unironically cry about Intel's TDP marketing being misleading.
svan1971 - Friday, October 4, 2019 - link
Shh, some arrogant aholes among us like spending more for less because Intel told them it's good enough; because you can't fully benefit now, you don't need to extend the life of your investment in the future.
kobblestown - Wednesday, October 2, 2019 - link
Then get a Threadripper board today. A month ago you could buy a mainboard plus a 12 core TR 1920X for $500 combined! And you have 3 NVMe slots. If that's not enough, you can plug M.2 NVMe drives into regular PCIe slots with an adaptor. There are also relatively cheap 4x adaptors that work on the TR because it supports 4-way bifurcation. You can have 7 NVMe drives and still have one x16 and two x8 slots available.
Arbie - Tuesday, October 1, 2019 - link
You haven't "rallied" Intel nomenclature, you've "railed" against it. Probably a spellcheck error.
Ryan Smith - Tuesday, October 1, 2019 - link
Thanks!
The_Assimilator - Wednesday, October 2, 2019 - link
Nah, more an "AnandTech doesn't have editors or even basic proofreading" error.
Flunk - Tuesday, October 1, 2019 - link
These seem like a pretty good deal, but I'm mildly disgusted by how overpriced the last generation was.
dysonlu - Tuesday, October 1, 2019 - link
Disgusted should be the first reaction to this.
Kepe - Wednesday, October 2, 2019 - link
My thoughts exactly. Cutting prices by 50% is great, but it also means they've been overcharging their customers ridiculous amounts in the past few years, which is not cool at all. I'd even call it immoral. Yeah, sure, when you have no competition you can set the price wherever you want. But you could still rein yourself in a little in how much you charge your loyal customers. Now that they've cut prices by 50%, it means they've had zero interest in their customers and 100% interest in filling their own coffers.
Also, the value of used Intel HEDT CPUs just halved due to this new pricing announcement. People trying to sell their 1-2 year old $2,000 CPUs might be a bit miffed about this.
Spunjji - Wednesday, October 2, 2019 - link
Yup. The overpricing was always obvious, but now they've made it *ugly*.
rahvin - Wednesday, October 2, 2019 - link
The rumor is that the new Cascade Lake HEDT parts cost more to make than Intel is charging after this 50% price cut, and Intel is selling them at a loss to try to slow AMD down.
The chiplet design of TR3 and Ryzen 3000 actually makes it cheaper for AMD to make CPUs than Intel can with its monolithic design. The result is that AMD can handle lower pricing than Intel and still make money.
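There is a simple yield model behind the chiplet cost argument: defect-limited yield falls roughly exponentially with die area (the standard Poisson approximation, Y = exp(-D*A)), so several small dies yield far better than one big die. The defect density and die areas below are made-up illustrative numbers, not figures for any real process:

```python
import math

# Poisson yield model: Y = exp(-defect_density * die_area).
# Illustrative numbers only - not TSMC's or Intel's actual figures.
defects_per_mm2 = 0.002
mono_area = 600.0      # one big monolithic die, mm^2
chiplet_area = 150.0   # four chiplets of this size replace it

mono_yield = math.exp(-defects_per_mm2 * mono_area)
chiplet_yield = math.exp(-defects_per_mm2 * chiplet_area)

print(f"monolithic yield: {mono_yield:.0%}")   # ~30%
print(f"chiplet yield:    {chiplet_yield:.0%}")  # ~74%
```

Even after paying for packaging four dies together, throwing away 26% of small dies instead of 70% of big ones is a large cost advantage, which is the mechanism the comment is pointing at.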
Korguz - Wednesday, October 2, 2019 - link
rahvin: "The rumor is that the new Cascade HEDT parts cost more to make than Intel is charging with this 50% price cut and Intel is selling these at a loss to try to slow AMD down." Um, yea, no. Intel's investors and shareholders would not let that happen, as it would mean they would also be losing money.
Kevin G - Thursday, October 3, 2019 - link
Nope, 14 nm is a very mature process for Intel at this stage.
The only sort of 'loss' for Intel is that a wafer of 18 core chips could have been a wafer of 28 core parts that sell for higher margins. Intel does have to be picky about what goes through their fabs, as they are at capacity with far too many products occupying the 14 nm node. The 10 nm delays have really, really hurt them.
Korguz - Thursday, October 3, 2019 - link
Their investors and shareholders will still be upset. Wouldn't you be?
Qasar - Thursday, October 3, 2019 - link
"Nope, 14 nm is a very mature process for Intel at this stage." Doesn't matter; there is no way Intel would resort to losing money, like rahvin says, to "slow AMD down." As was said, shareholders and investors wouldn't allow it.
The_Assimilator - Wednesday, October 2, 2019 - link
Yes, because Intel was holding guns to those consumers' heads and forcing them to buy Intel HEDT CPUs. WHEN WILL INTEL'S EVIL REIGN END?
Kevin G - Tuesday, October 1, 2019 - link
Socket 2066 is a mess, as this is yet *another* PCIe lane config motherboard makers need to account for. The only way to leverage the extra lanes is going to be via a new motherboard (with the possible exception of some X299 boards that have launched in the last few months). The new price points are nice, but something Intel should have done a year ago (ditto for the extra PCIe lanes and memory capacity support).
The sad thing is that I would expect performance-per-clock improvements when the hardware security mitigations come into play.
danielfranklin - Tuesday, October 1, 2019 - link
Looking good!
Who would have thought another Intel 14nm respin would be exciting!
Pity they aren't going to re-release some cheaper 2066 Xeon-W chips now that Xeon-W has moved to another socket.
I've got a nice Xeon-W machine that would have loved a ~$1000 drop-in 18 core upgrade :P
shabby - Tuesday, October 1, 2019 - link
Now do the same for the non-HEDT chips.
Quantumz0d - Tuesday, October 1, 2019 - link
I do not understand one thing: all this whining about Turbo limits out of the factory (aka Intel recommended) vs motherboard enhancements, so much noise about the damn TDP - as if the K-series or X-series unlocked processors are going to run like puny crap like laptops, which have power limits hardcoded in the EC and BIOS, like the Apple Trashbook Pro or thin-and-light cTDP BGA junk.
Intel got away with it, but the user is getting max perf OOTB, or they can customize it. Why sandbag it with bullshit limits and whining?
Quantumz0d - Tuesday, October 1, 2019 - link
*If a K or X part is being run in an ITX mini box the size of a lunch box with crippled cooling, that user should really be educated, or isn't worth owning such processors. I run an undervolt and Turbo OC with raised current limits, which allows me to run higher Turbo clocks - I did that manually, which got me a 700 CBR15 score vs 600 OOTB.
Spunjji - Wednesday, October 2, 2019 - link
It's not about "bullshit limits and whining" - it's about Intel giving us honest estimates of power consumption under load. I think most of us are fine with them drawing more power to hit the top speeds on many cores, but it'd be nice if they gave a useful estimate of what that power draw would be before purchase, instead of hiding behind the meaningless "165W TDP".
Also, funny point - the problem with the "Trashbook Pro" isn't a hardcoded power limit, it's the lack of one. When the 6 core i9 CPU came along and blew past its spec, it triggered protections elsewhere in power delivery (and cooling). Blaming Apple for Intel's CPU operating way outside spec is a bit of a reach.
Bear in mind that I still think Apple screwed the pooch, both by making their devices unnecessarily thin and by failing to catch that issue in qualification.
rahvin - Wednesday, October 2, 2019 - link
TDPs should be the maximum output, to make heatsink selection easier.
Putting down some middle-of-the-road figure as the chip's TDP and concealing the maximum only makes it harder for consumers, for no gain other than marketing. It's stupid and foolish, and Intel should be called out for doing it.
What Intel is doing is what happens when marketing decides which technical specifications to reveal. Your average consumer isn't concerned at all if a chip has a higher TDP, so this is a lie targeting only the enthusiasts who actually care to know the real number.
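For context on why a single "TDP" number under-specifies things: Intel's power management actually uses two limits, a sustained PL1 (the advertised TDP) and a short-term PL2, where PL1 is enforced against an exponentially weighted average of power draw over a time window tau. A toy sketch of that mechanism, with all numbers purely illustrative rather than the real limits of any SKU:

```python
# Toy model of Intel's two power limits: instantaneous draw may reach
# PL2, but an exponentially weighted moving average of power must stay
# under PL1 (the advertised "TDP"). Illustrative numbers only.
PL1, PL2, TAU = 165.0, 250.0, 56.0  # watts, watts, seconds

def ewma_power(samples, dt=1.0, tau=TAU):
    """Exponentially weighted average of 1-second power samples."""
    avg, alpha = 0.0, dt / tau
    for p in samples:
        avg += alpha * (p - avg)
    return avg

# 30 seconds flat-out at PL2: the averaged figure the limit applies to
# is still below 165 W, so the chip is "within TDP" while drawing 250 W.
print(round(ewma_power([PL2] * 30), 1))
```

This is why a "165W" part can legitimately pull 250W for long stretches, and why the complaint above - that the advertised number says nothing about the heatsink you actually need - has teeth.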
The_Assimilator - Wednesday, October 2, 2019 - link
> Bear in mind that I still think Apple screwed the pooch, both by making their devices unnecessarily thin and by failing to catch that issue in qualification.
You assume Apple does any sort of testing, as opposed to throwing their overpriced s**t over the wall and letting their hordes of drones drown out any complaints with "IT JUST WORKS!!!!!"
bji - Wednesday, October 2, 2019 - link
Then don't buy their products. And don't post your drivel here. No loss to Apple, and no loss to AnandTech readers.
Quantumz0d - Wednesday, October 2, 2019 - link
I hope their LGA3647-socket W-3175X sees the same price cuts and gets the damn Dominus Extreme to a wider audience. But it's a shame that X299 saw so many processors while Z390 is being discarded again - sad. I wanted to build a PC this fall, with Win7 and top-class perf in old and new games; the Z390 Dark with a 2070 Super was my build, but now with Z490 on the horizon Intel doesn't inspire confidence. Though these processors with a low-cost X299 Dark would be superb, given that this time the Mesh at least keeps up with the ring-bus 9700K.
Plenty of time to wait for the 3900X and 3950X to be in stock, and a price drop for the X570 Aorus Extreme, vs X299 or Z390 or Z490... ;)
twtech - Wednesday, October 2, 2019 - link
They have a new W-3275 already available, but instead of becoming cheaper, it's more expensive.
mooninite - Wednesday, October 2, 2019 - link
Wow!!!! Something besides 1 gigabit Ethernet! Hell has frozen over!
Atari2600 - Wednesday, October 2, 2019 - link
Will the Ryzen 3950X really support 128GB of RAM?
I thought AM4 was limited to 64 GB...?
Slash3 - Wednesday, October 2, 2019 - link
AM4 boards can use 128GB with four 32GB DIMMs.
rahvin - Wednesday, October 2, 2019 - link
Which are almost impossible to buy at retail. :)
CallumS - Wednesday, October 2, 2019 - link
Not anymore. Samsung 32GB DDR4-2666 DIMMs (M378A4G43MB1-CTD) have been available via online retailers for a while now for under $150 USD. I've been running four of them at 3000MHz CL16 (with minimal effort) in a Z390 motherboard with an 8700K for a few weeks now. There's also an ECC, but still UDIMM, version available for about an extra $100 USD.
SanX - Wednesday, October 2, 2019 - link
Looks like in one year they changed the max supported RAM from 64GB to 128GB, and after AMD caught up they changed it to 256GB. It takes them one second to change an artificial restriction in the spec.
Since a 64-bit OS can support almost a billion times more than that, it all looks like they want to play this dirty game with consumers for a long time.
Kjella - Wednesday, October 2, 2019 - link
Well, it's been pretty standard to only validate up to launch, so if no 8x32GB UDIMM kit was available then it's not official, but it'll probably work - I saw someone running 4x32GB on a B350 mobo. The standard is supposed to be DDR4; all memory modules should work, except for when they don't...
CallumS - Wednesday, October 2, 2019 - link
Agreed. Only 'supporting' combinations available and validated at launch has been standard for decades - which makes sense. Stating support for artificial/unproven future combinations could easily result in lots of confusion and issues.
kobblestown - Wednesday, October 2, 2019 - link
I strongly disagree. That's what standards are for. If the board does not support a JEDEC-compliant module, then the board is not JEDEC-compliant. You are so used to being screwed that you no longer notice.
BTW, since I wrote several posts in this thread and may come across as an AMD fanboy: I'm pretty mad at them for disabling PCIe 4.0 on non-X570 boards. I would assume it works on the first x16 slot on many mainboards. Let the customers decide if it works for them - it was never promised to them anyway.
Korguz - Wednesday, October 2, 2019 - link
kobblestown, but there are also RAM sticks that are not JEDEC spec, yet work just fine.
"I'm pretty mad at them for disabling PCIe 4.0 on non-X570 boards. I would assume it works on the first x16 on many mainboards." And what would you have done if the board you have isn't capable of running at PCIe 4 speeds? My guess: you would be screaming at AMD, calling them liars and such, which is why they are not allowing it on non-X570 boards - it wouldn't be guaranteed, and even board to board could have different results. AMD is just avoiding a big headache by doing this.
kobblestown - Thursday, October 3, 2019 - link
"And what would you have done if the board you have isn't capable of running at PCIe 4 speeds? My guess: you would be screaming at AMD."
You guess too much. And why would you even suggest such a thing? I am a grownup and should be treated like one. They could, for instance, put a warning on the BIOS option that enables it. I already have warnings that say they void my warranty if I click "Agree". Isn't that the same kind of thing, only more benign, since it doesn't do actual damage?
Of course, I understand why they did it. I just don't agree with them doing it. And I'm rightfully mad at them. They get the upper hand and they start behaving like Intel. That's why it's wrong to be any company's fanboy. You just buy the best product at the moment you need it, regardless of the company. That's how a market economy is supposed to work. I was buying Intel for 10 years and now I can finally buy and recommend AMD again. Let's see how long this lasts.
Korguz - Thursday, October 3, 2019 - link
kobblestown, still, warning or not, there would STILL be people screaming at AMD and complaining that while their board doesn't work at PCIe 4, they have two friends with boards that do. That's the point: they are saving themselves from a PR nightmare, that's all there is to it. And while you feel it is wrong, look at it this way: there are just too many variables involved here to allow it on non-X570 boards.
kobblestown - Monday, October 7, 2019 - link
@Korguz, it's the same with overclocking. I don't see your point. No one ever promised PCIe 4.0 support on prev-gen boards. In the same way, no one ever promises specific results from overclocking. Yet it's acceptable to allow it.
Korguz - Wednesday, October 9, 2019 - link
And I don't see your point about not agreeing with AMD over not supporting PCIe 4 on non-X570. Overall, it's easier to overclock a CPU, as there is some OC headroom, but with a board there are more variables involved - hence the term silicon lottery. Who knows, maybe the board makers tested some of their boards, found the results too inconsistent, told AMD, and AMD dropped support...
The_Assimilator - Wednesday, October 2, 2019 - link
So tell me, why doesn't AMD support more than 128GB then?
deil - Wednesday, October 2, 2019 - link
Hell froze.
ZoZo - Wednesday, October 2, 2019 - link
"a number of fingers will be pointed at AMD as having made this happen"
Saying that that is an understatement is an understatement itself. Is there anyone who doesn't believe AMD made this happen?
Spunjji - Wednesday, October 2, 2019 - link
I'd like to think that was a pointed understatement - no question that they've changed Intel's entire strategy!
umano - Wednesday, October 2, 2019 - link
Now the Ryzen 3950X is not that relevant unless AMD lowers its price.
Zizy - Wednesday, October 2, 2019 - link
Vs the 14C (I guess at the same performance), AMD is a bit cheaper, has cheaper boards available, and comes with a lower TDP. PCIe is roughly comparable depending on your needs (24x PCIe 4.0 vs 48x PCIe 3.0), while Intel wins in memory capacity and bandwidth.
Plus AMD has TR to compete with that.
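The "roughly comparable" call checks out on aggregate bandwidth: PCIe 4.0 doubles the per-lane rate of 3.0, so 24 gen-4 lanes carry about as much as 48 gen-3 lanes. A quick check (per-lane figures are the usual post-encoding usable rates):

```python
# Aggregate PCIe bandwidth: 24 lanes of gen 4 vs 48 lanes of gen 3.
# Usable per-lane rates after 128b/130b encoding, in GB/s.
GEN3_GBPS = 0.985
GEN4_GBPS = 1.969

amd_mainstream = 24 * GEN4_GBPS  # Ryzen 3000 lane budget
intel_hedt = 48 * GEN3_GBPS      # Cascade Lake-X lane budget
print(round(amd_mainstream, 1), round(intel_hedt, 1))  # ~47.3 vs ~47.3
```

Of course aggregate bandwidth isn't the whole story - 48 lanes can feed more physical slots simultaneously than 24 can, which is the HEDT argument.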
eva02langley - Wednesday, October 2, 2019 - link
HEDT vs Mainstream...Next stupid statement please...
psychobriggsy - Wednesday, October 2, 2019 - link
The AM4 platform, even with X570, is vastly cheaper than Intel HEDT.
Sure, it doesn't offer as many PCIe lanes, but at least they're PCIe 4. That's got to be factored into the equation.
Korguz - Wednesday, October 2, 2019 - link
psychobriggsy, like others here you are comparing HEDT against mainstream - two different platforms.
abufrejoval - Wednesday, October 2, 2019 - link
I wonder if they will finally enable ECC on these, given the competitive situation.
The Xeon markup has thankfully become rather negligible at the low end these days, but running billions of bits without some safety net seems foolhardy.
Mr.Vegas - Wednesday, October 2, 2019 - link
Two questions for the OP:
1) Is there an X299 chipset refresh incoming?
2) No matter the answer to question 1, will there be a refresh of X299 mobos? Right now the platform lacks WiFi 6 and native USB 3.1 Gen 2.
demu67 - Wednesday, October 2, 2019 - link
Not really a new chipset, but new revisions of motherboards with the X299X moniker:
https://www.tomshardware.com/news/gigabyte-x299x-m...
peevee - Thursday, October 3, 2019 - link
So, Intel was forced to slash prices in half. Thanks, AMD!
bobmarja - Thursday, October 3, 2019 - link
<A href="http://barisgezer.com">dewaqq</a> must be the same site like yours
bobmarja - Thursday, October 3, 2019 - link
Qasar - Thursday, October 3, 2019 - link
Can we get a delete and ban on these ad spammers??
bronan - Sunday, October 6, 2019 - link
It's a great time for those in need of many cores; for those who never use more than 4 cores/8 threads, nothing has changed.
True, many people can benefit from these monster products, but for a simple gamer it has no use at all. And I mean by that also that nothing released in the last 2 years made me decide to buy any of the games that came out. It's kinda all old stuff in a new jacket.
And worse, almost all of them force you to go online, which I actually refuse to do.
I like strategy, and once in a while I jump onto the good old games which were fun and did not annoy me to the edge of insanity.
And let's be honest, the RTX hype which nvidia is trying to make is not going to be a thing for many years to come. So as long as there's hardly anything useful to do with this RTX stuff, I stay far away from it.
It just shows people are enormously happy when they get a new play-thing without a real function and can brag that they got it.
Intel now jumps on the same bandwagon as well, and by that will make loads of people happy, especially those who actually make use of the many cores, like streamers and graphical folks.
The only reason I am going to change my system in the coming 4 years is because it's too limited in terms of available NVMe slots and SATA ports which I want to use.
I constantly build new system images and rar and zip tons of files, so you could say I could benefit from these monster CPUs as well. But actually I am not impressed; my old 6700K still does it darn fast, and this work needs much faster drives more than more threads.
If someone could prove otherwise I would gladly look at the numbers, but I am pretty sure the premium price will not make up for the tiny speed gain.
We are so sucked into the latest hypes that we lose common sense, is my opinion.
Sure, both companies make impressive products, but sadly AMD decided to screw customers who want to play games and did not release a lower core model with higher clocks; instead they did the opposite. Which is totally bonkers - I'd rather have a 4 core going much higher in clock speed than a 16 core monster. But again, AMD wants to make the most profit, as Intel does, and they chose to only give higher clocks on the most expensive many-core CPUs.
So no gains at all for normal players. I was already stunned that this already-old CPU often beats its replacing siblings in speed, as long as we do not look at multi-core performance.
So again, no reason to switch till I've really had enough of the limits of this mainboard.
GreenReaper - Sunday, October 6, 2019 - link
The thing about multiple cores is that nowadays there is relatively low cost to them; they can be turned off when not required. Because of this, it's no longer worth doing single-core CPUs at all. Most AMD dies are quad or even eight core. So why sell a die as two or even four fast cores if they could sell it as four fast cores *and* four slower ones, unless there are flaws in the other four?
The answer is, they don't - at least not until they have a bunch of flawed dies that *have* to be sold like that. Athlons are available, for example - and they have become relatively cheap compared to past dual-cores - but they are released late because there isn't the stock behind them before then.
Unfortunately, flawed dies are often flawed in general and may not have any fast cores. *Some* of them might go faster, but if there are only a few it may not make sense to make a separate product out of them.
If AMD were selling more chips you might see more specialist products. But think about what you are saying: you see value in those high-speed cores. So the price isn't likely to be *that* much smaller than that of a CPU with more cores, some of which go at a high speed. It's driven by value.
Your 6700K probably takes up a fair amount of power too. Something like a Ryzen 5 3600 will demolish it for power efficiency under a given load. This is something you may not care about, but others do. Again, it's probably not *that* much better for this purpose to have fewer cores because they can be turned off (indeed, it's often more efficient to "race to completion" and then turn off).