Does this mean Intel is going to accept lower margins on its data center hardware in order to chase the growth that exists in that segment and to try to stave off competition?
You are correct. D. Bryant has said they expect operating margin to shrink from the high 40% range to the low 40% range. The EMIB technology to split the die into chiplets is a really, really big deal for achieving this data center first strategy, which in itself is already a monumental and underrated change of business. Compare this to TSMC and Samsung, who go mobile first. Intel is now doing the opposite.
You are wrong. Intel will go mobile (Core M and the 15W line) and data center first with 10nm, like it did on 14nm. Nothing has changed. This time Intel refuses to delay data center SKUs and is adopting the multi-die approach to stay ahead of the competition, despite the impossibility of manufacturing large-die SKUs at the beginning of a new node ramp. EMIB was patented for this in 2014. I don't see any monumental change of business, so sorry.
Diane Bryant has said that this change will start with 10nm++, and then 7nm as the first full node. This is, she said, because it takes many years to build these products, so development of the first 10nm products is already well under way.
Intel didn't go data center first with 14nm. 14nm Xeon E5s didn't come out until the beginning of 2016 whereas 14nm was used on other products (lower power, mobility parts) as early as late 2014. Mobile Core i processors used it in early 2015 and desktop Core i processors used it in the middle of 2015.
This is true but witeken is saying that they're doing the opposite of mobile-first competition. Intel is definitely shifting priorities, but they are still going to put a lot of effort towards mobile. So if mobile is their number two priority, I'd hardly say that's the "opposite" of Sammy et al.
Either way every time I see one of those Intel cloud/server commercials I can't help but think Naples must concern them a little.
To me this looks like a desperate action, not a careful plan. It suggests that they expect 10nm yields to be so bad for a while that they simply will not be able to ship mobile volumes, even if they repeat the pathetic multi-quarter extended rollout of Broadwell, one SKU at a time.
Even assuming EMIB works wonderfully and delivers something like what it claims, client delivers twice the revenue of data center, and there's much more value to a better performing chip (faster single-threaded, and lower power) in mobile than in data center (i.e. more people will pay more for it).
They're going data center first because they HAVE TO, not because they want to.
You're talking about Power 9 I believe? The problem here is that it's very hard to get the ecosystem going without affordable CPUs, it'll remain a niche product unless developers can actually get their hands on them.
That plus the discussion about using EMIB to cobble together smaller dies for server parts, as well as the fact that Intel let Fab 42 sit after billions in construction rather than installing 10nm production equipment, leads me to believe that 10nm has been a bit of a disaster for Intel. 14nm was obviously a struggle at first and things didn't get any easier it appears.
Why not dump your ASML stock instead? Do you really believe GloFo, TSMC, and Samsung are better than Intel? No, they are all in a state of desperation, because these new fine nodes do not work correctly in production without EUV.
Look at GloFo: it will stay on 14nm for another three years. Look at TSMC: their fake 7nm process will be a boutique, low-yield one that only Apple will want (if it wants it at all). Look at Samsung: their 10nm is a minor step over a very conservative 14nm, and 7nm is only a dream, three or four years out in the future. Not to mention that all these foundries avoid manufacturing big dies on sub-14nm processes, and GPUs will stay on 16nm for a long time.
What evidence do you have about TSMC? You're throwing random adjectives together, and you're not even doing it correctly. The node that is supposed to be "fake" and "only Apple wants" is 10nm --- that's what people like you were saying last year, that it will be a "light" node that's soon abandoned for 7nm.
Meanwhile that supposedly despised 20nm TSMC node (where we heard the exact same thing) is doing just fine, serving a constituency that wants the best it can get without going to FinFET and double-patterned metal.
The problem you have is, in a way, the problem Intel has. You're both locked into a simplistic model of the world where only the leading edge counts. TSMC (and I expect the same is true of GloFo and Samsung, though I track them less closely), by serving a large range of constituencies, finds it hard to misfire. Any particular process has value for SOME sort of users, and can hang around for a long long time serving that particular set of use cases. But Intel is locked into a very PARTICULAR set of use cases, and so is much more constrained in how they can derive value from a process that doesn't exactly match their needs. Foundries are omnivores; Intel lives on a single type of food and nothing else.
Of course Intel is trying to become a foundry, but they seem so confused as to what the goals of this are that they are unlikely to be successful. (Are they willing to allow competitors to create products that are better than Intel? Would they allow Apple to produce an ARM SoC for Apple desktops? Would they allow AMD to use their foundry?). And apart from this confusion of goals, there are technical issues. The other foundries have a huge body of design material and standard cells with which customers are familiar. Intel provides the Intel way of doing things, which is not especially compelling when you are not part of that tradition. There's a reason they've had such limited foundry pick-up so far, and there seems no sign that that's going to change.
Gondalf actually has some points here, believe it or not. Firstly, everyone uses ASML gear to make this stuff: Intel, GloFo, TSMC, you name it. Also, TSMC has said they are NOT going to use any form of EUV for their first-gen 7nm process, which means it will be multi-patterned to kingdom come. And most likely, big-die GPUs won't be coming first on new processes from here on out.
What do those points have to do with my criticism? My complaint, and the comment, was with "their fake 7nm process will be a boutique, low-yield one that only Apple will want (if it wants it at all)."
The subsequent statements are just as indefensible. We have NO IDEA what the yields are for these processes, so no-one outside the foundries is in a position to make claims like "all these foundries avoid manufacturing big dies on sub-14nm processes, and GPUs will stay on 16nm for a long time." Might be true, might not, who knows? Even if the GPUs stay at 14/16nm for a while, that may have nothing to do with yield (and so die size) and may reflect the cost of masks vs expected sales.
"Process-architecture-optimization" seemed like something conjured up by the marketing department from the beginning, especially considering the marginal gains with Kaby Lake.
It actually looked like something conjured up by the marketing department on the spot when they were left with no options. The marketing had it easy at Intel before. Engineering always delivered, there was no competition, life was great. All of a sudden engineering stopped delivering (for whatever reasons), competition started flaring up at least in leaked slides and teasers, and Intel's marketing department had a job to do for the first time in years. You can see for yourself how bad the lack of practice was...
Intel spends 2.5x more on R&D than AMD's gross revenue. It's not like they're suddenly cutting their R&D; they just need to shift what they spend their money on. There's also hope for async CPUs. Intel has said in the past that die shrinks are so much easier and faster than making async CPUs. Maybe we'll start to see 10GHz+ CPUs in the future.
There are diminishing returns. Also, AMD doesn't have to pay for newer fab technology, that falls on Global Foundries. Granted, in the past when GF struggled they were hobbled by it as well. But the point is, you can't compare Intel R&D directly to AMD's because a lot of Intel R&D is on process, and that R&D should be compared to GF.
"it makes us wonder where exactly Intel can promise future performance or efficiency gains on the design unless they start implementing microarchitecture changes." Just replacing bad Tim on the die for soldered CPUs will return them above average yearly thermal efficiency gains.
Sounds to me like Intel laid off too many of its seasoned engineers over the last few years, kept too many college grads on staff, and is left with very few real problem solvers; that's why they are in such a rut now. It might be a good idea for them to hire some of those engineers back, if any are willing to come back, and at the same time lay off the handicapped thinkers. Then maybe they might be able to get out of the rut they are in.
It all depends on your point of view. For AMD, Ryzen will be a huge step forward, but that's not hard looking at recent AMD CPUs.
But in the overall grand scheme, will it beat Intel? I wouldn't think so, at best it may end up close to equal, Intel will lower prices some, and nothing is really lost.
"Intel will lower prices some, and nothing is really lost"
Unclear. Intel is a machine that only operates well because the profits can pay for so much man-power designing the next gen CPU designs and processes. If that river of money stops, the whole machine seizes up. (If you love Intel so clearly you cannot see this, consider the exact same thing in the context of Apple --- once again a river of cash allows Apple to create superb custom hardware that generates more cash, but if the money flow stops, no more A14, A15, A16...)
Intel could sustain lowering a few prices at the edge, but they're being hit on all sides now. Assuming Ryzen is at least adequate, that competes on the low-end. ARM is competing on a slightly different low-end in things like Chromebooks. To try to limit ARM servers even getting a toe-hold, Intel has had to ship things like Xeon-D and Xeon-E3, and to try to juggle the feature set to prevent wealthier customers from wanting to buy them. Meanwhile ARM has said all along that it did not expect serious server chips until 2017, that everything earlier was basically for bring-up and eco-system rollout. So when the 2017/2018 ARM server cores come out, Intel's going to, what? Drop the price of Xeon-D even lower and make it even more competitive with Xeon-E5's?
And of course the biggest competitor of all --- if Kaby Lake, and then Cannon Lake, are only pathetic improvements on the past, then the FREE Skylake (or even Haswell or Broadwell) I have already is good enough to stick with...
Intel cannot afford to lower prices across the ENTIRE product line. But that's the corner they're being backed into.
(A secondary issue which is difficult to predict is: can they execute? They've had a sequence of really obvious annual fsckups, from Haswell's broken HWTM to the endless Broadwell rollout, to the Fab42 delay, to the recent Avoton hardware flaw.
This is a hard business, and everyone makes mistakes occasionally, but Intel seem to be making serious, very public, mistakes that are hard to correct, at a level way beyond everyone else. Does this mean - the marketers/financiers are in control, not giving the engineers enough time/resources? OR - that the complexity of the x86 design has hit the limits of human comprehensibility and pretty much any attempt to improve anything over here results in a (perhaps subtle and slow-to-surface) problem over there? Both are obviously problematic for the future... )
People stupidly look at the past and extrapolate into the future. Intel had a good run, but became lazy and fat. When the ground started giving way under their feet they realized in a panic that they were nowhere near nimble enough to recover in a market with so many moving parts and with almost all of them moving against them.
Regarding the process node slide showing that “Intel will have enjoyed a lead of ~3 years when competitors launch 10nm process”, I think that the key phrase is “will have enjoyed” (future perfect: by the time competitors launch 10nm, the lead will already be behind them).
This chart, to me, makes NO SENSE, as the comparison is not of the same nodes. Intel claims to have had a three-year advantage, but they're not comparing the same process node: 14nm vs 10nm.
TSMC and to a lesser extent GlobalFoundries are a bit cheeky calling their processes 16/14 nm; both are considerably larger than Intel's 14 nm. GF is skipping 10 nm for 7 nm in around 2019; their 7 and Intel's should be comparable.
Intel is claiming that their 14nm process has held a logic cell area advantage over competitors' 14nm and 16nm processes. This is true. However, when Samsung and TSMC release 10nm, those companies will have a density lead (until Intel launches its 10nm).
Remember those 10 nm nodes are only half nodes though. Much of those chips will be 14 nm still, just as 14/16 nm use 20 nm. Indeed, it's likely that 10 nm will use 20 nm sections.
Only Intel nodes and GF's future 7 nm node are full nodes.
No, that's not true. TSMC 10nm doubles density vs 16nm, so it's definitely a full node. 7nm is a bit less than a full node.
Also it's not just density that matters: 14/16nm FinFET was a full node due to significant performance and power improvements, not in terms of density. TSMC 12nm is a half node, as it is a tweaked 16nm process.
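As a concrete illustration of the full-node argument above (a density doubling is roughly a 0.7x linear shrink), here is a minimal Python sketch; the density inputs are made-up placeholders for illustration, not vendor figures.

```python
# Minimal sketch: density scaling factor and equivalent linear shrink for
# a node transition. Density inputs are illustrative placeholders (MTr/mm^2).
import math

def node_scaling(old_density, new_density):
    """Return (area/density scaling factor, equivalent linear shrink)."""
    area_factor = new_density / old_density
    linear_shrink = 1.0 / math.sqrt(area_factor)
    return area_factor, linear_shrink

examples = {
    "16nm -> 10nm (claimed ~2x density)": (30.0, 60.0),
    "10nm -> 7nm (a bit under 2x)": (60.0, 100.0),
    "16nm -> 12nm (tweaked 16nm)": (30.0, 33.0),
}

for name, (old, new) in examples.items():
    area, linear = node_scaling(old, new)
    print(f"{name}: {area:.2f}x density, ~{linear:.2f}x linear dimensions")
```

By this density-only yardstick the doubling counts as a full node and the tweaked 12nm barely moves, though as noted above density isn't the only thing that defines a node (16nm FinFET earned full-node status on power and performance).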
It's shameless to publish that process slide and not point out the big fat lie. Intel starts production on 10nm this year and ships in volume next year, if there are no complications. TSMC starts 7nm production in Q4 and ships in volume in Q1-Q2, more likely Q2 really. Intel's lead is a few months at best; you know it, we know it, but you decide to just publish the lie.
Shameless indeed. Not to mention the "density" claim from the entire blogosphere. It turns out that Ryzen chips on Samsung/GF 14nm will have a smaller surface area than equivalent Intel parts, all while having a larger L2. As a matter of fact, an equivalent amount of L2 and L3 cache on Samsung/GF 14nm is significantly smaller.
I'm putting money that size isn't the only false claim. Leakage? hmmm.
This lead claim by Intel, and by those reporting on it, is getting ridiculous. Intel's process nodes are nowhere near as superior as everyone claims them to be.
Of particular note is the 6T SRAM, which is 0.0806 µm² vs competitor A's 0.0588 µm². The fact that the L2 and L3 cache have better density in Zen despite the universally larger feature sizes in the same table is clearly not a process advantage, but likely a difference in layout. It is entirely possible that Intel chose to use a less dense layout to increase speed and/or decrease latency.
Intel is most certainly painting their process in the best light possible, but let's not pretend that TSMC, Samsung, GF, et al. aren't doing the same thing. It is quite the rosy theory that Intel's 14nm process is equal to Samsung's 10nm. However, it would also be a mistake to assume that Samsung's 14nm process is equal to Intel's 14nm process.
That all said, process nodes have ceased bringing the kind of frequency gains that they once did. Efficiency gains are also being offset by increased leakage on smaller process nodes. Even cost savings aren't quite as great due to higher defect rates. Getting more transistors on a die does allow for some nice performance and/or efficiency gains, but when defect rates limit your die size, then similar transistor gains can be achieved on the previous process node at the cost of more die area. The point is that while being several process nodes behind (as AMD remains until Zen launches) is still a significant disadvantage, being a single process node or half a node behind doesn't have the same meaning that it once had.
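The defect-rate point above can be made concrete with the classic Poisson yield model. This is only a sketch of the shape of the tradeoff; the defect densities below are assumptions (real foundry numbers are not public).

```python
import math

def poisson_yield(die_area_mm2, defect_density_per_cm2):
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

# Assumed defect densities (defects/cm^2), purely illustrative:
MATURE_D0 = 0.10   # a well-ramped, older node
EARLY_D0 = 0.50    # a new node early in its ramp

for die_area in (100, 300, 600):   # small SoC, midrange CPU, big GPU (mm^2)
    mature = poisson_yield(die_area, MATURE_D0)
    early = poisson_yield(die_area, EARLY_D0)
    print(f"{die_area:>4} mm^2: mature node {mature:.0%}, early ramp {early:.0%}")
```

With these made-up numbers a 600 mm² die yields about 55% on the mature process but only about 5% early in a new ramp, which is the economics behind both the "big GPUs stay on 16nm" argument and the EMIB multi-die approach discussed earlier.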
"That all said, process nodes have ceased bringing the kind of frequency gains that they once did. " That's not true. Compare Apple's performance A7: 1.3GHz 28nm A8: 1.4GHz 20nm A9: 1.8GHz 16nm FF A10: 2.3GHz 16nm+ FF
A9 and A10 are respectable frequency improvements, allowed in part by process improvements. A8 does not look like a great frequency improvement, but it was an overall 25% performance improvement (enabled in part by higher density allowing for a better microarchitecture), and it took longer to hit throttling temperatures than the A7.
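Taking the clocks quoted above at face value, a quick sketch shows the per-generation frequency gains those node transitions allowed (ignoring IPC and thermal behaviour, which, as noted for the A8, also matter):

```python
# Per-generation frequency gains for Apple's A-series, using the clocks
# quoted in the comment above (GHz) and the node labels as listed there.
chips = [
    ("A7", 1.3, "28nm"),
    ("A8", 1.4, "20nm"),
    ("A9", 1.8, "16nm FF"),
    ("A10", 2.3, "16nm+ FF"),
]

for (prev_name, prev_ghz, _), (name, ghz, node) in zip(chips, chips[1:]):
    gain = ghz / prev_ghz - 1.0
    print(f"{prev_name} -> {name} ({node}): {gain:+.0%} frequency")
```

That works out to roughly +8%, +29%, and +28% per generation, which is the basis for the claim that process (plus design) is still delivering frequency.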
I'd say the issue is not that process doesn't allow for improved single thread performance; it's that different actors have different goals and constraints. Intel's biggest constraint is that it has SUCH a long lead time between the start of a design and when it ships that they have zero agility. So they can start down a path (say eight years) where it looks like the right track is:
- keep reducing power for mobile and desktop, but don't care much about performance because there are no competitors
- optimize the design for the server market and minimally port that down to mobile and desktop (again because there are no competitors)
But when the world changes (a: Apple showing that single-threaded performance isn't yet dead; b: long delays and slippages in process rollout), they're screwed because they can't deviate from that path. So they've built up a marketing message around an expected performance cadence, and didn't have a backup marketing message prepared.
They're also (to be fair) providing constantly improving performance through increasing turbo frequencies, and by being able to maintain turbo for longer. BUT once again, their marketing is so fscked up that they're unable to present that message to the world. They have not built up a corpus of benchmarks that show the real-world value and performance of turbo, and scrambling to do so today would look fake.
So I would not say the fault is that process improvement no longer deliver performance improvements. I'd say that Intel is a dysfunctional organization that has optimized its process nodes and its designs for the wrong things, is unable to change its direction very fast, and is unable to even inform the public and thus sell well the improvements it has been capable of delivering.
So you are suggesting that Apple, in its 4th or 5th year of making CPUs, would have the same amount of low-hanging fruit as Intel in its 40th year (or whatever)? I think you WAY oversimplified this.
I'm suggesting that Intel has bad incentives in place, and terrible strategic planning. Apple already matches Intel performance at the lower power levels, and is likely to extend that all the way up to the non-K CPUs with the A10X. And that, as you say, after just a few years of CPU design.
Doesn't that suggest that one of these companies is being pretty badly run? I've explained in great detail why it is Intel --- how they planned to exploit the gains of better process was dumb, and then they had no backup plan when even those process gains became unavailable.
You're claiming what? That CPU design has reached its pinnacle, and that the ONLY way to further exploit improved process is through more cores and slightly lower energy/operation? What would it take to change your mind? To change my mind, I'd have to see Apple's performance increases start to tail off once they're matching Intel. Since they're already at Intel levels, that means their performance increase has to start tailing off to <10% or so every year starting with the A11. Do YOU think that's going to happen?
IBM, to take another company, has likewise not hit any sort of performance wall as process has improved. They've kept increasing their single-threaded performance for POWER even as they've done the other usual server things like add more cores. They've not increased frequency much since 65nm, but they have done a reasonable job (much better than Intel) of increasing IPC.
Once again, you can spin this as "IBM was behind Intel, so they still have room to grow" and once again, that might be true --- none of us knows the future. But the pattern of the recent past is clear: it is INTEL that has had performance largely frozen even as process has improved, not everyone else. It has not yet been demonstrated that everyone else will slow to a crawl once they exceed Intel's single-threaded performance levels.
By stalling so much since Sandy Bridge, Intel really gave its competitors a lot of chance to catch up. Plenty of people see no reason to upgrade who have a Sandy Bridge processor and it's six years later!
"6t SRAM which is 0.0806 mm[s]2[/s] vs competitor A's 0.0588 mm[s]2[/s]" Isn't 0.0806 > 0.0588 ! Competitor A is actually smaller per cell, therefore more dense and possible smaller features. The other reason why the overall cache area can increase is by having smaller lines (and more of them), higher n-way and thus more complex selection&load circuitry, more bits for error correcting, more precode and bits for sharing/consistancy. What really matters is, how the perform with real workloads.
If Intel can get 15% better single-threaded performance on the top 6-core/12-thread mainstream i7 vs the 7700K Kaby Lake, then I don't care if it's on the same node. That's a good jump for Intel. A 5GHz OC 6-core/12-thread mainstream CPU at the $350 mark is what Intel needs to not look completely terrible in multi-threaded apps vs Ryzen at this price point. AMD's pressure is already pushing Intel into gear. Thank you AMD for making Intel release its better technology.
I'll still be holding out with my 4.13GHz, 6-core/12-thread i7-980X (32nm, LGA 1366, Gulftown) platform with 12GB of 2400MHz CAS 9 DDR3 triple-channel RAM. The GPU was recently replaced with a used Sapphire Tri-X OC edition with some custom work: repasted with Coollaboratory Liquid Metal Ultra TIM, a larger custom 3-slot aluminum heatsink with copper heat pipes, larger heatsinks and better contact for the VRMs and RAM, and 2x Noctua NF-P12 120mm fans cooling the GPU open-air style. All but 64 stream processors are software-unlocked (all but one CU), giving it 4032 of 4096 stream processors at 1140MHz, with the HBM at the stock 500MHz. Storage has been upgraded with a Samsung 850 Pro 1TB SATA III SSD in addition to the 4TB HGST 7200 RPM drive.
It's crazy to think that with just a few minor upgrades and a good deal on a custom GPU from a friend (since he upgraded), this almost exactly seven-year-old PC can still run everything amazingly, with max details, at my 1920x1080 monitor resolution. I never turn down any settings for any games and I get smooth gameplay, and this system is freaking seven years old with the only upgrades being the SSD added to storage and the graphics card, since I got a really cool custom Sapphire Fury Tri-X OC that keeps the temps mainly in the 50s while gaming, sometimes the low 60s when really being stressed. I can actually run a stable overclock at 1190MHz, but then the temps hit the mid to high 70s, so I just keep the mild 1140MHz overclock and enjoy silent, fluid gaming on a seven-year-old desktop. <---- This is why PC sales are slowing down. It's so bad that my PCIe 2.0 system is not being replaced until PCIe 4.0 comes out in 2-3 years. CPU improvements are so bad that I was able to skip an entire PCIe generation with a PC that will be 10 years old by then and still plays games smoothly.
And yes, I actually prefer the 1080p resolution because I prefer vertical alignment (VA) panels, with their native 3000:1 contrast ratio and none of the corner glow on a black screen that you get with IPS and its middling 1000:1 contrast ratio. I like the 144Hz FreeSync Samsung curved monitor with 1080p resolution and quantum dot tech, which allows more colors to be shown and less banding while keeping the 3000:1 static contrast ratio, faster pixel and overall response times with its 144Hz panel, and no tearing with FreeSync. The deeper blacks make everything else look brighter and crisper, and I think that's a bigger benefit than a 1440p IPS monitor.
The only things I miss on my PC are native SATA III, native USB 3.0, M.2 slots, SATA Express, NVMe storage, Type-C USB ports that can carry 100 watts of power and 10Gbps of bandwidth, USB 3.1 Gen 2 in general, Thunderbolt, DDR4, a DMI 3.0-connected chipset, and maybe the 5 and 10Gbps Ethernet that is starting to appear on higher-end Z270 boards. It's about time 10Gbps Ethernet came standard in the mainstream, with the Ethernet port auto-negotiating 10/100/1000/10000 as needed. It's a shame when 3x3 or 4x4 5GHz wireless AC transmitters and receivers can pump out more bandwidth than 1Gbps Ethernet. They need to quit holding back 10Gbps wired networks, especially now that fast SSDs are capable of using the 10Gbps local speed to transfer files on your local network.
But I can't bring myself to buy a PC when PCIe 4.0 is around the corner. It's a milestone in computing, as it will also bring a DMI 4.0 upgrade to the chipset link for even faster reads and writes to peripherals not connected directly to the CPU, and it will allow all these M.2 devices that aren't saturating PCIe 3.0 x4 to run on 4.0 x2 lanes instead of four, packing even more onto the board, to the point that all your storage is connected to the motherboard and literally the only thing connected away from the mobo will be the PSU, with everything else fitting right on the mobo for some efficient builds.
I'm not a PC gamer but I've got a 7-year-old Core 2 Duo laptop that still feels snappy, thanks to RAM and SSD upgrades. 10% IPC increase per generation translates to nothing much for regular users. I think Intel will need to start renting out its fabs if it only has data centers as its main market, with PC and laptop sales slowing and Intel's mobile presence being non-existent.
If you had used the words me, my or I any more times I might think you wrote this post just to brag about your wonderful PC.....Other than saying how incredulous you are with your system's ability to handle modern games, nothing you said had anything to do with the article. You should just copy and paste this on all the forums you can find and maybe someone will affirm for you what a great purchasing decision you made in 2010.
They can't. The same slide showed Kaby Lake as 15% faster than Skylake. Kaby Lake has no IPC improvements over Skylake at all, and only a 5% boost in clock speeds at the high end.
Coffee Lake appears to be just another iteration of Skylake, and clock speeds on the 6C chips will very likely be slower than current 4C, so single-threaded performance will go down.
Coffee Lake will at least make 6C chips mainstream, but don't look for anything significant from Intel until 2020.
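The arithmetic behind the Kaby Lake objection above (no IPC gain, roughly a 5% clock bump) is that single-threaded performance is, to first order, IPC times clock. A minimal sketch using the figures claimed above:

```python
def single_thread_gain(ipc_gain, clock_gain):
    """First-order model: single-threaded performance ~ IPC * frequency."""
    return (1.0 + ipc_gain) * (1.0 + clock_gain) - 1.0

# Figures from the comment above: Kaby Lake vs Skylake.
print(f"0% IPC, +5% clock: {single_thread_gain(0.00, 0.05):.0%}")

# IPC improvement that would be needed to hit the slide's +15% at +5% clock:
print(f"IPC needed for +15%: {1.15 / 1.05 - 1.0:.1%}")
```

So a flat IPC and a ~5% clock bump can only deliver about 5%; hitting 15% at those clocks would require roughly a 10% IPC gain somewhere.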
How is it possible they will have enjoyed a 3-year lead at 10nm when Samsung is putting out a phone or tablet with 10nm in a month? Even from their own chart there wasn't a 3-year lead vs. others' 14nm. Their own chart actually shows the other guys have a lead on them at 10nm... LOL. They will likely be beaten to 7nm also. If they hadn't thrown away $12B+ on mobile losses, they would have spent that on R&D at the fabs.
This is pretty much the same as AMD spending on consoles etc. instead of CORE products like CPU/GPU (which for Intel means fabs, which lead to great chips).
Because there is no standard on what is "10nm". Their chart is based on areal transistor density, I think. They are claiming that competitors 10nm processes will have about the same density as their 14nm node. There are other metrics to consider as far as effectiveness of a process, but if their chart is right then in terms of that one metric they have a 3 year lead.
As 'jjj' pointed out, TSMC will be out with 7 nm by Q4 of this year and (realistically) will be shipping in volume next year. TSMC's 7 nm is equivalent to Intel's 10 nm. So Intel's lead is maybe half a year at best.
How can TSMC be on 7nm next year when they don't even have 10nm out yet?
People are so funny: at the same time saying that Moore's Law is dead, but then believing TSMC will hop from one node to the next in a matter of a few quarters. The only high volume 7nm you're going to see from TSMC in 2018 will likely just be the iPhone, just like 20nm in 2014.
TSMC have delivered on their publicly announced roadmap every year over the past few years. Please tell us what they have promised (ie PUBLICLY announced, not what you read on some rumors site) and not delivered.
It's nothing to do with their past performance. It's the fact they're claiming there's a clear route down to 3 nm, while everyone else says 7 nm is going to be a real challenge with no clear path after. Both can't be right.
10nm is on the map, 7nm is on the map. 5nm counts as long-term research with no date attached to it yet. Nothing about 3nm.
The nearest TSMC has said to what you claim is that they are NEGOTIATING about ONE DAY buying a piece of land that will EVENTUALLY (no earlier than 2022) produce chips that will at some (perhaps later) point be 5nm and 3nm. That's hardly a claim of a "clear route"... http://asia.nikkei.com/Business/Companies/Taiwan-s...
"TAIPEI — Taiwan Semiconductor Manufacturing Co. (TSMC) said that it plans to build its next fab for chips made at the 5-nm to 3-nm technology node as early as 2022 as it aims for industry leadership.
"“Taiwan’s minister of science and technology (Yang Hung-duen) met TSMC a few months ago, so we took the opportunity to present to him our future plans,” said director of corporate communications Elizabeth Sun, confirming reports in the local press citing Yang. “We wanted him to know that we need a piece of land, because the other science parks in Taiwan are pretty full.”
"EUV Still Uncertain The company said that it is still undecided on whether it will adopt extreme ultraviolet (EUV) lithography for 5 nm and 3 nm.
“Our current plan is to use EUV extensively for 5 nm,” Sun said. “That’s under the assumption that EUV can be ready.”
The company said that it will ramp 7 nm in 2017, followed by 5 nm in 2019, to support smartphones and high-end mobile products with new features, including virtual reality and augmented reality."
--so, their Director of Communications is claiming a path to 3 nm, while at the same time admitting that they need EUV, but that it might not be ready. As I said, full of shit.
Why is TSMC's 7nm equivalent to Intel's 10nm? That statement is in direct opposition to the reality of the current state of fabrication processes. There is very little chance they are "equivalent". If they are equivalent under some particular metric, then tell us which metric you mean.
What does this actually mean? It means they have the same DIMENSIONS, not that they have the same PERFORMANCE. Obviously the details are different, from the shape of the fins to the materials used, to the quality of the design and layout algorithms. Is one better than the other? Depends on what you prioritize in defining "better". Certainly, of the two premier CPU design teams in the world:
- Intel hits higher frequencies (and thereby higher single-threaded performance) on its process
- Apple hits substantially better performance at low power on TSMC's process.
If Apple one day gets round to releasing the mythical ARM-based Mac (and so has a power budget of, say, 65W or so to play with, rather than the 12W or so of an iPad SoC [or the 130W or so of a K-class Intel CPU]), we might get a more apples-to-apples comparison of just what Apple's design and TSMC's process can do with a higher power budget.
"What does this actually mean? It means they have the same DIMENSIONS, not that they have the same PERFORMANCE."
Yes, so therefore they are not "equivalent" because to be "equivalent" they must have the same EVERYTHING, or at least EVERYTHING THAT MATTERS.
Even Intel didn't go so far to try to say their 14nm and TSMC's 10nm were equivalent, and you presumably do not even work in the marketing department for TSMC.
That's a good link. As we say, TSMC's 7 nm is equivalent (in all meaningful senses) to Intel's 10 nm. Intel is manufacturing 10 nm now. Let's see if TSMC makes good on their promise of 7 nm next year, when they've only begun their 10 this year. I'm not saying it's impossible, just that it's unlikely. And TSMC will not make another node jump, down to what they call 5 nm, in just two years in 2019. That's not going to happen.
AMD used their revenue from consoles to keep them afloat while they put the finishing touches on Zen and Polaris. I agree Intel could have repurposed the cash they wasted on mobile but if you think AMD doing consoles was bad for them, I'm glad you aren't managing my money.
So how many more billions of dollars is the industry going to dump into EUV/silicon before they actually throw money at graphene or black phosphorous or something?
It takes a lot of years for things to go from discovery, to research to development to production. Diane Bryant said it took 16 years for silicon photonics to go through all of this.
The last few nodes have shown little performance increases but big efficiency increases. Efficiency matters far more in the data centre than it does in your laptop, where the screen is using most of the power most of the time.
As Herb Sutter pointed out over 10 years ago, the free lunch is over. We are not going to get any significant performance improvements per core any more. There are too many constraints on CPU speed now and none of them can be removed by sheer number of transistors: switching speed no longer increases with the new process, there is memory latency, etc.

The only way vendors can now sell their chips as "performing better than the competition" is by providing more cores, more threads per core (see IBM POWER8 and POWER9), and ensuring these work their best even under full load. Providing better and faster interconnects is also an option. But this also means that applications will not get any faster unless they tap into a multi-threaded execution environment or make more use of connected accelerators, which is often tricky.

This means very little for your average gaming desktop machine, since the only accelerator you need is a GPU, and the interconnect (i.e. PCIe) is rarely a bottleneck for your typical application, i.e. a game. I would say that this category of CPUs has already reached a plateau. Which is good news, because it means the fight has moved to data centers, where competition to Intel is long overdue.
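Sutter's "free lunch is over" point is usually formalised with Amdahl's law: once part of the work is serial, extra cores stop paying off quickly. A minimal sketch, with assumed parallel fractions:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup is limited by the serial portion of the work."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Parallel fractions below are assumptions for illustration only.
for p in (0.50, 0.90, 0.99):
    gains = ", ".join(f"{c} cores: {amdahl_speedup(p, c):.1f}x" for c in (2, 8, 32))
    print(f"parallel fraction {p:.0%} -> {gains}")
```

A workload that is only 50% parallel tops out below 2x no matter how many cores you add, which is why the extra cores and threads mostly pay off in data centres and well-threaded server software rather than in a typical game.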
Well, the screen using more power than the CPU block is a fairly recent development due to said efficiency gains. There was a time not that long ago where that was definitely not the case.
On my Compaq 486DX2 @ 66MHz and my Texas Instruments Pentium @ 90MHz, the screen was the highest-demand component, and neither screen was higher than 640x480 or larger than 12 inches. The screen's power consumption is absolutely not a new problem. Both of those laptops were passively cooled, which speaks to their low TDP and, by inference, low electrical power demand. The same was probably true of my monochrome-panel 386SX @ 16MHz, which was my very first laptop.
I'm surprised there were tools to measure power consumption during the Mesozoic Era. Kidding aside are you just spouting crap or were you that hardcore even back then? Serious question despite my irreverent tone.
Yes, in those days we didn't have digital multimeters or even electricity. Why I remember having to light the hallway lamps with a smoldering bit of wood from the fireplace and we only had the fireplace installed to replace the pit on the ground surrounded by rocks that we used to huddle around to stay warm.
No really, I didn't do any of my own measurements. However battery life increased pretty dramatically when the laptops were running off external monitors with the lids closed. I was usually pulling about an hour and thirty minutes on the P90 and was closer to 2:20 on an external screen.
There were quite a few publications that supported that assertion, print magazines like Boot (before it became Maximum PC) and Computer Shopper (remember those huge phonebook sized catalog/magazines?) both made that claim about active matrix LCD screens when the technology was new and replacing those smeary old passive matrix models. I specifically remember reading about it back in those days and debating about whether or not I should get the Texas Instruments Travelmate because of its active matrix screen. Not that passive matrix LCD backlights were that much more efficient...they weren't.
So I think what we're saying is... Intel will begin 10 nm production this year, but it won't be used in their next consumer products, unlike 14 nm which went into low-power consumer stuff first. The next gen of consumer products will be on 14 nm again (they'd be better off not launching a product every year).
Instead, some currently unknown chip designed for data centres will make use of it. Probably some small/wimpy-cored multiple-dies-on-one-chip-module thing, because yields will be low. Right? I would guess it will be the next gen of the Xeon D.
On top of that, the rumours and leaks contradict each other. The 8th generation products could be code-named Cannonlake or Coffee Lake, or even both. We don't know. We don't know when they'll launch, nor which power class will launch first, what it will be called, or how many cores will be present. Methinks Zen is shaking things up.
I guess we'll have to wait for some consistency to develop in the rumours, or even wait for official announcements from Intel!
Actually some of them make sense. Coffee Lake will be 8th gen consumer >15W on 14 nm, as previously rumoured. It launches late this year. Intel is aiming for 15% performance gains and they'll have up to 6 cores.
Cannonlake is 8th gen consumer <15W on 10 nm. It was thought this would release late this year. What we appear to be seeing is it being pushed back so the first 10 nm dies can go into a currently unknown data centre product.
7 nm (called 5 nm if you're TSMC) will come around 2020, probably with EUV, and possibly might be the last node shrink for a very long time as it will be so expensive to design, develop and produce chips at 7 nm.
So why are die shrinks still so important exactly? I get why it was important in the past, but at this point, with things already so small, where do the diminishing returns begin? Considering shrinking things further has been hugely problematic, what exactly is there to be gained?
Basically because all the competitors are still doing shrinks, if you don't do them too, you fall behind. I mean there is a lot more to it than that, but that's really the simple answer -- it is to stay competitive.
At a TECHNICAL level, die shrinks are less important than in the past. Performance improvements mostly derive from material improvements and new transistor designs, not from the fact that the lithography is drawing smaller features. BUT you can't just roll out each new improvement (a different high-K material, a new idea for annealing contact metals, a higher-aspect-ratio fin, etc) one at a time as they get perfected. Designs are optimized for a particular process and don't expect that random small aspects of that process keep changing every few months. SO all the improvements over the past year or so are kept in the lab, forced to play nice together, then rolled out simultaneously as a new "node". Sometimes this node comes with smaller features (eg TSMC 20nm to 16nm), sometimes it "just" reflects material and transistor design improvements (+ and ++ nodes like TSMC 16nm+).
.......................................
There's a sort of weird ignorance+snobbery on the internet (though god knows what the people involved have to be snobbish about...) that thinks these "+" nodes are not "real" improvements. This CAN be the case, but they can also be substantial improvements (as in the case of the 16nm+ node). Partly there's an issue of just how much was improved by the various tweaks; partly there's an issue of how well prepared designers were for the new node and so could take advantage of it. The foundry customers seem to be well informed about future plans ahead of time, and to do an adequate job of exploiting new designs. Whereas Intel seems to have stumbled into the "Optimization, Optimization2, ...?" scheme unprepared and with no backup plan, and their designs have such a long lead time that they have not been able to really exploit the process changes (regardless of this "Optimization" claim).
So the "+" that's the Kaby Lake 14nm+FF node appears to be essentially - the exact same CPU design - pretty much the exact same process - JUST a slight relaxing of how close some transistors are to each other, meaning that they don't interfere with each other as much, and so allowing for a minor boost in frequency. This is obviously a completely different (and vastly less impressive) sort of "optimization" than the sort of optimization that has Apple improving frequency by 30%, performance by 50%, and reducing energy substantially when they move from 16nmFF to 16nm+FF on TSMC.
But, as I said, that's the difference between a planned and well-executed constant stream of improvement, and a mad-scramble for something, anything, when things don't go the way you planned. Intel can't be faulted for having their processes delayed in their introduction --- issues happen. They CAN be faulted in apparently have absolutely zero back-up plans in the event that something might go wrong. I'm damn sure that, eg, both TSMC and Apple have backup plans B, C, and D in the event that something unexpected happens to their schedules.
Apple is taking stock ARM IP and tweaking it to widen things up. Apple is not home to the Gods of Engineering as you repeatedly claim. Apple's CPUs are nowhere near as complex as Intel's, and their instruction set is nowhere near as versatile as X86. Get off your knees and stop worshipping at the altar of Apple, the innovation brought by Intel dwarfs the "rectangular device with rounded corners" crap that Apple does.
No sign of diminishing returns yet. Each shrink still provides performance and efficiency gains. It's just that complexity of design and production is increasing exponentially with each node nowadays, and thus cost. There's every chance consumers won't pay the prices necessary for 5 and 3 nm to be built.
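The cost side of that argument comes down to whether density gains keep outpacing wafer-cost growth. Here's a minimal sketch of cost per transistor; every number in it is an assumption for illustration (real wafer prices and densities aren't public), so only the shape of the trend matters.

```python
# Sketch: cost per billion transistors, with assumed (not real) numbers.
# Each entry: node name, assumed wafer cost ($), assumed density (MTr/mm^2).
nodes = [
    ("28nm", 3000, 12.0),
    ("16nm", 6000, 30.0),
    ("10nm", 9000, 55.0),
    ("7nm", 12000, 95.0),
]

WAFER_AREA_MM2 = 70_000   # ~300 mm wafer, ignoring edge loss for simplicity

for name, wafer_cost, density in nodes:
    transistors_millions = WAFER_AREA_MM2 * density
    cost_per_billion = wafer_cost / transistors_millions * 1000.0
    print(f"{name}: ${cost_per_billion:.2f} per billion transistors")
```

With these assumed inputs the cost per transistor still falls each node, but the improvement per step shrinks; if wafer cost ever grows faster than density, the curve flips, which is exactly the scenario behind the worry that 5 and 3 nm may not be worth building for consumer parts.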
"An image posted by Dick James of Siliconics from the livestream shows Intel expects to have a three-year process node advantage when its competitors (Samsung, TSMC) start launching 10nm: "
So let's see. In the REAL world - Apple on TSMC is already a match for Intel at equivalent power. So much for that 3yr node advantage. Where exactly do we see it pay off?
- Intel was telling us at the Kaby Lake launch about their new comfort-fit transistors. So WTF is it? If logic density is the most important metric possible, why did they go backwards with their "relaxed" KL transistor layout?
- What we expect in a month or two is an A10X manufactured on TSMC 10nm, which is likely a reasonable match for pretty much any non-K Intel core. (If we assume previous scaling, we'd expect this to be at around 3.4GHz, with a 25% IPC advantage over Intel.)
- Next year (maybe Q1, maybe as late as Q3) we expect A11X on TSMC's 7nm.
So yeah, sure, TSMC 16nm+ is not as dense as Intel 14nm. And sure TSMC's 7nm will not be as dense as Intel's 10nm. BUT - Intel seem unable to extract a performance advantage from their process - TSMC will be shipping real A11Xs in significant volumes on 7nm at the same time that Intel will be shipping god knows what on 10nm, in volumes that appear to be calculated to make the slow slow slow rollout of Broadwell look like a rocket.
If this is the best story Intel can tell its investors, good luck those of you stuck with them (either via stock possession or unable to switch chips).
I'm not interested in re-litigating this. If you consider it still an unproven proposition, there's nothing I can do to cure your ignorance. A9X was comparable with Intel over a year ago: http://www.anandtech.com/show/9766/the-apple-ipad-... A10 was a 50% improvement over A9, and we'd expect at least the same sort of improvement for A10X over A10, with a likely additional 20% boost or so from the transition to 10nm.
Your linked article has the conclusion that Intel's fastest Core M is significantly faster. So it does nothing to bolster your argument, except to say it may be close, but not in front, on an iPad vs a MacBook.
To then say "well, we don't have benchmarks, but Apple's stated improvements mean they are faster" needs a citation.
I thought people had figured out that synthetic benchmarks heavily dependent on single thread performance had little value? They're interesting, but they don't answer fundamental questions about performance across architectures and nodes.
Until we see an Apple Ax chip running x265 or Agisoft photo scan or something else, you know, real, we can't make comparisons.
I completely disagree we can't make any comparisons at all - benchmarks do give a pretty good idea what you can expect. Note video codecs and image processing say nothing about CPU performance given they are typically done by dedicated hardware or the GPU.
Single-threaded performance most definitely is quite fundamental. Multi-threaded performance is easy, remember most phones have at least 8 cores nowadays.
I never thought people would become fanboys of IC foundries, of all things.
Not sure I should play along but... have you got any sources for your statements? For example, Apple on TSMC being competitive with Intel? I wasn't aware of any product overlap. It's a Core M inside a MacBook, after all.
And TSMC 10 nm being used for an A10X? Er, no. As reported here, it's all being used for the Qualcomm 835 going into the Samsung S8 launching in April. What new Apple device is due to launch this side of September?
"Oops, it's the Samsung foundries being used for the 835, not TSMC." No shit...
So your point is? TSMC have not stated that they're producing the A10X (of course not, they're not going to piss off their largest customer). What they HAVE said is that 10nm was in risk production last year, and is expected to deliver commercial products and TSMC revenue in Q1 this year.
Umm, this article doesn't really address the confusion about multiple architectures that seem to be released at the same time. If Intel's 8th gen Core is on 14nm, what is 10nm Cannon Lake? 9th gen? And what 8th gen Core products are being released?
I'm more interested in the mobile parts of Coffee Lake, the U series will finally be 4 cores, and more importantly the H series will have 6 cores !!... I hope they really come in 2018...
Why is Coffee Lake missing from this article? Preliminary information about post-Kaby Lake already said process will diverge. Cannon Lake 10nm will be released side-by-side with Coffee Lake 14nm.
Only high-margin markets (ex: laptops) will move to 10nm (Cannon Lake). Lower-margin markets (ex: mainstream desktops) will remain on the 14nm node (Coffee Lake). That will make four (!) 14nm iterations in a row (Broadwell, Skylake, Kaby Lake, Coffee Lake). Yuck!
The only saving grace for Coffee Lake 14nm is increased core count for mainstream desktops. Higher core counts from Intel are only available in enthusiast ($$$) lines (ex: Broadwell 14nm). Coffee Lake 14nm will bring higher core counts (ex: 6 core/12 thread, or higher) down from enthusiast desktops to mainstream desktops. Intel will only be about 9 months behind AMD Ryzen for 8 core/16 thread mainstream desktops when Coffee Lake 14nm is launched. I hope it is more than a recycled Broadwell 14nm.
Yojimbo - Thursday, February 9, 2017 - link
"Data centers first to new nodes".Does this mean Intel is going to accept lower margins on its data center hardware in order to chase the growth that exists in that segment and to try to stem off competition?
witeken - Friday, February 10, 2017 - link
You are correct, D. Bryant has said they expect operating margin the shrink from in the high-40 % to low-40%. The EMIB technology to slip the die in chiplets is a really, really big deal to achieve this data center first strategy, which in itself is already a monumental and underrated change of business. Compare this to TSMC and Samsung, who go mobile first. Intel is now doing opposite.Gondalf - Friday, February 10, 2017 - link
You are wrong. Intel will go mobile (CORE M and 15W line) and datacenter first with 10nm. LIke have done on 14nm. Nothing is changed. This time Intel refuse to delay datacenter SKUs and adopt the multidie approach to stay ahead the competition in spite the impossibility to manufacture large die SKUs at the beginning of the new node ramp. EMIB was patented for this in 2014.I don't see any monumental change of business, so sorry
witeken - Friday, February 10, 2017 - link
Diane Bryant has said that this change will start with 10nm++, and then 7nm as the first full node. This is, she said, because it takes many years to build these products, so the development for the first 10nm products are already well under way.Yojimbo - Friday, February 10, 2017 - link
Intel didn't go data center first with 14nm. 14nm Xeon E5s didn't come out until the beginning of 2016 whereas 14nm was used on other products (lower power, mobility parts) as early as late 2014. Mobile Core i processors used it in early 2015 and desktop Core i processors used it in the middle of 2015.Alexvrb - Saturday, February 11, 2017 - link
This is true but witeken is saying that they're doing the opposite of mobile-first competition. Intel is definitely shifting priorities, but they are still going to put a lot of effort towards mobile. So if mobile is their number two priority, I'd hardly say that's the "opposite" of Sammy et al.Either way every time I see one of those Intel cloud/server commercials I can't help but think Naples must concern them a little.
nils_ - Wednesday, February 15, 2017 - link
Don't forget about Broadwell-DE in mid-late 2015, which pretty much killed most ARM offerings in the crib.name99 - Friday, February 10, 2017 - link
To me this looks like a desperate action, not a careful plan. It suggests that they expect 10nm yields to be so bad for a while that they simply will not be able to ship mobile volumes, even if they repeat the pathetic multi-quarter extended rollout of Broadwell, one SKU at a time.Even assuming EMIB works wonderfully and delivers something like what it claims, client delivers twice the revenue of data center, and there's much more value to a better performing chip (faster single-threaded, and lower power) in mobile than in data center (ie more people will pay more for it).
They're going data center first because they HAVE TO, not because they want to.
iwod - Friday, February 10, 2017 - link
This implies either IBM Power 8 is really going to be a threat or AMD Zen is finally catching up.Nagorak - Sunday, February 12, 2017 - link
Or both. Intel's performance lead suddenly looks to be in serious danger.nils_ - Wednesday, February 15, 2017 - link
You're talking about Power 9 I believe? The problem here is that it's very hard to get the ecosystem going without affordable CPUs, it'll remain a niche product unless developers can actually get their hands on them.revanchrist - Thursday, February 9, 2017 - link
14nm Coffee Lake and 10nm Cannonlake?Cygni - Thursday, February 9, 2017 - link
That's what the leaked documents showed.That plus the discussion about using EMIB to cobble together smaller dies for server parts, as well as the fact that Intel let Fab 42 sit after billions in construction rather than installing 10nm production equipment, leads me to believe that 10nm has been a bit of a disaster for Intel. 14nm was obviously a struggle at first and things didn't get any easier it appears.
Morawka - Thursday, February 9, 2017 - link
wow time to dump intel stock... they gonna be on 10nm for 4 yearsddriver - Thursday, February 9, 2017 - link
TOMORROW - tick, tock, tock, tock, tock, tock, tock, tock, tock, tooooooooooooooooooocksmilingcrow - Friday, February 10, 2017 - link
BoomBurntMyBacon - Friday, February 10, 2017 - link
Looks like their next process node will be Dynamite!...
I'll see myself out.
Gondalf - Friday, February 10, 2017 - link
Why not to dump your ASML stock instead???Do you really believe GloFo, TSMC, Samsung are better than Intel ?? No, all they are in a desperation state because these new fine nodes do not works correctly in production without EUV.
Look at GloFo, it will stay on 14nm for other three years, Look at TSMC, their fake 7nm process will be a boutique low yields one that only Apple will want (if ever want it). Look at Samsung, their 10nm is a minor step over a very conservative 14nm and 7nm is only a dream three/four years late in the future. Not to mention all these foundries avoid to manufacture big dies under sub 14nm processes and GPUs will stay on 16nm for a lot.
name99 - Friday, February 10, 2017 - link
What evidence do you have about TSMC? You're throwing random adjectives together, and you're not even doing it correctly. The node that is supposed to be "fake" and "only Apple wants" is 10nm --- that's what people like you were saying last year, that it will be a "light" node that's soon abandoned for 7nm.Meanwhile that supposedly despised 20nm TSMC node (where we heard the exact same thing) is doing just fine, serving a constituency that want the best they can get without going to FinFET and double-patterned metal.
The problem you have is, in a way, the problem Intel has. You're both locked into a simplistic model of the world where only the leading edge counts.
TSMC (and I expect the same is true of GloFo and Samsung, though I track them less closely), by serving a large range of constituencies, finds it hard to misfire. Any particular process has value for SOME sort of users, and can hang around for a long long time serving that particular set of use cases. But Intel is locked into a very PARTICULAR set of use cases, and so is much more constrained in how they can derive value from a process that doesn't exactly match their needs. Foundries are omnivores, Intel lives on a single type food and nothing else.
Of course Intel is trying to become a foundry, but they seem so confused as to what the goals of this are that they are unlikely to be successful. (Are they willing to allow competitors to create products that are better than Intel? Would they allow Apple to produce an ARM SoC for Apple desktops? Would they allow AMD to use their foundry?).
And apart from this confusion of goals, there are technical issues. The other foundries have a huge body of design material and standard cells with which customers are familiar. Intel provides the Intel way of doing things, which is not especially compelling when you are not part of that tradition. There's a reason they've had such limited foundry pick-up so far, and there seems no sign that that's going to change.
extide - Friday, February 10, 2017 - link
Gondalf actually has some points here .. believe it or not. Firstly, everyone uses ASML gear to make this stuff, Intel, GloFo, TSMC, you name it. Also, TSMC has said they are going to NOT use any form of EUV for their first gen 7nm process -- which means it will be multi-patterned to kingdom come. We all know big die GPU's won't be coming first on new processes from here on out most likely..name99 - Friday, February 10, 2017 - link
What do those points have to do with my criticism? My complaint, and the comment, was with "their fake 7nm process will be a boutique low yields one that only Apple will want (if ever want it)."The subsequent statements are just as indefensible. We have NO IDEA what the yields are for these processes, so no-one outside the foundries is in a position to make claims like "all these foundries avoid to manufacture big dies under sub 14nm processes and GPUs will stay on 16nm for a lot"
Might be true, might not, who knows? Even if the GPUs stay at 14/16nm for a while, that may have nothing to do with yield (and so die size) and may reflect the cost of masks vs expected sales.
ssj4Gogeta - Thursday, February 9, 2017 - link
"Process-architecture-optimization" seemed like something conjured up by the marketing department from the beginning, especially considering the marginal gains with Kaby Lake.lilmoe - Friday, February 10, 2017 - link
What gains? FF blocks that were available on Snapdragon and Exynos chips for years? lolclose - Friday, February 10, 2017 - link
It actually looked like something conjured up by the marketing department on the spot when they were left with no options. The marketing had it easy at Intel before. Engineering always delivered, there was no competition, life was great.All of a sudden engineering stopped delivering (for whatever reasons), competition started flaring up at least in leaked slides and teasers, and Intel's marketing department had a job to do for the first time in years. You can see for yourself how bad the lack of practice was...
jimjamjamie - Friday, February 10, 2017 - link
Now it's Process-Optimisation-Optimisation-Persistence
nemoshotyany - Friday, February 10, 2017 - link
So their strategy is now POOP?
shabby - Friday, February 10, 2017 - link
It was always poop to begin with.
bcronce - Friday, February 10, 2017 - link
Intel spends 2.5x more on R&D than AMD's entire gross revenue. It's not like they're suddenly cutting their R&D; they just need to shift what they spend the money on. There's also hope for asynchronous CPUs. Intel has said in the past that die shrinks are so much easier and faster than making async CPUs. Maybe we'll start to see 10GHz+ CPUs in the future.
Nagorak - Sunday, February 12, 2017 - link
There are diminishing returns. Also, AMD doesn't have to pay for newer fab technology; that falls on GlobalFoundries. Granted, in the past when GF struggled, AMD was hobbled by it as well. But the point is, you can't compare Intel's R&D directly to AMD's, because a lot of Intel's R&D is on process, and that portion should be compared to GF's.
Nexing - Thursday, February 9, 2017 - link
"it makes us wonder where exactly Intel can promise future performance or efficiency gains on the design unless they start implementing microarchitecture changes."Just replacing bad Tim on the die for soldered CPUs will return them above average yearly thermal efficiency gains.
fanofanand - Friday, February 10, 2017 - link
You only get that band-aid once, and then the next year you won't get such gains. See where Intel's stress comes from?
rocky12345 - Thursday, February 9, 2017 - link
Sounds to me like Intel laid off too many of its seasoned engineers over the last few years and kept too many college grads on staff, leaving very few real problem solvers, and this is why they are in such a rut now. It might be a good idea for them to hire those engineers back, if any want to come back, and at the same time lay off the handicapped thinkers. Then maybe they might be able to get out of the rut they are in.
Stochastic - Friday, February 10, 2017 - link
Citation needed.
GreenMeters - Thursday, February 9, 2017 - link
Has Intel learned something that points to Ryzen being a dud, or at least a big disappointment?
nevcairiel - Friday, February 10, 2017 - link
It all depends on your point of view. For AMD, Ryzen will be a huge step forward, but that's not hard given AMD's recent CPUs. In the overall grand scheme, will it beat Intel? I wouldn't think so; at best it may end up close to equal, Intel will lower prices some, and nothing is really lost.
name99 - Friday, February 10, 2017 - link
"Intel will lower prices some, and nothing is really lost"Unclear. Intel is a machine that only operates well because the profits can pay for so much man-power designing the next gen CPU designs and processes. If that river of money stops, the whole machine seizes up. (If you love Intel so clearly you cannot see this, consider the exact same thing in the context of Apple --- once again a river of cash allows Apple to create superb custom hardware that generates more cash, but if the money flow stops, no more A14, A15, A16...)
Intel could sustain lowering a few prices at the edge, but they're being hit on all sides now. Assuming Ryzen is at least adequate, that competes on the low-end. ARM is competing on a slightly different low-end in things like Chromebooks.
To try to limit ARM servers even getting a toe-hold, Intel has had to ship things like Xeon-D and Xeon-E3, and to try to juggle the feature set to prevent wealthier customers from wanting to buy them.
Meanwhile ARM has said all along that it did not expect serious server chips until 2017, that everything earlier was basically for bring-up and eco-system rollout. So when the 2017/2018 ARM server cores come out, Intel's going to, what? Drop the price of Xeon-D even lower and make it even more competitive with Xeon-E5's?
And of course the biggest competitor of all --- if Kaby Lake, then Cannon Lake, are only pathetic improvements on the past, then the FREE Skylake (or even Haswell or Broadwell) I already have is good enough to stick with...
Intel cannot afford to lower prices across the ENTIRE product line. But that's the corner they're being backed into.
(A secondary issue which is difficult to predict is: can they execute? They've had a sequence of really obvious annual fsckups, from Haswell's broken HWTM to the endless Broadwell rollout, to the Fab42 delay, to the recent Avoton hardware flaw.
This is a hard business, and everyone makes mistakes occasionally, but Intel seem to be making serious, very public, mistakes that are hard to correct, at a level way beyond everyone else.
Does this mean
- the marketers/financiers are in control, not giving the engineers enough time/resources? OR
- that the complexity of the x86 design has hit the limits of human comprehensibility and pretty much any attempt to improve anything over here results in a (perhaps subtle and slow-to-surface) problem over there?
Both are obviously problematic for the future...
)
prisonerX - Sunday, February 12, 2017 - link
People stupidly look at the past and extrapolate into the future. Intel had a good run, but became lazy and fat. When the ground started giving way under their feet they realized in a panic that they were nowhere near nimble enough to recover in a market with so many moving parts and with almost all of them moving against them.
Technewsicologist - Thursday, February 9, 2017 - link
Regarding the process node slide showing that “Intel will have enjoyed a lead of ~3 years when competitors launch 10nm process”, I think that the key phrase is “will have enjoyed” (past tense).
creed3020 - Friday, February 10, 2017 - link
This chart, to me, makes NO SENSE as the comparison is not of the same nodes. Intel claims to have had a three-year advantage, but they're not comparing the same process node: 14 <> 10. Complete glossing over of the facts.
Meteor2 - Friday, February 10, 2017 - link
TSMC and, to a lesser extent, GlobalFoundries are a bit cheeky calling their processes 16/14 nm; both are considerably larger than Intel's 14 nm. GF is skipping 10 nm for 7 nm in around 2019; their 7 nm and Intel's should be comparable.
Technewsicologist - Friday, February 10, 2017 - link
Intel is claiming that their 14nm process has held a logic cell area advantage over competitors' 14nm and 16nm processes. This is true. However, when Samsung and TSMC release 10nm, those companies will have a density lead (until Intel launches its 10nm).
Meteor2 - Sunday, February 12, 2017 - link
Remember those 10 nm nodes are only half nodes though. Much of those chips will be 14 nm still, just as 14/16 nm use 20 nm. Indeed, it's likely that 10 nm will use 20 nm sections. Only Intel nodes and GF's future 7 nm node are full nodes.
Wilco1 - Sunday, February 12, 2017 - link
No, that's not true. TSMC 10nm doubles density vs 16nm, so it's definitely a full node; 7nm is a bit less than a full node. Also, it's not just density that matters: 14/16nm FinFET was a full node due to significant performance and power improvements, not in terms of density. TSMC 12nm is a half node, as it is a tweaked 16nm process.
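For reference, the arithmetic behind the "full node" framing in this sub-thread is simple. A rough sketch using the traditional ~0.7x linear shrink per full node (an assumption for illustration, not any foundry's published figure):

# Rough node-scaling arithmetic: a "full node" traditionally shrinks linear
# dimensions by ~0.7x, which roughly halves cell area and doubles density.
linear_scale = 0.7                  # assumed linear shrink per full node
area_scale = linear_scale ** 2      # area shrinks with the square
density_gain = 1 / area_scale       # transistors per unit area
print(f"area per cell: {area_scale:.2f}x of previous node")   # ~0.49x
print(f"density:       {density_gain:.2f}x of previous node") # ~2.04x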
Technewsicologist - Sunday, February 12, 2017 - link
I am of the opinion that 14nm/16nm, 10nm, and 7nm are full nodes. Each offers about a 0.7x density scaling over its predecessor.
Meteor2 - Thursday, February 16, 2017 - link
That's the front end of line, i.e. ignoring the back end of line. Here's a good explanation: http://wccftech.com/intel-losing-process-lead-anal...
gurok - Sunday, February 12, 2017 - link
Actually, that's future perfect tense.
lopri - Friday, February 10, 2017 - link
Process > Architecture > Optimization > Optimization > .. > (oops another) Optimization?
jjj - Friday, February 10, 2017 - link
It's shameless to publish that process slide and not point out the big fat lie. Intel starts production on 10nm this year and ships in volume next year, if there are no complications.
TSMC starts 7nm production in Q4 and ships in volume in Q1-Q2, more Q2 really.
Intel's lead is a few months at best, you know it, we know it, but you decide to just publish the lie.
lilmoe - Friday, February 10, 2017 - link
Shameless indeed. Not to mention the "density" claim from the entire blogosphere. It turns out that Ryzen chips on Samsung/GF 14nm will have a smaller surface area than equivalent Intel parts, all while having larger L2. As a matter of fact, an equivalent amount of L2 and L3 cache on Samsung/GF 14nm is significantly smaller. I'm putting money on size not being the only false claim. Leakage? Hmmm.
This lead claim by Intel, and by those reporting on it, is getting ridiculous. Intel's process nodes are nowhere near as superior as everyone claims them to be.
BurntMyBacon - Friday, February 10, 2017 - link
Not sure how the density ends up higher when every reported feature size measurement ends up larger: http://www.eetimes.com/document.asp?doc_id=1331317...
Of particular note is the 6T SRAM cell, which is 0.0806 µm² vs competitor A's 0.0588 µm². The fact that the L2 and L3 caches have better density in Zen despite the universally larger feature sizes in the same table is clearly not a process advantage, but likely a difference in layout. It is entirely possible that Intel chose to use a less dense layout to increase speed and/or decrease latency.
Intel is most certainly painting their process in the best light possible, but let's not pretend that TSMC, Samsung, GF, et al. aren't doing the same thing. It is quite the rosy theory that Intel's 14nm process is equal to Samsung's 10nm. However, it would also be a mistake to assume that Samsung's 14nm process is equal to Intel's 14nm process.
That all said, process nodes have ceased bringing the kind of frequency gains that they once did. Efficiency gains are also being offset by increased leakage on smaller process nodes. Even cost savings aren't quite as great due to higher defect rates. Getting more transistors on a die does allow for some nice performance and/or efficiency gains, but when defect rates limit your die size, then similar transistor gains can be achieved on the previous process node at the cost of more die area. The point is, while being several process nodes behind, as AMD remains until Zen launches, is still a significant disadvantage, being a single or half a process node behind doesn't have the same meaning that it once had.
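To make the die-size/defect-rate point concrete, here is a minimal sketch of the classic Poisson yield approximation; the defect density and die areas below are illustrative assumptions, not figures for any real process:

import math

# Poisson yield approximation: yield ≈ exp(-D0 * A),
# where D0 is defect density (defects/cm^2) and A is die area (cm^2).
D0 = 0.25                          # assumed defects per cm^2 on an immature node
for area_mm2 in (100, 200, 400, 600):
    area_cm2 = area_mm2 / 100.0
    y = math.exp(-D0 * area_cm2)
    print(f"{area_mm2:4d} mm^2 die -> ~{y * 100:.0f}% yield")
# ~78%, ~61%, ~37%, ~22%: the same defect density punishes big dies hard,
# which is why large GPUs and server parts tend to come later on a new node.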
name99 - Friday, February 10, 2017 - link
"That all said, process nodes have ceased bringing the kind of frequency gains that they once did. "That's not true.
Compare Apple's performance
A7: 1.3GHz 28nm
A8: 1.4GHz 20nm
A9: 1.8GHz 16nm FF
A10: 2.3GHz 16nm+ FF
A9 and A10 are respectable frequency improvements, allowed in part by process improvements. A8 does not look like a great frequency improvement, but it was an overall 25% performance improvement (enabled in part by higher density allowing for a better micro-architecture), and it took longer to hit throttling temperatures than the A7.
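For reference, the generation-over-generation clock gains implied by the frequencies listed above work out like this (simple arithmetic on the quoted numbers, nothing more):

# Clock gains from the frequencies quoted above (GHz).
clocks = {"A7": 1.3, "A8": 1.4, "A9": 1.8, "A10": 2.3}
names = list(clocks)
for prev, cur in zip(names, names[1:]):
    gain = clocks[cur] / clocks[prev] - 1
    print(f"{prev} -> {cur}: +{gain * 100:.0f}% clock")
# A7 -> A8: +8%, A8 -> A9: +29%, A9 -> A10: +28%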
I'd say the issue is not that process doesn't allow for improved single thread performance; it's that different actors have different goals and constraints.
Intel's biggest constraint is that it has SUCH a long lead time between the start of a design and when it ships that they have zero agility. So they can start down a path (of, say, eight years) where it looks like the right track is
- keep reducing power for mobile and desktop, but don't care much about performance because there are no competitors
- optimize the design for the server market and minimally port that down to mobile/desktop (again because there are no competitors)
But when the world changes (
a] Apple showing that single-threaded performance isn't yet dead
b] long delays and slippages in process rollout)
they're screwed because they can't deviate from that path. So they've built up a marketing message around an expected performance cadence, and didn't have a backup marketing message prepared.
They're also (to be fair) providing constantly improving performance through increasing turbo frequencies, and being able to maintain turbo for longer BUT once again, their marketing is so fscked up that they're unable to present that message to the world. They have not built up a corpus of benchmarks that show the real-world value and performance of turbo, and scrambling to do so today would look fake.
So I would not say the fault is that process improvement no longer deliver performance improvements. I'd say that Intel is a dysfunctional organization that has optimized its process nodes and its designs for the wrong things, is unable to change its direction very fast, and is unable to even inform the public and thus sell well the improvements it has been capable of delivering.
fanofanand - Friday, February 10, 2017 - link
So you are suggesting that Apple, in its 4th or 5th year of making CPUs, would have the same amount of low-hanging fruit as Intel in its 40th year (or whatever)? I think you WAY oversimplified this.
name99 - Friday, February 10, 2017 - link
I'm suggesting that Intel has bad incentives in place, and terrible strategic planning. Apple already matches Intel performance at the lower power levels, and is likely to extend that all the way up to the non-K CPUs with the A10X. And that, as you say, after just a few years of CPU design.
Doesn't that suggest that one of these companies is being pretty badly run? I've explained in great detail why it is Intel --- how they planned to exploit the gains of better process was dumb, and then they had no backup plan when even those process gains became unavailable.
You're claiming what? That CPU design has reached its pinnacle, and that the ONLY way to further exploit improved process is through more cores and slightly lower energy/operation?
What would it take to change your mind? To change my mind, I'd have to see Apple's performance increases start to tail off once they're matching Intel. Since they're already at Intel levels, that means their performance increase has to start tailing off to <10% or so every year starting with the A11. Do YOU think that's going to happen?
IBM, to take another company, has likewise not hit any sort of performance wall as process has improved. They've kept increasing their single-threaded performance for POWER even as they've done the other usual server things like add more cores. They've not increased frequency much since 65nm, but they have done a reasonable job (much better than Intel) of increasing IPC.
Once again, you can spin this as "IBM was behind Intel, so they still have room to grow" and once again, that might be true --- none of us knows the future. But the pattern of the recent past is clear: it is INTEL that has had performance largely frozen even as process has improved, not everyone else. It has not yet been demonstrated that everyone else will slow to a crawl once they exceed Intel's single-threaded performance levels.
Nagorak - Monday, February 13, 2017 - link
By stalling so much since Sandy Bridge, Intel really gave its competitors a lot of chances to catch up. Plenty of people who have a Sandy Bridge processor see no reason to upgrade, and it's six years later!
tygrus - Friday, February 10, 2017 - link
"6t SRAM which is 0.0806 mm[s]2[/s] vs competitor A's 0.0588 mm[s]2[/s]"Isn't 0.0806 > 0.0588 !
Competitor A is actually smaller per cell, therefore more dense and possibly with smaller features. The other reason the overall cache area can increase is by having smaller cache lines (and more of them), higher n-way associativity and thus more complex selection and load circuitry, more bits for error correction, and more precode and bits for sharing/consistency. What really matters is how they perform with real workloads.
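As a rough illustration of what the two quoted cell sizes imply before any of those overheads, the raw bit-cell area of a 512 KB array works out as follows; the labels follow the discussion above, and this is back-of-the-envelope arithmetic only:

# Raw 6T-SRAM bit-cell area for a 512 KB array, using the two cell sizes
# quoted above (um^2 per bit cell). Tags, ECC, decoders and sense amps are
# ignored, so this is only a lower bound on array area.
bits = 512 * 1024 * 8
for label, cell_um2 in (("Zen table value", 0.0806), ("competitor A", 0.0588)):
    area_mm2 = bits * cell_um2 / 1e6   # 1 mm^2 = 1e6 um^2
    print(f"{label}: ~{area_mm2:.2f} mm^2 of raw 6T bit cells")
# ~0.34 mm^2 vs ~0.25 mm^2: competitor A's cell is ~27% smaller per bit.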
Laststop311 - Friday, February 10, 2017 - link
If Intel can get 15% better single-threaded performance on the top 6-core/12-thread mainstream i7 vs the Kaby Lake 7700K, then I don't care if it's on the same node. That's a good jump for Intel. A 5GHz OC 6-core/12-thread mainstream CPU at the $350 mark is what Intel needs to not look completely terrible in multi-threaded apps vs Ryzen at this price point. AMD's pressure is already pushing Intel into gear. Thank you, AMD, for making Intel release its better technology.
Laststop311 - Friday, February 10, 2017 - link
I will still be holding out with my 4.13GHz 6-core/12-thread i7-980X 32nm LGA 1366 Gulftown platform, with 12GB of 2400MHz CAS 9 DDR3 triple-channel RAM and a recently replaced GPU: a used Sapphire Tri-X OC edition with some custom work (repasted with Coollaboratory Liquid Ultra liquid-metal TIM and a custom, larger 3-slot aluminum heat sink with copper heat pipes, larger heat sinks for the VRMs and better contact with the RAM, and 2x Noctua NF-P12 120mm fans cooling the GPU open-air style), with all but 64 stream processors software-unlocked (all but 1 CU fully unlocked), giving it 4032 of 4096 stream processors running at 1140MHz with HBM at the stock 500MHz. The storage has been upgraded with a Samsung 850 Pro 1TB SATA III SSD in addition to the 4TB HGST 7200 RPM drive.
It's crazy to think that with just a few minor upgrades and a good deal on a custom GPU made by a friend (since he upgraded), this almost exactly 7-year-old PC can still run everything amazingly with max details at my 1920x1080 monitor resolution. I never turn down any settings for any games and I get smooth gameplay, and this system is freaking 7 years old, with the only upgrades being the SSD added to the storage and the graphics card, since I got a really cool fancy Sapphire Fury Tri-X OC with custom work done on it that keeps the temps mainly in the 50s while gaming, sometimes the low 60s while really being stressed. I can actually run a stable overclock at 1190MHz, but then the temps hit the mid-to-high 70s, so I just keep it at the mild 1140MHz overclock and enjoy silent, fluid gaming on a 7-year-old desktop. <---- This is why PC sales are slowing down. It's so bad that my PCIe 2.0 system is not being replaced until PCIe 4.0 comes out in 2-3 years. CPU improvements are so bad that I was able to skip an entire PCIe generation with a PC that'll be 10 years old by then and still playing games smoothly.
And yes, I actually prefer the 1080p resolution because I prefer vertical alignment (VA) panels and their native 3000:1 contrast ratio, with no glow from the corners on a black screen like IPS glow and its middling 1000:1 contrast ratio. I like the 144Hz FreeSync Samsung curved monitor with 1080p resolution and quantum dot tech that allows more colors to be shown and less banding, while keeping the 3000:1 static CR and faster pixel and overall response times with its 144Hz panel, and no tearing with FreeSync. The deeper blacks make everything else look brighter and crisper, and I think this is a bigger benefit than a 1440p IPS monitor.
The only things I miss on my PC are native SATA III, native USB 3.0, M.2 slots, SATA Express, NVMe storage, Type-C USB ports that can run 100 watts of power and 10Gbps of speed, USB 3.1 Gen 2 in general, Thunderbolt, DDR4, a DMI 3.0-connected chipset, and maybe the 5 and 10Gbps Ethernet that is starting to come on higher-end Z270 boards. It's about time 10Gbps Ethernet starts coming standard on mainstream boards, with the Ethernet port just working at 10/100/1000/10000 and auto-detecting what's needed. It's a shame when you can get 3x3 or 4x4 5GHz wireless AC transmitters and receivers that pump out higher bandwidth than 1Gbps Ethernet. They need to quit holding back 10Gbps wired networks, especially now with fast SSDs capable of using that 10Gbps local speed to transfer files on your local network.
But I can't bring myself to buy a PC when PCIe 4.0 is around the corner, and it's a milestone in computing as it will also bring a DMI 4.0 upgrade from the chipset for even faster reads and writes to peripherals not connected directly to the CPU. It will also allow all these M.2 devices that aren't saturating PCIe 3.0 x4 to be used on 4.0 x2 lanes instead of 4 lanes, packing in even more expansion slots, to the point that all our storage is connected to the motherboard and literally the only thing connected away from the mobo will be the PSU, with everything else fitting right on the mobo for some efficient builds.
Breit - Friday, February 10, 2017 - link
You used liquid metal TIM with an aluminium heat sink on the GPU? Good luck with that... oO
Gothmoth - Friday, February 10, 2017 - link
Get a job... really, who do you think cares about your outdated rig? You have way too much time to write such a sermon.
Achaios - Friday, February 10, 2017 - link
Omg Gothmog, I rofl'd.
serendip - Friday, February 10, 2017 - link
I'm not a PC gamer, but I've got a 7-year-old Core 2 Duo laptop that still feels snappy, thanks to RAM and SSD upgrades. A 10% IPC increase per generation translates to nothing much for regular users. I think Intel will need to start renting out its fabs if it only has data centers as its main market, with PC and laptop sales slowing and Intel's mobile presence being non-existent.
A5 - Friday, February 10, 2017 - link
1080p sucks. You should get a new monitor and then see how happy you are with that system.
fanofanand - Friday, February 10, 2017 - link
If you had used the words me, my, or I any more times, I might think you wrote this post just to brag about your wonderful PC... Other than saying how incredulous you are at your system's ability to handle modern games, nothing you said had anything to do with the article. You should just copy and paste this on all the forums you can find, and maybe someone will affirm for you what a great purchasing decision you made in 2010.
PixyMisa - Friday, February 10, 2017 - link
They can't. The same slide showed Kaby Lake as 15% faster than Skylake. Kaby Lake has no IPC improvements over Skylake at all, and only a 5% boost in clock speeds at the high end. Coffee Lake appears to be just another iteration of Skylake, and clock speeds on the 6C chips will very likely be slower than current 4C, so single-threaded performance will go down.
Coffee Lake will at least make 6C chips mainstream, but don't look for anything significant from Intel until 2020.
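The arithmetic behind that objection is simple: single-thread performance is roughly IPC times clock, so if the figures above are right, a 5% clock bump with flat IPC cannot produce a 15% gain:

# Single-thread performance ~= IPC * clock, using the figures claimed above.
ipc_gain = 1.00      # no IPC improvement claimed for Kaby Lake over Skylake
clock_gain = 1.05    # ~5% higher clocks at the top end, per the comment above
print(f"implied single-thread gain: +{(ipc_gain * clock_gain - 1) * 100:.0f}%")  # +5%, not +15%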
Meteor2 - Sunday, February 12, 2017 - link
The 6C chips will just be bigger dies as process yields improve.
Meteor2 - Sunday, February 12, 2017 - link
I agree whole-heartedly with your original comment. AMD is finally providing competition.
TheJian - Friday, February 10, 2017 - link
How is it possible they have enjoyed a 3-year lead at 10nm when Samsung is putting out a phone or tablet on 10nm in a month? Even from their own chart there wasn't a 3-year lead vs. others' 14nm. Their own chart actually shows the other guys have a lead on them at 10nm... LOL. They will likely be beaten to 7nm also. If they hadn't thrown away 12bil+ on mobile losses, they would have spent that on R&D at the fabs. This is pretty much the same as AMD spending on consoles etc. instead of CORE products like CPU/GPU (which for Intel means FABS, which lead to great chips).
Yojimbo - Friday, February 10, 2017 - link
Because there is no standard on what is "10nm". Their chart is based on areal transistor density, I think. They are claiming that competitors' 10nm processes will have about the same density as their 14nm node. There are other metrics to consider as far as effectiveness of a process, but if their chart is right, then in terms of that one metric they have a 3-year lead.
Mondozai - Friday, February 10, 2017 - link
Not really. As 'jjj' pointed out, TSMC will be out with 7 nm by Q4 of this year and (realistically) will be shipping in volume next year. TSMC's 7 nm is equivalent to Intel's 10 nm. So Intel's lead is maybe half a year at best.
witeken - Friday, February 10, 2017 - link
How can TSMC be on 7nm next year when they don't even have 10nm out yet? People are so funny: at the same time saying that Moore's Law is dead, but then believing TSMC will hop from one node to the next in a matter of a few quarters. The only high volume 7nm you're going to see from TSMC in 2018 will likely just be the iPhone, just like 20nm in 2014.
Michael Bay - Friday, February 10, 2017 - link
Maybe in their dreams. TSMC nanometers are exactly like Samsung's.
Meteor2 - Friday, February 10, 2017 - link
TSMC are, basically, full of shit.
name99 - Friday, February 10, 2017 - link
TSMC have delivered on their publicly announced roadmap every year over the past few years. Please tell us what they have promised (ie PUBLICLY announced, not what you read on some rumors site) and not delivered.
Meteor2 - Friday, February 10, 2017 - link
It's nothing to do with their past performance. It's the fact they're claiming there's a clear route down to 3 nm, while everyone else says 7 nm is going to be a real challenge with no clear path after. Both can't be right.
name99 - Friday, February 10, 2017 - link
Try to distinguish between what TSMC ACTUALLY say and what various rumors, so-called journalists, fan-boys, and straight-out nuts claim... Here's TSMC's official position:
http://www.tsmc.com/english/dedicatedFoundry/techn...
10nm is on the map, 7nm is on the map. 5nm counts as long-term research with no date attached to it yet. Nothing about 3nm.
The nearest TSMC has said to what you claim is that they are NEGOTIATING about ONE DAY buying a piece of land that will EVENTUALLY (no earlier than 2022) produce chips that will at some (perhaps later) point be 5nm and 3nm.
That's hardly a claim of a "clear route"...
http://asia.nikkei.com/Business/Companies/Taiwan-s...
Meteor2 - Sunday, February 12, 2017 - link
http://www.eetimes.com/document.asp?doc_id=1330971
Meteor2 - Sunday, February 12, 2017 - link
"TAIPEI — Taiwan Semiconductor Manufacturing Co. (TSMC) said that it plans to build its next fab for chips made at the 5-nm to 3-nm technology node as early as 2022 as it aims for industry leadership."“Taiwan’s minister of science and technology (Yang Hung-duen) met TSMC a few months ago, so we took the opportunity to present to him our future plans,” said director of corporate communications Elizabeth Sun, confirming reports in the local press citing Yang. “We wanted him to know that we need a piece of land, because the other science parks in Taiwan are pretty full.”
"EUV Still Uncertain
The company said that it is still undecided on whether it will adopt extreme ultraviolet (EUV) lithography for 5 nm and 3 nm.
“Our current plan is to use EUV extensively for 5 nm,” Sun said. “That’s under the assumption that EUV can be ready.”
The company said that it will ramp 7 nm in 2017, followed by 5 nm in 2019, to support smartphones and high-end mobile products with new features, including virtual reality and augmented reality."
--so, their Director of Communications is claiming a path to 3 nm, while at the same time admitting that they need EUV, but that it may not be ready. As I said, full of shit.
Yojimbo - Friday, February 10, 2017 - link
Why is TSMC's 7nm equivalent to Intel's 10nm? That statement is in direct opposition to the reality of the current state of fabrication processes. There is very little chance they are "equivalent". If they are equivalent under some particular metric, then tell us which metric you mean.
name99 - Friday, February 10, 2017 - link
The claim that they are "equivalent" rests on the fact their standard cells (TSMC 7nm and Intel 10nm) have essentially the same dimensions: https://www.semiwiki.com/forum/content/6498-2017-l...
What does this actually mean? It means they have the same DIMENSIONS, not that they have the same PERFORMANCE. Obviously the details are different, from the shape of the fins to the materials used, to the quality of the design and layout algorithms.
Is one better than the other? Depends on what you prioritize in defining "better". Certainly of the two premier CPU design teams in the world
- Intel hits higher frequencies (and thereby higher single-threaded performance) on their process
- Apple hits substantially better performance at low powers on TSMC's process.
If Apple one day gets round to releasing the mythical ARM-based Mac (and so has a power budget of, say, 65W or so to play with, rather than the 12W or so of an iPad SoC [or the 130W or so of a K-class Intel CPU]), we might get a more apples-to-apples comparison of just what Apple's design and TSMC's process can do with a higher power budget.
Yojimbo - Friday, February 10, 2017 - link
"What does this actually mean? It means they have the same DIMENSIONS, not that they have the same PERFORMANCE."Yes, so therefore they are not "equivalent" because to be "equivalent" they must have the same EVERYTHING, or at least EVERYTHING THAT MATTERS.
Even Intel didn't go so far to try to say their 14nm and TSMC's 10nm were equivalent, and you presumably do not even work in the marketing department for TSMC.
Meteor2 - Thursday, February 16, 2017 - link
That's a good link. As we say, TSMC's 7 nm is equivalent (in all meaningful senses) to Intel's 10 nm. Intel is manufacturing 10 nm now. Let's see if TSMC makes good on the promise of their 7 nm next year, when they've only begun their 10 nm this year. I'm not saying it's impossible, just that it's unlikely. And TSMC will not make another node jump, down to what they call 5 nm, in just two years in 2019. That's not going to happen.
fanofanand - Friday, February 10, 2017 - link
AMD used their revenue from consoles to keep them afloat while they put the finishing touches on Zen and Polaris. I agree Intel could have repurposed the cash they wasted on mobile, but if you think AMD doing consoles was bad for them, I'm glad you aren't managing my money.
HollyDOL - Friday, February 10, 2017 - link
Huh, I really don't know what to think about this. Guess before I pass any judgement I'll wait for Ian's review when he gets the chips.
Frenetic Pony - Friday, February 10, 2017 - link
So how many more billions of dollars is the industry going to dump into EUV/silicon before they actually throw money at graphene or black phosphorus or something?
witeken - Friday, February 10, 2017 - link
It takes many years for things to go from discovery to research to development to production. Diane Bryant said it took 16 years for silicon photonics to go through all of this.
Amandtec - Friday, February 10, 2017 - link
The last few nodes have shown small performance increases but big efficiency increases. Efficiency matters far more in the data centre than it does in your laptop, where the screen is using most of the power most of the time.
Bronek - Friday, February 10, 2017 - link
As Herb Sutter pointed out over 10 years ago, the free lunch is over. We are not going to get any significant performance improvements per core any more. There are too many constraints on CPU speed now, and none of them can be removed by sheer number of transistors: there is switching speed, which no longer increases with a new process, there is memory latency, etc. The only way vendors can now sell their chips as "performing better than the competition" is by providing more cores, more threads per core (see IBM POWER8 and POWER9) and ensuring these work their best even under full load. Providing better and faster interconnects is also an option. But this also means that applications will not get any faster unless they tap into a multi-threaded execution environment or make more use of connected accelerators, which is often tricky. This means very little for your average gaming desktop machine, since the only accelerator you need is a GPU, and the interconnect (i.e. PCIe) is rarely a bottleneck for your typical application, i.e. a game. I would say that this category of CPUs has already reached a plateau. Which is good news, because that means the fight has moved to data centers, where competition to Intel is long overdue.
fanofanand - Friday, February 10, 2017 - link
Bingo
A5 - Friday, February 10, 2017 - link
Well, the screen using more power than the CPU block is a fairly recent development due to said efficiency gains. There was a time not that long ago where that was definitely not the case.
BrokenCrayons - Friday, February 10, 2017 - link
On my Compaq 486DX2 @ 66MHz and my Texas Instruments Pentium @ 90MHz, the screen was the highest-demand component, and neither screen was higher than 640x480 or larger than 12 inches. The screen's power consumption is absolutely not a new problem. Both of those laptops were passively cooled, which speaks to their low TDP and, by inference, low electrical power demand. The same was probably true of my monochrome-panel 386SX @ 16MHz, which was my very first laptop.
fanofanand - Friday, February 10, 2017 - link
I'm surprised there were tools to measure power consumption during the Mesozoic Era. Kidding aside, are you just spouting crap or were you that hardcore even back then? Serious question despite my irreverent tone.
BrokenCrayons - Friday, February 10, 2017 - link
Yes, in those days we didn't have digital multimeters or even electricity. Why, I remember having to light the hallway lamps with a smoldering bit of wood from the fireplace, and we only had the fireplace installed to replace the pit on the ground surrounded by rocks that we used to huddle around to stay warm. No, really, I didn't do any of my own measurements. However, battery life increased pretty dramatically when the laptops were running off external monitors with the lids closed. I was usually pulling about an hour and thirty minutes on the P90 and was closer to 2:20 on an external screen.
There were quite a few publications that supported that assertion; print magazines like Boot (before it became Maximum PC) and Computer Shopper (remember those huge phonebook-sized catalog/magazines?) both made that claim about active matrix LCD screens when the technology was new and replacing those smeary old passive matrix models. I specifically remember reading about it back in those days and debating whether or not I should get the Texas Instruments TravelMate because of its active matrix screen. Not that passive matrix LCD backlights were that much more efficient... they weren't.
Danvelopment - Friday, February 10, 2017 - link
Tick tick tick tick BOOM
Danvelopment - Friday, February 10, 2017 - link
Process, Architecture, Optimisation, Optimism, Panic
BurntMyBacon - Friday, February 10, 2017 - link
FTW!
creed3020 - Friday, February 10, 2017 - link
That is PERFECT.
Danvelopment - Friday, February 10, 2017 - link
Here's hoping we don't get to Salt Lake
BurntMyBacon - Friday, February 10, 2017 - link
Yeah. That would make people a bit ... salty.
Meteor2 - Friday, February 10, 2017 - link
So I think what we're saying is... Intel will begin 10 nm production this year, but it won't be used in their next consumer products, unlike 14 nm which went into low-power consumer stuff first. The next gen of consumer products will be on 14 nm again (they'd be better off not launching a product every year). Instead, some currently unknown chip designed for data centres will make use of it. Probably some small/wimpy-cored multiple-dies-on-one-chip-module thing, because yields will be low. Right? I would guess it will be the next gen of the Xeon D.
Meteor2 - Friday, February 10, 2017 - link
On top of that, the rumours and leaks contradict each other. The 8th generation products could be code-named Cannonlake or Coffee Lake, or even both. We don't know. We don't know when they'll launch, which power class will launch first, what it will be called, or how many cores will be present. Methinks Zen is shaking things up. I guess we'll have to wait for some consistency to develop in the rumours, or even wait for official announcements from Intel!
Meteor2 - Friday, February 10, 2017 - link
Actually some of them make sense. Coffee Lake will be 8th gen consumer >15W on 14 nm, as previously rumoured. It launches late this year. Intel is aiming for 15% performance gains and they'll have up to 6 cores. Cannonlake is 8th gen consumer <15W on 10 nm. It was thought this would release late this year. What we appear to be seeing is it being pushed back so the first 10 nm dies can go into a currently unknown data centre product.
7 nm (called 5 nm if you're TSMC) will come around 2020, probably with EUV, and possibly might be the last node shrink for a very long time as it will be so expensive to design, develop and produce chips at 7 nm.
dstarr3 - Friday, February 10, 2017 - link
So why are die shrinks still so important exactly? I get why it was important in the past, but at this point, with things already so small, where do the diminishing returns begin? Considering shrinking things further has been hugely problematic, what exactly is there to be gained?
extide - Friday, February 10, 2017 - link
Basically because all the competitors are still doing shrinks; if you don't do them too, you fall behind. I mean there is a lot more to it than that, but that's really the simple answer -- it is to stay competitive.
name99 - Friday, February 10, 2017 - link
Here's what I would say:
At a TECHNICAL level, die shrinks are less important than in the past. Performance improvements mostly derive from material improvements and new transistor designs, not from the fact that the lithography is drawing smaller features.
BUT
You can't just roll out each new improvement (a different high-K material, a new idea for annealing contact metals, a higher-aspect-ratio fin, etc.) one at a time as it gets perfected. Designs are optimized for a particular process and don't expect random small aspects of that process to keep changing every few months.
SO
All the improvements over the past year or so are kept in the lab, forced to play nice together, then rolled out simultaneously as a new "node". Sometimes this node comes with smaller features (eg TSMC 20nm to 16nm), sometimes it "just" reflects material and transistor design improvements (+ and ++ nodes like TSMC 16nm+).
.......................................
There's a sort of weird ignorance+snobbery on the internet (though god knows what the people involved have to be snobbish about...) that thinks these "+" nodes are not "real" improvements. This CAN be the case, but they can also be substantial improvements (as in the case of the 16nm+ node). Partly there's an issue of just how much was improved by the various tweaks; partly there's an issue of how well prepared designers were for the new node and so could take advantage of it. The foundry customers seem to be well informed about future plans ahead of time, and to do an adequate job of exploiting new designs. Whereas Intel seems to have stumbled into the "Optimization, Optimization2, ...?" scheme unprepared and with no backup plan, and their designs have such a long lead time that they have not been able to really exploit the process changes (regardless of this "Optimization" claim).
So the "+" that's the Kaby Lake 14nm+FF node appears to be essentially
- the exact same CPU design
- pretty much the exact same process
- JUST a slight relaxing of how close some transistors are to each other, meaning that they don't interfere with each other as much, and so allowing for a minor boost in frequency.
This is obviously a completely different (and vastly less impressive) sort of "optimization" than the sort of optimization that has Apple improving frequency by 30%, performance by 50%, and reducing energy substantially when they move from 16nmFF to 16nm+FF on TSMC.
But, as I said, that's the difference between a planned and well-executed constant stream of improvement, and a mad-scramble for something, anything, when things don't go the way you planned.
Intel can't be faulted for having their processes delayed in their introduction --- issues happen. They CAN be faulted for apparently having absolutely zero back-up plans in the event that something goes wrong. I'm damn sure that, eg, both TSMC and Apple have backup plans B, C, and D in the event that something unexpected happens to their schedules.
fanofanand - Friday, February 10, 2017 - link
Apple is taking stock ARM IP and tweaking it to widen things up. Apple is not home to the Gods of Engineering as you repeatedly claim. Apple's CPUs are nowhere near as complex as Intel's, and their instruction set is nowhere near as versatile as x86. Get off your knees and stop worshipping at the altar of Apple; the innovation brought by Intel dwarfs the "rectangular device with rounded corners" crap that Apple does.
Meteor2 - Sunday, February 12, 2017 - link
No sign of diminishing returns yet. Each shrink still provides performance and efficiency gains. It's just that complexity of design and production is increasing exponentially with each node nowadays, and thus cost. There's every chance consumers won't pay the prices necessary for 5 and 3 nm to be built.
name99 - Friday, February 10, 2017 - link
"An image posted by Dick James of Siliconics from the livestream shows Intel expects to have a three-year process node advantage when its competitors (Samsung, TSMC) start launching 10nm:"
So let's see. In the REAL world
- Apple on TSMC is already a match for Intel at equivalent power. So much for that 3yr node advantage. Where exactly do we see it pay off?
- Intel was telling us at the Kaby Lake launch about their new comfort-fit transistors. So WTF is it? If logic density is the most important metric possible, why did they go backwards with their "relaxed" KL transistor layout?
- What we expect in a month or two is an A10X manufactured on TSMC 10nm, which is likely a reasonable match for pretty much any non-K Intel core. (If we assume previous scaling, we'd expect this to be at around 3.4GHz, with a 25% IPC advantage over Intel.)
- Next year (maybe Q1, maybe as late as Q3) we expect A11X on TSMC's 7nm.
So yeah, sure, TSMC 16nm+ is not as dense as Intel 14nm. And sure TSMC's 7nm will not be as dense as Intel's 10nm. BUT
- Intel seem unable to extract a performance advantage from their process
- TSMC will be shipping real A11Xs in significant volumes on 7nm at the same time that Intel will be shipping god knows what on 10nm, in volumes that appear to be calculated to make the slow slow slow rollout of Broadwell look like a rocket.
If this is the best story Intel can tell its investors, good luck those of you stuck with them (either via stock possession or unable to switch chips).
lefty2 - Friday, February 10, 2017 - link
TSMC's 7nm is *exactly* as dense as Intel's 10nm, in terms of metal/silicon pitches anyway: https://www.semiwiki.com/forum/content/6477-iedm-2...
fanofanand - Friday, February 10, 2017 - link
Apple is not a "match for Intel at equivalent power" GTFO with that crap. Can't tell if you are just trolling or a paid shill.name99 - Friday, February 10, 2017 - link
I'm not interested in re-litigating this. If you still consider it an unproven proposition, there's nothing I can do to cure your ignorance. The A9X was comparable with Intel over a year ago:
http://www.anandtech.com/show/9766/the-apple-ipad-...
A10 was a 50% improvement over A9, and we'd expect at least the same sort of improvement for A10X over A10, with a likely additional 20% boost or so from the transition to 10nm.
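Spelling that projection out (this just compounds the commenter's own claimed numbers; it is not a measurement):

# Compounding the figures claimed above: A10X expected to improve over A10 by
# about as much as A10 improved over A9 (~50%), plus ~20% more from the 10nm move.
base_a10 = 1.0                       # normalized A10 performance (hypothetical unit)
projected_a10x = base_a10 * 1.5 * 1.2
print(f"projected A10X vs A10: ~{projected_a10x:.1f}x")   # ~1.8x, per the claim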
doggface - Monday, February 13, 2017 - link
Your linked article has the conclusion that Intel's fastest Core m is significantly faster. So it does nothing to bolster your argument, except to say it may be close, but not in front, on an iPad vs a MacBook. To then say "well, we don't have benchmarks, but Apple's stated improvements mean they are faster"... needs a citation.
Wilco1 - Saturday, February 11, 2017 - link
Eh? I guess you also still believe Atom is great and competitive? Intel shill? It's a well-known fact that current high-end phones outperform all low-power Skylake laptops and Chromebooks. Eg https://arstechnica.co.uk/gadgets/2017/02/samsung-... has the fastest Chromebook scoring 3230 on Geekbench 3. The current iPhone does 3567 (https://browser.primatelabs.com/geekbench3/8161439...
Now let's see what the A10X does...
Meteor2 - Sunday, February 12, 2017 - link
I thought people had figured out that synthetic benchmarks heavily dependent on single-thread performance had little value? They're interesting, but they don't answer fundamental questions about performance across architectures and nodes. Until we see an Apple Ax chip running x265 or Agisoft PhotoScan or something else, you know, real, we can't make comparisons.
Wilco1 - Sunday, February 12, 2017 - link
Which of the subtests in Geekbench is synthetic exactly? http://www.geekbench.com/doc/geekbench4-cpu-worklo...
I completely disagree that we can't make any comparisons at all - benchmarks do give a pretty good idea of what you can expect. Note that video codecs and image processing say nothing about CPU performance, given they are typically done by dedicated hardware or the GPU.
Single-threaded performance most definitely is quite fundamental. Multi-threaded performance is easy, remember most phones have at least 8 cores nowadays.
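An Amdahl's-law sketch shows why the single-threaded part stays fundamental even with eight cores available (the serial fractions below are arbitrary illustrative values, not measurements):

# Amdahl's law: speedup = 1 / (serial + (1 - serial) / cores).
cores = 8
for serial in (0.05, 0.20, 0.50):
    speedup = 1 / (serial + (1 - serial) / cores)
    print(f"{int(serial * 100):2d}% serial work -> {speedup:.1f}x speedup on {cores} cores")
# ~5.9x, ~3.3x, ~1.8x: the serial portion (single-thread speed) dominates quickly.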
Meteor2 - Thursday, February 16, 2017 - link
Good link, I spent a while looking for that information but only found it for GB3. I still wouldn't rate those workloads as 'real' though, simply because they're canned tests, not actual workloads you might perform daily.
Meteor2 - Friday, February 10, 2017 - link
I never thought people would become fanboys of IC foundries, of all things. Not sure I should play along but... have you got any sources for your statements? For example, Apple on TSMC being competitive with Intel? I wasn't aware of any product overlap. It's a Core M inside a MacBook, after all.
And TSMC 10 nm being used for an A10X? Er, no. As reported here, it's all being used for the Qualcomm 835 going into the Samsung S8 launching in April. What new Apple device is due to launch this side of September?
Meteor2 - Friday, February 10, 2017 - link
Oops, it's the Samsung foundries being used for the 835, not TSMC.
name99 - Friday, February 10, 2017 - link
"Oops, it's the Samsung foundries being used for the 835, not TSMC." No shit...So your point is? TSMC have not stated that they're producing the A10X (of course not, they're not going to piss off their largest customer). What they HAVE said is that 10nm was in risk production last year, and is expected to deliver commercial products and TSMC revenue in Q1 this year.
There is also various circumstantial evidence (not to mention the historical pattern) that suggest what I've said, for example:
https://www.fool.com/investing/2016/09/22/your-fir...
Meteor2 - Sunday, February 12, 2017 - link
I thought you said one shouldn't listen to rumour sites.
lefty2 - Friday, February 10, 2017 - link
Umm, this article doesn't really address the confusion about multiple architectures that seem to be released at the same time. If Intel's 8th gen Core is on 14nm, what is 10nm Cannon Lake? 9th gen? And what 8th gen Core products are being released?
Meteor2 - Friday, February 10, 2017 - link
See my post above. It looks like Cannonlake and Coffee Lake will have the same architecture, but be differentiated by power class and process node.
Xajel - Sunday, February 12, 2017 - link
I'm more interested in the mobile parts of Coffee Lake: the U series will finally be 4 cores, and more importantly the H series will have 6 cores!!... I hope they really come in 2018...
justincranford@hotmail.com - Monday, February 13, 2017 - link
Why is Coffee Lake missing from this article? Preliminary information about post-Kaby Lake already said process will diverge. Cannon Lake 10nm will be released side-by-side with Coffee Lake 14nm.
https://en.wikipedia.org/wiki/Coffee_Lake_(CPU)
Only high-margin markets (ex: laptops) will move to 10nm (Cannon Lake). Lower-margin markets (ex: mainstream desktops) will remain on the 14nm node (Coffee Lake). That will make four (!) 14nm iterations in a row (Broadwell, Skylake, Kaby Lake, Coffee Lake). Yuck!
The only saving grace for Coffee Lake 14nm is increased core count for mainstream desktops. Higher core counts from Intel are only available in enthusiast ($$$) lines (ex: Broadwell 14nm). Coffee Lake 14nm will bring higher core counts (ex: 6 core/12 thread, or higher) down from enthusiast desktops to mainstream desktops. Intel will only be about 9 months behind AMD Ryzen for 8 core/16 thread mainstream desktops when Coffee Lake 14nm is launched. I hope it is more than a recycled Broadwell 14nm.
cocochanel - Monday, February 13, 2017 - link
Boy, that wafer looks sexy!!!