22 Comments

  • mrdude - Monday, May 5, 2014 - link

    Anand, any word on when this new x86 core is supposed to hit the market? Is it the same 2016 time frame or afterward?

    AMD has been really light on any sort of details the last couple of years. We really don't even know what to expect in 2015...
  • gostan - Monday, May 5, 2014 - link

    In AMD's spacetime, it's always +2 yrs. So 2016 = ~2018.
  • Elixer - Monday, May 5, 2014 - link

    If you go by the "old" AMD timeline, then it will ship in the last quarter of 2018, with a thunk.
    It will still be at least 2 years away though, unless they really get their game together.
  • Kevin G - Monday, May 5, 2014 - link

    They've reportedly been working on a new core for a while, basically since shortly after Jim Keller was hired back in 2012. This is the first public admission that their Bulldozer heritage is going to be bulldozed over.
  • The_Assimilator - Monday, May 5, 2014 - link

    "We really don't even know what to expect in 2015..."

    Yes we do - "new" crappy APUs, a lot of statements about how ARM is TEH FUTURE, and at least one delay to the aforementioned crappy APUs.
  • Anders CT - Monday, May 5, 2014 - link

    @The_Assimilator

    It would probably be more correct to say: "ARM is TEH PRESENT".
  • TristanSDX - Monday, May 5, 2014 - link

    "Jim Keller added some details on K12. He referenced AMD's knowledge of doing high frequency designs as well as "extending the range" that ARM is in. Keller also mentioned he told his team to take the best of the big and little cores that AMD presently makes in putting together this design. "
    So it is basically Bulldozer-based crap. Well done AMD, do some tweaks, rename it as a 'new core' and try to sell it again. But customers are not that stupid.
  • errorr - Monday, May 5, 2014 - link

    I read that the other way. I think he is talking about a high-frequency ARM design, plus a bunch of Frankenstein-monster parts of Bulldozer IP for the new x86 design.
  • testbug00 - Monday, May 5, 2014 - link

    Wonder which company took the best of a great mobile chip and a high-frequency chip to make a chip that DESTROYED their competition in every (I believe it was every) category?

    Hm... Intel. Now, Intel executes far better than AMD does (although AMD has been getting better, the improvement has not been huge), but there's no reason AMD could not take BD + the cat cores and get a chip that captures the positives of both, like the original Core chips did.
  • MLSCrow - Monday, June 2, 2014 - link

    TristanSDX - If you take the "best" of the big and little cores, the Bulldozer arch in its entirety is not something that would make the cut.
  • Alexvrb - Saturday, August 2, 2014 - link

    Not really. Granted, we're going to see at least one more BD-derived design between now and then just to keep things moving. But the new x86 architecture is going to combine what they've learned from all the big-core designs (including BD iterations all the way through Excavator) with what they've learned from their small-core designs (such as Jaguar and Puma).

    Jim Keller has led the design of some very successful architectures, so I am very interested to see what they cook up. Too bad we won't really see the results until ~2016.
  • Anders CT - Monday, May 5, 2014 - link

    I had somewhat written off AMD. But their Beema/Mullins design looked to have potential, so maybe they can still surprise. Count me as mildly interested.

    And hedging their bets by doing some ARM designs is probably prudent.
  • Tronyeu0802 - Monday, May 5, 2014 - link

    Does this mean that they are making a super chip that can run both Android and Windows? Or just two different cores on one die?
  • mrdude - Monday, May 5, 2014 - link

    The Project SkyBridge announcement was that there will be a single platform that's "pin compatible" between both ARM (A57s) and x86 (Puma+). Presumably this gives AMD the opportunity to reuse IP blocks between the two different chips while allowing OEMs to design a single tablet/device and use either ARM or x86, depending on what they feel is more suitable (think a single tablet with ARM for Android, swapped for x86 for Windows, with everything else remaining the same).

    K12 seems to be a custom ARM design, a la Apple and Qualcomm.
  • Gigaplex - Monday, May 5, 2014 - link

    Two separate chips. You can buy either ARM or x86 for the same socket. And x86 can already run Android.
  • Krysto - Monday, May 5, 2014 - link

    ARM should just go fully ARM, and dump the x86 license (yes, I realize they can't quit x86 overnight, but they can deprecate it. Nvidia is on its way to do that anyway).
  • testbug00 - Monday, May 5, 2014 - link

    You mean "AMD should go fully ARM" I hope.

    Anyhow, assuming the above correction is what you meant, that is silly. As for Nvidia being on its way to get rid of x86... Nvidia never made x86 chips, and, if you mean Tegra... well, Tegra is an abject failure from a fiscal standpoint.
  • jamescox - Tuesday, May 6, 2014 - link

    High-end CPUs are becoming increasingly irrelevant in the consumer space. They are still needed in the workstation/server space, but how many consumer applications are actually CPU limited? As far as I can tell, most games are GPU limited, even at 1080p. I saw a review somewhere where you had to go back to a Core 2 Duo before performance dropped in any really significant way. I suspect that may be due to the age of the platform (DDR2 + PCIe 1.1) rather than the actual CPU core.

    If you look at CPU cores themselves, they are tiny. Haswell is actually very large compared to low-power ARM cores and such, yet I think it is only around 14.5 square millimeters for a single core, and I believe that includes the 256 KB L2 (it's hard to find data for an individual core). It is definitely time for the CPU to move onto the GPU die. After the caches, the largest component is probably the FPU, due to all of the vector extensions (MMX, SSE, etc.). As far as I know, Intel is still planning on expanding the vector capabilities in the CPU. This makes little sense to me. If they can tie the GPU units in closely enough, then any vector code should just use the GPU (see the scalar vs. SSE sketch after this comment). For the CPU, it seems like just a few low-latency scalar FP units would be the way to go, with all of the vector operations executed on GPU units.

    We are currently being held back by the form factor, although I have wondered if anyone will make a laptop (or something other than a console) with GDDR5 connected directly to an APU (it would probably need to be soldered to the board as well; see the rough bandwidth numbers below). Many problems disappear once they can start stacking memory chips in the APU or GPU package. All that is needed for APUs to take over is a really high-speed interconnect to allow multiple APUs to work together. Nvidia is working on this with NVLink, although only for GPUs; they still need an external CPU in the PC/workstation space. Intel is really working on the same technology as AMD; they are just at different points. Intel has a good CPU but not that great of a GPU yet. AMD has sufficient CPU performance and good GPU tech. Intel still has the marketing position of having the best single-thread performance, but this isn't anywhere near as important as it used to be. Enthusiasts probably place too much importance on it.

    For the server space, Intel tried to switch ISAs with IA-64, which is now dead. They are stuck with x86, since they failed to make IA-64 fly, and they will not use ARM. The question is, for high-density servers, will x86 be able to match the performance per watt of ARM cores? AMD will be able to put a large number of ARM cores on a server chip, so if they are not too far behind in process tech, this could be a very good solution.
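
    To make the vector point concrete, here is a minimal sketch in C (illustrative code, assuming SSE; not from the comment above). The scalar loop runs on the plain FPU one element at a time; the SSE version adds four floats per instruction, which is the kind of data-parallel work that could in principle be routed to GPU units instead.

        #include <xmmintrin.h> /* SSE intrinsics */

        /* Scalar: one float add per iteration, handled by the FPU. */
        void add_scalar(const float *a, const float *b, float *out, int n)
        {
            for (int i = 0; i < n; i++)
                out[i] = a[i] + b[i];
        }

        /* SSE: four float adds per instruction. Assumes n is a multiple of 4
         * and the pointers are 16-byte aligned, purely to keep the sketch short. */
        void add_sse(const float *a, const float *b, float *out, int n)
        {
            for (int i = 0; i < n; i += 4) {
                __m128 va = _mm_load_ps(a + i);            /* load 4 floats */
                __m128 vb = _mm_load_ps(b + i);
                _mm_store_ps(out + i, _mm_add_ps(va, vb)); /* add and store 4 at once */
            }
        }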

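    For rough scale on the GDDR5 point (illustrative numbers, not from the comment): dual-channel DDR3-1600 tops out around 2 channels × 8 bytes × 1600 MT/s = 25.6 GB/s, while the PS4's APU, with 256-bit GDDR5 at 5.5 Gbps per pin, gets 32 bytes × 5.5 GT/s = 176 GB/s, roughly seven times the bandwidth. That gap is why a GDDR5-fed or stacked-memory APU sidesteps the bandwidth wall.
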
  • lordken - Sunday, April 5, 2015 - link

    @jamescox: you are wrong with your assumption that a Core 2 is good enough for games. I still have an E8400 and it is holding my HD7950 back a bit. There are games where my GPU is at ~50% while both CPU cores are at 100% (and fps is low, so the CPU is clearly holding back the system).

    Also, I don't think that putting RAM on APUs is a good idea, as that would end the flexibility of choosing how much RAM you want in your PC. You would be stuck with whatever is on the APU.

    Where I do agree with you is that CPU functionality should probably be moved onto GPUs, as they seem to be able to provide much higher computational throughput anyway. That's actually what I thought was going to happen when AMD came out with APUs: the GPU moving from a discrete card into the "CPU" socket, with CPU functionality added.
    The only question remaining is what to do with SLI/CrossFire/scalability. Either have motherboards with multiple sockets (something like servers), or drop the standard CPU socket and keep just the current PCIe slots, allowing pretty nice scalability: you would just add another "GPU/APU" card if you need more power, something like server blades.
  • ws3 - Tuesday, May 6, 2014 - link

    So Apple released their first 64 bit custom ARM core in 2013 and AMD is going to do the same in 2016?

    That makes them 3 years late. Why should we expect them to produce a class-leading device? Sure, AMD has experience, but Apple and Qualcomm have a huge head start and proven CPU design teams of their own.
  • Krysto - Monday, May 12, 2014 - link

    I really hope they don't plan on naming them the same way and confusing their customers about each chip's capabilities. Naming both K12 or Opteron would be very stupid. They each need their own brand.
  • The Hardcard - Monday, May 12, 2014 - link

    I don't see how the Bulldozer design inhibits single-threaded performance in and of itself. Boosting instruction throughput requires prioritizing it during core design and allocating the resources needed.

    It would seem to be about the same kind and amount of work regardless of the core. I figured that AMD didn't have the resources to pursue all their goals and maybe gambled that certain efforts would work out.
