Today Intel released its third-quarter 2021 financial disclosures, and there’s one little tidbit in the earnings presentation about its upcoming discrete GPU offerings. Earnings presentations are usually a chance to wave the flag of innovation about what’s to come, and this time around Intel is confirming that its first generation of discrete graphics, based on the Xe-HPG architecture, will be on shelves in Q1 2022.

Intel has slowly been disclosing the features of its discrete gaming graphics offerings. Earlier this year, the company announced the branding for its next-gen graphics, called Arc, and with that the first four generations of products: Alchemist, Battlemage, Celestial, and Druid. It’s easy to see that we’re going ABCD here. Intel did technically state at that disclosure, in August 2021, that Alchemist would be coming in Q1; the reaffirmation of the date today in the financial disclosures indicates that the company is staying as close to this date as possible.

Intel has previously confirmed that Alchemist will be fully DirectX 12 Ultimate compliant – meaning that alongside ray tracing, it will offer variable-rate shading, mesh shaders, and sampler feedback. This will make it comparable in core graphics features to current-generation AMD and NVIDIA hardware. Although it has taken a few years to come to fruition, Intel has made it clear for a while now that it intends to become a viable third player in the discrete graphics space. Intel’s odyssey, as previous marketing efforts have dubbed it, has been driven primarily by developing the Xe family of GPU microarchitectures, as well as the GPUs based on those architectures. Xe-LP was the first out the door last year, as part of the Tiger Lake family of CPUs and the DG1 discrete GPU. Other Xe family architectures include Xe-HP for servers and Xe-HPC for supercomputers and other high-performance compute environments.

The fundamental building block of Alchemist is the Xe Core. For manufacturing, Intel is turning to TSMC’s N6 process. Given Intel’s Q1’22 release timeframe, Intel’s Alchemist GPUs will almost certainly be the most advanced consumer GPUs on the market with respect to manufacturing technology. Alchemist will be going up against AMD’s Navi 2x chips built on N7, and NVIDIA’s Ampere GA10x chips built on Samsung 8LPP. That said, as AMD can attest, there’s more to being competitive in the consumer GPU market than just having a better process node. In conjunction with the use of TSMC’s N6 process, Intel reports that it has improved both power efficiency (performance-per-watt) and clockspeeds at a given voltage by 50% compared to Xe-LP. Note that this is the sum total of all of Intel’s improvements – process, logic, circuit, and architecture – so it’s not clear how much of this comes from the jump to TSMC N6 from Intel 10SF, and how much comes from other optimizations.

Exactly what performance level and price points Intel will be pitching its discrete graphics at is currently unknown. The Q1 launch window makes CES (held the first week of January) a likely venue to say something more.

Comments

  • Spunjji - Monday, October 25, 2021 - link

    RDNA 1 and Turing performance with N6 power characteristics would be fine and dandy, TBH, if the price is right. I'm expecting something a little better, though, with some caveats for driver quirks on release.
  • lightningz71 - Friday, October 22, 2021 - link

    Intel has a LOT to prove with respect to their willingness to support these dumpster fires of branding with long-term driver support. We've been burned by Intel abandoning video drivers in the past (not to say that AMD or Nvidia is completely innocent here, but for most everything you've gotten at least three good years of support), and even recently, with Intel walking away from any support for Kaby Lake-G, as well as abandoning certain Atom-based products within 6 months of release. I'm not going to plunk down one glowing red cent on an Intel video product at today's overinflated purchase prices until they can prove a few things:

    1) They can fix a lot of the bugs in the existing Xe driver for Tiger Lake
    2) They can support one of these cards with continually improving drivers for at least 24 months
    3) They can produce enough cards to be relevant to the market, and have game designers on board to properly support their cards' oddities.
  • cyrusfox - Friday, October 22, 2021 - link

    You mean the Kaby Lake driver that was updated last month...

    If anything, Intel has the best driver support (Windows, Linux, patched monthly, with open support). It also has the most market exposure to bugs, as their integrated GPUs vastly outnumber the discrete cards on the market. Software and driver support has always been an Intel strength. With every CPU from Rocket/Tiger Lake onward coming with an Xe-flavor iGPU, they have been optimizing for this new GPU architecture for the last 18 months, and accelerated that work by sending free DG1 cards to devs, giving them an opportunity to ensure their work will function on the swath of iGPUs coming with Xe.

    But software and driver support in a dry GPU market really is a secondary concern.
    The main issue is supply: if Nvidia and AMD GPUs are still at 2x MSRP, Intel can win by simply supplying the market at MSRP at a rate where they don't sell out.
  • Flying Aardvark - Friday, October 22, 2021 - link

    I'm buying one of these. I never caught a fair price on a 3000-series card. Given I have a decent iGPU (I use an 11900K) capable of 720p gaming on its own, and Intel's iGPU and dGPU will share compute, I'm expecting a pretty decent outcome as an upgrade over my 1060 FE.
  • Spunjji - Monday, October 25, 2021 - link

    The 11900K iGPU is pretty poor - it's barely faster than the HD stuff on Comet Lake. It's certainly not going to be a meaningful factor in performance in combination with a Xe-HPG GPU.
  • sharath.naik - Friday, October 22, 2021 - link

    What I want from Intel is an equivalent of the M1 chip: a laptop without a discrete GPU that can game like it has a 3080 in it, under 60 watts. The M1 has already proved that a GPU with separate memory carries a huge power and performance penalty.
  • michael2k - Friday, October 22, 2021 - link

    They really can't do that, since they are stuck at the equivalent of 7nm right now. That would put them at the same level as an Apple A13 on 7nm. If you tried to manufacture an M1M, with its 55b transistors, at 7nm, you would have an 86W part. On top of that, it would be 550 square mm, or 5.5 square cm, at best. AMD's Threadripper totals 24b transistors and 7.12 square cm of silicon, so it's possible that Intel's version of the M1P would be closer to 15 square cm.
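    For what it's worth, the comment's die-size estimate is easy to reproduce; a quick sketch of the arithmetic (the density figure is an illustrative assumption chosen to match the comment's numbers, not vendor data):

    ```python
    # Back-of-the-envelope die-size scaling for the figures in the comment above.
    # density_b_per_mm2 is an assumed logic density for a 7nm-class node
    # (~0.10 billion transistors per square mm), not a vendor-published value.

    transistors_b = 55        # the comment's transistor count, in billions
    density_b_per_mm2 = 0.10  # assumed 7nm-class density, billions per mm^2

    area_mm2 = transistors_b / density_b_per_mm2
    print(f"~{area_mm2:.0f} mm^2, i.e. ~{area_mm2 / 100:.1f} cm^2")  # ~550 mm^2, ~5.5 cm^2
    ```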
  • name99 - Friday, October 22, 2021 - link

    Do we have any good comparisons as to the "quality" of the existing Xe iGPUs?
    Something that tries, at a technical rather than a tribal level, to match them against Apple, AMD, and nV.

    Obviously each company has a different situation, but one can at least try to look at things like
    "performance" per watt or per sq mm, and see if there are substantial differences; of if there are certain important and interesting tasks for which one is substantially more (or less) performant than the other.
  • Spunjji - Monday, October 25, 2021 - link

    The Xe iGPU has perf/mm that's pretty close to Vega 8 on TSMC 7nm - it's about 33% larger and 25% faster, on average. I don't really know how that compares to Nvidia as they haven't released anything that low in the performance range since Pascal.

    Drivers and system design have been the main impediment to its success as a low-end gaming chip - the difference between Xe performing at its best and worst is nearly 50%.
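    Taking the comment's figures at face value, the implied performance-per-area works out like this (both ratios are the commenter's rough estimates, not measured data):

    ```python
    # Implied performance-per-area from the figures above:
    # Xe is ~33% larger and ~25% faster than Vega 8 (the commenter's estimates).

    area_ratio = 1.33  # Xe die area / Vega 8 die area (normalized for node)
    perf_ratio = 1.25  # Xe performance / Vega 8 performance

    perf_per_area = perf_ratio / area_ratio
    print(f"Xe delivers ~{perf_per_area:.0%} of Vega 8's performance per mm^2")  # ~94%
    ```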
  • Sahrin - Friday, October 22, 2021 - link

    Their CPUs have been on the shelves since ~1970, but have only been worth buying half the time.
