29 Comments
ltcommanderdata - Tuesday, August 7, 2012 - link
Seeing that GK110 Tesla won't likely be shipping in volume until next year, while the Quadro K5000 will be available a few months earlier, I wonder if nVidia supports a "Maximum 1.5" setup pairing the Quadro K5000 with a Fermi based Tesla?
silverblue - Tuesday, August 7, 2012 - link
I think if Intel did that (as unlikely as it is to happen), they'd quickly find themselves hauled in front of the US and EU courts and be split up. Getting caught against AMD was one thing, repeating it against NVIDIA is quite another.
XZerg - Tuesday, August 7, 2012 - link
I believe AMD is much more passive and Nvidia much more aggressive when they need to defend themselves. It has been proven in the past, and I would bet it would happen again if Intel did something that dumb.
bigboxes - Wednesday, August 8, 2012 - link
It's funny that yesterday AT posted nVidia's new professional cards, while other sites showed AMD's new offerings. I find that passive-aggressive.
mayankleoboy1 - Tuesday, August 7, 2012 - link
Any news on if GK110 will be present in consumer gaming cards?
dragonsqrrl - Tuesday, August 7, 2012 - link
I just can't imagine Nvidia not using it in their high end gaming cards. Developing a Tesla/Quadro only GPU, then developing a separate GPU specifically for high-end gaming would be extraordinarily costly, even if both are based on the same architecture.
johnthacker - Wednesday, August 8, 2012 - link
Well, I suspect that NVIDIA will use it in a high end gaming card only if and when they need to in order to compete with AMD, and only if it turns out to be a better option than, say, a shrink or reconfiguration of GK104 (a GK114) for better performance. While development is costly, optimizing the hardware for graphics versus compute can make it worth it.
On another level, look at the price (and thus gross margin) for NVIDIA on those Quadro and Tesla cards. Keeping GK110 in Quadro and Tesla only gets a lot of people doing compute to fork over the cash, instead of doing GPGPU on the cheap with GeForce cards. I can tell you that there was an audible sigh of disappointment at GTC 2012 when audience members asked at the "Inside Kepler" talk whether all these cool new Kepler features aimed at compute (Dynamic Parallelism, Hyper-Q, etc.) were limited to GK110 only, and the answer was yes.
JlHADJOE - Thursday, August 9, 2012 - link
On the contrary, I can't imagine Nvidia using it (GK110) in their gaming cards, especially with supply limited as it is and their professional lines being much more profitable.
dragonsqrrl - Thursday, January 31, 2013 - link
hmmm, Geforce Titan... what happened?
RonMLew - Tuesday, August 7, 2012 - link
Is Nvidia trying to show an anachronistic mismatch on purpose? The most visible plane is a Soviet Polikarpov I-16, which was active from the mid-1930s until the end of WW2. The colors are odd, and it looks like there are two surface-to-air missiles.
unidntifiedbones - Wednesday, August 8, 2012 - link
Look again, Gee Bee R1, not I-16. Bit of a silly image really.
HisDivineOrder - Wednesday, August 8, 2012 - link
"Meanwhile Quadro K5000 also brings with it all of the major Fermi family features that we first saw with the GeForce GTX 680, including support for DisplayPort 1.2, 4 display controllers per GPU, PCI-Express 3.0 support, the NVENC H.264 video encoder, and even bindless textures."
You said "major Fermi family features," but did you mean Kepler?
Ryan Smith - Wednesday, August 8, 2012 - link
Yes, I did. Thank you for that.
shawkie - Wednesday, August 8, 2012 - link
Any idea how they've managed to make PCI-Express GEN3 work with SNB-E on this board and not on the GTX 600 series? Have they managed to respin the GK104 already? Or have they used a bridge chip like on the GTX 690? Or is it just down to the track layout on the PCB?
BTW, I'm aware that it's possible to force the driver to run in GEN3 with SNB-E even with the GTX 600 series, but in my experience this doesn't actually work properly (slow transfer speeds and system instability), so I can't imagine nVidia would suddenly claim it was supported unless they've actually changed something.
Ryan Smith - Wednesday, August 8, 2012 - link
Ryan Smith - Wednesday, August 8, 2012 - link
While I don't have the full details, it doesn't look like NVIDIA has actually changed anything. Rather, Kepler and SNB-E working correctly with PCIe 3.0 is dependent on some (unknown to us) hardware factor on the host machine. NVIDIA knows what that factor is, and will be qualifying workstations for PCIe 3.0.
As it stands we know that PCIe 3.0 works perfectly fine on some SNB-E systems (namely: ours), so this isn't all that hard to believe.
shawkie - Wednesday, August 8, 2012 - link
shawkie - Wednesday, August 8, 2012 - link
So logically if a workstation is qualified by nVidia for PCIe 3.0 with the K5000 then we should expect that workstation to also work (in GEN3 mode) with the GTX 600 series...
So far nVidia have repeatedly refused to publish any kind of list of qualified PCIe 3.0 motherboards, claiming that results also vary by CPU (among other factors?).
Ryan Smith - Wednesday, August 8, 2012 - link
Ryan Smith - Wednesday, August 8, 2012 - link
Correct, if it were to be qualified with K5000 then it should also work with a manually enabled GTX 600 card.
pvrvideoman - Wednesday, August 8, 2012 - link
Reading these articles on Kepler products makes me think that sticking to the Fermi products would be a better, less expensive choice. They are well established, and are valuable in many applications. Check out RenderStreamTV on Youtube. This isn't the only video where the power of the mainstream GPU is utilized in an app, but it's a nice one. The consumer and the manufacturer have such different goals. AMD seems to have the consumer in mind a bit more than NVidia.
Rictorhell - Wednesday, August 8, 2012 - link
What I am trying to find out is if there is the possibility of Nvidia releasing a newer, more powerful graphics card later this year or early next year, or if the GTX 690 is it, as far as consumer cards, for the near future?
I haven't purchased a new graphics card for a few years, so I want the best, but most power efficient card that I can buy. The impression I have is that the 680 and the 690 are great cards, but that they don't really provide a "huge" power boost over the previous generation cards or the cards just prior to that generation.
If they are planning to release something that is significantly faster than the 680, I would definitely be interested in that, if it's coming.
Rictorhell - Wednesday, August 8, 2012 - link
The last card that I purchased was the Geforce 8800 GTX. Is there anyone here that had that card or has that card, that can tell me, roughly, how the performance of that card would compare to the performance of the GTX 680?
puppies - Wednesday, August 8, 2012 - link
"roughly" the 680 will destroy it.
Rictorhell - Wednesday, August 8, 2012 - link
lol Um....like destroy it 5x times over, 10x, 20x? :)
BiggieShady - Wednesday, August 8, 2012 - link
By the power of seaarchh ... http://www.hwcompare.com/12476/geforce-8800-gtx-vs...
Ryan Smith - Wednesday, August 8, 2012 - link
The 8800GT is a hair slower than the 8800GTX, but otherwise this should give you a pretty good idea.
http://www.anandtech.com/bench/Product/521?vs=555
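For a rough sense of the gap behind those bench links, the two cards' theoretical shader throughput can be compared directly. A quick sketch (the core counts and clocks here are quoted from memory, so treat them as assumptions; real-game scaling will differ, since memory bandwidth and architecture matter as much as raw FLOPS):

```python
# Theoretical single-precision throughput: cores * clock * 2 FLOPs per cycle (MAD/FMA)
def gflops(cores, clock_mhz):
    return cores * clock_mhz / 1000 * 2

g80 = gflops(128, 1350)     # GeForce 8800 GTX: 128 SPs at a 1350 MHz shader clock
gk104 = gflops(1536, 1006)  # GTX 680: 1536 CUDA cores at a 1006 MHz base clock
print(f"8800 GTX: {g80:.0f} GFLOPS, GTX 680: {gk104:.0f} GFLOPS, ratio {gk104 / g80:.1f}x")
# → 8800 GTX: 346 GFLOPS, GTX 680: 3090 GFLOPS, ratio 8.9x
```

On paper that's roughly a 9x gap, which lines up with "destroy it" in practice.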
Rictorhell - Wednesday, August 8, 2012 - link
Thank you, I appreciate the help.
dtolios - Wednesday, August 8, 2012 - link
Which does not really push the GPU-accelerated envelope - call it the devs' failure to optimize for Kepler, or nVidia's gaming-oriented R&D... The only things that save them in workstation graphics are AMD's driver issues with many OpenCL-accelerated applications, and of course the stubbornness of some software devs sticking with CUDA. Another couple of years of "the same" overpriced Quadros.
johnthacker - Wednesday, August 8, 2012 - link
122W TDP, less than the 142W of the single-slot Quadro 4000, and they can't make a single-slot card? We have various chassis where it would be highly useful if it were single slot.
ExarKun333 - Saturday, August 11, 2012 - link
You are just ridiculous. Please stop talking.
Gadgety - Saturday, December 15, 2012 - link
Looks like Nvidia really wants to sell their Maximus setup, as the FP64 rate is 1/24 of FP32.
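To put that 1/24 rate in absolute terms, here is a rough peak-throughput sketch (the K5000's core count and clock are assumed figures, not taken from the article, so treat the exact numbers as illustrative):

```python
# Peak throughput for the GK104-based Quadro K5000 (specs assumed)
cores, clock_ghz = 1536, 0.706    # assumed: 1536 CUDA cores at ~706 MHz
fp32 = cores * clock_ghz * 2      # 2 FLOPs per cycle per core (FMA)
fp64 = fp32 / 24                  # GK104's 1/24 double-precision rate
print(f"FP32: {fp32:.0f} GFLOPS, FP64: {fp64:.0f} GFLOPS")
# → FP32: 2169 GFLOPS, FP64: 90 GFLOPS
```

Around 90 GFLOPS of double precision is modern-CPU territory, which is exactly why anyone doing serious FP64 work gets pushed toward a Tesla alongside the Quadro.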