The Radeon HD 5970: Completing AMD's Takeover of the High End GPU Market
by Ryan Smith on November 18, 2009 12:00 AM EST
Posted in: GPUs
Conclusion
Two things become very clear when looking at our data for the 5970:
- It’s hands down the fastest single card on the market
- It’s so fast that it’s wasted on a single monitor
AMD made a good choice in enabling Crossfire Eyefinity for the 5970, as they have built a card so fast that it basically shoots past everything on the market that isn’t Crysis. All of our action games that aren’t CPU-limited run at better than 100fps at 2560x1600, and our RTSes come in just under 60fps. The 5970 is without a doubt Overkill (with a capital O) on a single monitor. This will likely change with future games (e.g., STALKER), but for today’s games it’s more power than is necessary to drive even the largest single monitor. The 5970 still offers a good performance boost over the 5870 on a single monitor, but with the 5870’s outstanding performance, it’s not $200 better.
That leaves us with Eyefinity. So long as GPUs are outpacing games, AMD needs something to burn up the extra performance and give faster cards a purpose, and that’s Eyefinity. Eyefinity is a strain: even three smaller monitors can result in more pixels being pushed than a single 2560x1600 display, as the quick arithmetic below shows. Having Crossfire Eyefinity support gives an AMD card the breathing room it needs to offer Eyefinity at playable framerates across a wider spectrum of monitors and games. Given that the price of three 20”+ monitors will approach, if not exceed, the $600 price of the card, the 5970 is the perfect match for Eyefinity gaming at this time.
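To put that pixel-count claim in concrete terms, here is a quick back-of-the-envelope comparison. The smaller-monitor resolutions are illustrative assumptions (common 22” and 24” panels), not our test configuration:

```python
# Pixel counts: a single 30" panel vs. triple-monitor Eyefinity groups.
single_30 = 2560 * 1600          # 4,096,000 pixels
triple_22 = 3 * 1680 * 1050      # 5,292,000 pixels (3x 22" @ 1680x1050)
triple_24 = 3 * 1920 * 1200      # 6,912,000 pixels (3x 24" @ 1920x1200)

for label, pixels in [("1x 2560x1600", single_30),
                      ("3x 1680x1050", triple_22),
                      ("3x 1920x1200", triple_24)]:
    print(f"{label}: {pixels:,} pixels ({pixels / single_30:.2f}x the 30-inch panel)")
```

Even three 22” panels push roughly 29% more pixels than a single 30” display, and three 24” panels nearly 69% more, which is why the extra GPU matters here.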
When AMD originally told us about this card, I was surprised to see that they put only a $600 price tag on it. As the fastest of the fast cards, AMD could basically charge up to 2x the price of a 5870 for it, and they didn’t. After seeing the performance data, I understand why. In our benchmarks the 5970 is practically tied with the 5850CF, and a pair of those cards sells for $600 at this time. I still expect that we’re going to see a performance gap emerge between the two (particularly if the 5970 is being held back by drivers), but right now the $600 price tag is appropriate.
What this does call into question, though, is what’s better to have: a pair of 5800 series cards, or a 5970? If we assume that the 5970 is equal to a 5850CF in performance and in price, then the differences come down to three matters: heat/noise, power, and Crossfire Eyefinity. The 5970 enjoys lower power usage and doesn’t need a power supply with 4 PCIe plugs, but the cost of compacting everything into one card is that it’s hotter and louder than a 5850CF (which, really, is true of all dual-GPU cards). The biggest advantage of the 5970 right now is that it’s the only card to support Crossfire Eyefinity, which makes it the only card to even consider if you are going to use Eyefinity right now. Ultimately, if you can run 2 cards and will only be driving a single monitor, go with the 5850CF; otherwise go with the 5970. And if it’s 2010 and you’re reading this article, check and see if AMD has enabled Crossfire Eyefinity for the 5850CF.
Next, we’re left with the prospects of overclocking the 5970. Only one of our two cards even runs at 5870 speeds (850MHz core/1200MHz memory), and while we’re willing to entertain the idea that our one cranky card is a fluke, we can’t ignore the fact that neither of our cards can run a real application at 5870 speeds without throttling. Ultimately, our experience with the working card has called into question whether the VRMs on the card are up to the task. Since this is a protection mechanism there’s no risk of damage, but it also means that the card is underperforming. Overclock your 5970 to 5870 speeds if you can bear the extra power/heat/noise, but don’t expect 5870CF results.
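If you do try the overclock, the throttling shows up as momentary core-clock dips under load rather than a crash, so it can be quantified from a sensor log. Below is a minimal sketch, assuming a comma-separated clock log captured with a utility such as GPU-Z; the file name and column header are illustrative assumptions, not a documented format:

```python
import csv

TARGET_MHZ = 850   # the 5870-level core clock we set
TOLERANCE = 10     # ignore small measurement jitter

def count_throttle_dips(log_path, clock_column="GPU Core Clock [MHz]"):
    """Count samples where the core clock fell below the target clock,
    the signature of the 5970's overcurrent protection kicking in."""
    dips = total = 0
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                mhz = float(row[clock_column])
            except (KeyError, ValueError):
                continue  # skip malformed or non-numeric rows
            total += 1
            if mhz < TARGET_MHZ - TOLERANCE:
                dips += 1
    return dips, total

dips, total = count_throttle_dips("sensor_log.csv")
if total:
    print(f"{dips} of {total} samples ({100 * dips / total:.1f}%) below {TARGET_MHZ} MHz")
```

If any meaningful fraction of samples dip below the target clock during a real game, the card is throttling and the overclock isn’t delivering its advertised speed.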
Last, that leaves us with the 5870CF and the 5970CF. Thanks to VRM throttling, there’s still a place in this world for the 5870CF: for a 2-GPU setup it’s still the best way to go, but keep in mind that it comes at a $200 premium and lacks Crossfire Eyefinity support. As for the 5970CF, while we didn’t get a chance to test it today, we can safely say that it’s entirely unnecessary for a single-monitor setup. There’s a market out there for $1200 in video cards, but you had better be running 3 30” monitors in Eyefinity mode to make use of it.
114 Comments
SJD - Wednesday, November 18, 2009
Thanks Anand. That kind of explains it, but I'm still confused about the whole thing. If your third monitor supported mini-DP then you wouldn't need an active adapter, right? Why is this, when mini-DP and regular DP are the 'same' apart from the actual plug size? I thought the whole timing issue was only relevant when wanting a third 'DVI' (/HDMI) output from the card.
Simon
CrystalBay - Wednesday, November 18, 2009
WTH is really up at TSMC?
Jacerie - Wednesday, November 18, 2009
All the single game tests are great and all, but for once I would love to see AT run a series of video card tests where multiple instances of games like EVE Online are running. While single-instance tests are great for the FPS crowd, all us crazy high-end MMO players need some love too.
Makaveli - Wednesday, November 18, 2009
Jacerie, the problem with benching MMOs, and why you don't see more of them, is all the other factors that come into play. You have to deal with server latency, and you have no control over how many players are on the server at any given time when running benchmarks. There are just too many variables, which would not make the benchmarks repeatable and valid for comparison purposes!
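Makaveli's repeatability point is easy to make concrete: with uncontrolled server-side variables, run-to-run variance can swamp the difference between two cards. A small sketch with made-up FPS numbers (purely illustrative, not measured data):

```python
import statistics

# Hypothetical average FPS over five runs of the same benchmark scene.
controlled_runs = [98.2, 97.8, 98.5, 98.1, 97.9]  # scripted single-player flyby
mmo_runs = [74.0, 51.3, 88.6, 62.9, 70.4]         # live MMO server, varying load

for name, runs in [("controlled", controlled_runs), ("live MMO", mmo_runs)]:
    mean = statistics.mean(runs)
    stdev = statistics.stdev(runs)
    print(f"{name}: mean {mean:.1f} fps, stdev {stdev:.1f} "
          f"({100 * stdev / mean:.1f}% of the mean)")
```

When the run-to-run spread is ~20% of the mean, a 10% gap between two cards disappears into the noise, which is exactly why MMO numbers rarely show up in reviews.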
mesiah - Thursday, November 19, 2009
I think what he's more interested in is how well the card can render multiple instances of the game running at once. This could easily be done with a private server, or even a demo written with the game engine. It would not be real-world data, but it would give an idea of performance scaling when multiple instances of a game are running. Myself being an occasional "dual boxer", I wouldn't mind seeing the data.
Jacerie - Thursday, November 19, 2009
That's exactly what I was trying to get at. It's not uncommon for me to be running at least two instances of EVE with an entire assortment of other apps in the background. My current 3870X2 does the job just fine, but with Windows 7 out and DX11 around the corner, I'd like to know how much money I'm going to need to stash away to keep the same level of usability I have now with the newer cards.
Zool - Wednesday, November 18, 2009
It's only "so fast" because 95% of games are DX9 Xbox ports. Crysis has still been the most demanding game out there for quite some time (it should be added that it has a very lazy engine). In Age of Conan the framerate drop from DX9 to DX10 is more than half (with plenty of those effects on screen, even down to a third). Those advanced shader effects they show in demos are actually much more demanding on the GPU than DX9 shaders; they just don't mention it. It will be the same with DX11: a full DX11 game with all those fancy shaders will be on the level of Crysis.
crazzyeddie - Wednesday, November 18, 2009
... after their first 40nm test chips came back as being less impressive than **there** 55nm and 65nm test chips were.
silverblue - Wednesday, November 18, 2009
Hehe, I saw that one too.
frozentundra123456 - Wednesday, November 18, 2009
Unfortunately, since playing MW2, my question is: are there enough games that are sufficiently superior on the PC to justify the initial expense and power usage of this card? Maybe that's where Eyefinity for AMD and PhysX for NVIDIA come in: they at least differentiate the PC experience from the console. I hate to say it, but to me there just don't seem to be enough games optimized for the PC to justify the price and power usage of this card, unless one has money to burn.