AMD's Radeon HD 5870: Bringing About the Next Generation Of GPUs
by Ryan Smith on September 23, 2009 9:00 AM EST - Posted in GPUs
Lower Idle Power & Better Overcurrent Protection
One aspect AMD specifically looked to improve in Cypress over RV770 was idle power usage. RV770's load power usage was fine at 160W for the HD 4870, but that figure wasn't dropping by a great deal at idle - it fell by less than half, to 90W. Later BIOS revisions managed to knock a few more watts off of this, but it wasn't a significant change, and even later designs like RV790 were still limited in how far they could idle down, bottoming out at 60W.
As a consequence, AMD designed Cypress with a much, much lower target in mind: get idle power down to 30W, a third that of RV770. What they got was even better - they beat that target by 10%, hitting a final idle power of 27W. As a result Cypress can idle at 30% of the power of RV770, or, measured against Cypress's own load power of 188W, some 14% of its load power.
Accomplishing this kind of dramatic reduction in idle power usage required several changes. Key among them are additional power regulation circuitry on the board and additional die space on Cypress assigned to power regulation. Notably, all of these changes were accomplished without the use of power-gating to shut down unused portions of the chip, something that's common on CPUs. Instead everything has been achieved through more exhaustive clock-gating (that is, reducing power consumption by stopping or slowing the clocks feeding idle blocks), something GPUs have been doing for some time now.
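To make the distinction concrete, here is a minimal C sketch contrasting the two techniques. Everything in it is hypothetical, the register indices and bit layouts included; AMD has not published Cypress's actual power-management interface.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for a bank of memory-mapped power-management registers.
 * The register indices and bit meanings are invented for this sketch. */
static uint32_t pm_regs[2];
#define REG_CLK_ENABLE 0  /* one clock-enable bit per functional block */
#define REG_PWR_RAIL   1  /* one power-rail bit per functional block   */

/* Clock-gating (what Cypress does): stop the clock to an idle block.
 * The block keeps its state and can resume almost immediately. */
static void clock_gate_block(unsigned block, bool idle)
{
    if (idle)
        pm_regs[REG_CLK_ENABLE] &= ~(1u << block);
    else
        pm_regs[REG_CLK_ENABLE] |= (1u << block);
}

/* Power-gating (what Cypress does not do): cut power to the block.
 * Leakage drops to near zero, but state is lost and wake-up is slow. */
static void power_gate_block(unsigned block, bool off)
{
    if (off)
        pm_regs[REG_PWR_RAIL] &= ~(1u << block);
    else
        pm_regs[REG_PWR_RAIL] |= (1u << block);
}
```

The trade-off this illustrates is responsiveness versus leakage: gated clocks can be restored almost instantly, which suits a GPU that must ramp up the moment a 3D application starts.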
The use of clock-gating is quickly evident when we discuss the idle/2D clock speeds of the 5870, which are 150MHz for the core and 300MHz for the memory. These idle clocks are significantly lower than the 4870's (550MHz/900MHz), which in the case of the core is the source of its power savings over the 4870. As tweakers who have attempted to manually reduce the idle clocks on RV770-based cards for further power savings have discovered, RV770 actually loses stability in most situations if its core clock drops too low. With Cypress this has been rectified, enabling it to hit these lower core speeds.
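As a rough illustration of how a driver might organize those clock pairs, here is a sketch of a PowerPlay-style state table in C. The struct and code are our own invention, not AMD's driver; the load clocks shown are the 5870's stock 850MHz core and 1200MHz memory.

```c
#include <stdio.h>

/* A PowerPlay state is essentially a core/memory clock pair
 * (plus voltages, which we omit here). */
struct pp_state {
    const char *name;
    int core_mhz;
    int mem_mhz;
};

static const struct pp_state hd5870_states[] = {
    { "idle/2D", 150, 300  },  /* idle clocks from the article */
    { "3D/load", 850, 1200 },  /* the 5870's stock load clocks */
};

int main(void)
{
    for (unsigned i = 0; i < sizeof hd5870_states / sizeof hd5870_states[0]; i++)
        printf("%-8s core %4d MHz, memory %4d MHz\n",
               hd5870_states[i].name,
               hd5870_states[i].core_mhz, hd5870_states[i].mem_mhz);
    return 0;
}
```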
Even bigger, however, are the enhancements to Cypress's memory controller, which allow it to utilize a number of power-saving tricks with GDDR5 RAM, along with other features that we'll get to in a bit. RV770's memory controller was not capable of taking advantage of many of GDDR5's advanced features beyond its higher bandwidth. Lacking this full bag of tricks, RV770 and its derivatives were unable to reduce the memory clock speed, which is why the 4870 and other products had such high memory clock speeds even at idle. In turn this limited the power savings attainable by idling GDDR5 modules.
With Cypress AMD has implemented nearly the entire suite of GDDR5's power saving features, allowing them to reduce the power usage of the memory controller and the GDDR5 modules themselves. As with the improvements to the core clock, key among the improvements in memory power usage is the ability to go to much lower memory clock speeds, using fast GDDR5 link re-training to quickly switch the memory clock speed and voltage without inducing glitches. AMD is also now using GDDR5's low power strobe mode, which in turn allows the memory controller to save power by turning off its clock data recovery mechanism. When discussing the matter with AMD, they compared these changes to putting the memory modules and memory controller into a GDDR3-like mode, which is a fair description of how GDDR5 behaves when its high-speed features are not enabled.
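In pseudo-driver form, the clock switch described above might look something like the following C sketch. Every function name is a hypothetical stand-in; the real sequence is handled in hardware and microcode, not driver code.

```c
#include <stdbool.h>

/* No-op stubs standing in for hardware sequencer steps. */
static void memory_self_refresh_enter(void) {}
static void memory_self_refresh_exit(void)  {}
static void set_memory_pll(int mhz)         { (void)mhz; }
static void set_memory_voltage_for(int mhz) { (void)mhz; }
static void gddr5_retrain_link(void)        {}
static void gddr5_enable_strobe_mode(void)  {}

/* Drop (or raise) the memory clock without visible glitches. */
void gddr5_set_speed(int mem_mhz, bool low_power)
{
    memory_self_refresh_enter();      /* park the DRAM so contents survive */
    set_memory_pll(mem_mhz);          /* switch the memory clock...        */
    set_memory_voltage_for(mem_mhz);  /* ...and the matching voltage       */
    gddr5_retrain_link();             /* fast re-training recalibrates the
                                         link at the new speed             */
    if (low_power)
        gddr5_enable_strobe_mode();   /* strobe mode lets the controller
                                         turn off clock data recovery,
                                         leaving a GDDR3-like interface    */
    memory_self_refresh_exit();
}
```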
Finally, AMD was able to find yet more power savings for Crossfire configurations, and as a result the slave card(s) in a Crossfire configuration can use even less power. The figure given to us for an idling slave card is 20W, a product of the fact that slave cards go completely unused when the system is idling. In this state slave cards are still capable of instantly ramping up for full-load use, although conceivably AMD could go lower still by powering down the slave cards entirely, at the cost of that instant readiness.
The flip side of achieving such low idle power usage is the need to manage load power usage, which was also overhauled for Cypress. As a reminder, TDP is not an absolute maximum; rather it's a maximum based on what's believed to be the highest reasonable load the card will ever experience. As a result it's possible, in extreme circumstances, for the card to need more power than its TDP rating allows for, which is a problem.
That problem reared its head a lot for RV770 in particular, with the rise in popularity of stress-testing programs like FurMark and OCCT. Although stress testers on the CPU side are nothing new, FurMark and OCCT heralded a new generation of GPU stress testers that were extremely effective at generating a maximum load. Unfortunately for RV770, the maximum possible load and the TDP are pretty far apart, which becomes a problem since the VRMs used in a card only need to be spec'd to meet the card's TDP plus some safety room. They don't need to be able to handle whatever the true maximum load of a card may be, as that load should never occur.
Why is this? AMD believes that the instruction streams generated by OCCT and FurMark are entirely unrealistic. They try to hit everything at once, which is something AMD doesn't believe a game or even a GPGPU application would ever do. For this reason these programs are held in low regard by AMD, and in our discussions with them they referred to the programs as "power viruses", a term normally associated with malware. We don't agree with the terminology, but in our testing we can't disagree with AMD about the realism of these loads - we can't find anything else that generates the same kind of load as OCCT and FurMark.
Regardless of what AMD wants to call these stress testers, there was a real problem when they were run on RV770. The overcurrent situation they created was too much for the VRMs on many cards, and as a failsafe these cards would shut down to protect the VRMs. At a user level, shutting down like this isn't a very helpful failsafe mode; at a hardware level, it isn't enough to protect the VRMs in all situations. Ultimately these programs were capable of permanently damaging RV770 cards, and AMD needed to do something about it. For RV770 they could use the drivers to throttle these programs: until Catalyst 9.8 they detected the programs by name, and since 9.8 they detect the ratio of texture to ALU instructions (Ed: We're told NVIDIA throttles similarly, but we don't have a good control for testing this statement). This keeps RV770 safe, but it wasn't good enough. It's a hardware problem, so the solution needs to be in hardware, particularly in case anyone ever does write a power virus that the drivers can't stop in an attempt to break cards on a wide scale.
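To illustrate the shape of such a heuristic, here is a hypothetical C sketch of a texture-to-ALU ratio check. The counters, the threshold, and even the direction of the comparison are invented for this example; AMD has not published the real values.

```c
#include <stdbool.h>

/* Per-frame (or per-shader) instruction counters; how the real driver
 * gathers these is not public. */
struct shader_stats {
    unsigned long tex_instructions;
    unsigned long alu_instructions;
};

static bool looks_like_power_virus(const struct shader_stats *s)
{
    if (s->alu_instructions == 0)
        return false;
    double ratio = (double)s->tex_instructions / (double)s->alu_instructions;
    /* 0.25 and the direction of the test are placeholders; the idea is
     * simply that a tex:ALU mix far outside what games produce marks the
     * workload as a stress test to be throttled. */
    return ratio > 0.25;
}
```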
This brings us to Cypress. For Cypress, AMD has implemented a hardware solution to the VRM problem, dedicating a very small portion of Cypress's die to a monitoring unit whose job is to continuously watch the VRMs for dangerous conditions. Should the VRMs end up in a critical state, the monitor will immediately throttle back the card by one PowerPlay level. The card will continue operating at this level until the VRMs are back to safe levels, at which point the monitor will allow the card to return to the requested performance level. Under a stressful program, this can continue to go back and forth as the VRMs permit.
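The behavior described above amounts to a simple closed-loop controller. Here is a hedged C sketch of one tick of that loop; the function names are stand-ins for what is actually fixed-function hardware.

```c
#include <stdbool.h>

static int requested_level = 2;  /* PowerPlay level the driver asked for */
static int current_level   = 2;

/* Stub: in hardware this reads live VRM current/temperature telemetry. */
static bool vrm_in_danger(void) { return false; }

static void set_powerplay_level(int lvl) { current_level = lvl; }

/* One tick of the monitor's control loop. Under a sustained stress test
 * this ping-pongs between levels as the VRMs permit. */
static void monitor_tick(void)
{
    if (vrm_in_danger()) {
        if (current_level > 0)
            set_powerplay_level(current_level - 1);  /* throttle one level */
    } else if (current_level < requested_level) {
        set_powerplay_level(current_level + 1);      /* recover            */
    }
}
```

The key design choice is stepping one PowerPlay level at a time rather than shutting down, which keeps the card running through transient spikes instead of failing hard.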
With the protection implemented at the hardware level, Cypress cards are fully protected against all possible overcurrent situations, so it's not possible for any program (OCCT, FurMark, or otherwise) to damage the hardware by generating too high a load. This also means that the driver-level protections are no longer needed, and we've confirmed with AMD that the 5870 is allowed to run to the point where it maxes out or where overcurrent protection kicks in.
On that note, because card manufacturers can use different VRMs, it's very likely that we're going to see some separation in FurMark and OCCT performance based on the quality of the VRMs. The cheapest cards with the cheapest VRMs will need to throttle the most, while luxury cards with better VRMs should need to throttle little, if at all. This should make little difference in stock performance in real games and applications (since, as we covered earlier, we can't find anything that pushes a card to excess), but it will likely make itself apparent in overclocking. Overclocked cards, particularly those with voltage modifications, may hit throttle situations in normal applications, which means the VRMs will make a difference here. It also means that overclockers need to keep an eye on clock speeds, as the card shutting down is no longer a tell-tale sign that you're pushing it too hard.
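For overclockers, "keeping an eye on clock speeds" could be as simple as the following C sketch, which polls the core clock and flags dips below the requested overclock. read_core_clock_mhz() is a hypothetical stand-in for whatever query a real monitoring tool would use.

```c
#include <stdio.h>

/* Stub: a real tool would query the driver for the current core clock. */
static int read_core_clock_mhz(void) { return 850; }

/* Flag silent throttling by comparing the live clock to the clock we set. */
static void check_for_throttling(int requested_mhz)
{
    int now = read_core_clock_mhz();
    if (now < requested_mhz)
        printf("core at %d MHz, below the requested %d MHz: "
               "overcurrent protection may be throttling\n",
               now, requested_mhz);
}

int main(void)
{
    check_for_throttling(900);  /* e.g. a 900MHz overclock */
    return 0;
}
```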
Finally, while we're discussing the monitoring unit, we may as well talk about the rest of its features. Along with monitoring the GPU, it also serves as a fan PWM controller. This means the PWM controller is no longer a separate part that card builders add themselves, and as such we won't be seeing any cards using a 2-pin fixed-speed fan to save money on the PWM controller. All Cypress cards (and presumably, all derivatives) will have the ability to drive a 4-pin fan built in.
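As a final illustration, a 4-pin PWM fan controller ultimately just maps temperature to a duty cycle. The curve below is a sketch with invented breakpoints; real cards ship vendor-tuned fan curves.

```c
/* Map GPU temperature to a 4-pin PWM duty cycle. The breakpoints and
 * percentages here are invented for illustration. */
static int fan_duty_percent(int gpu_temp_c)
{
    if (gpu_temp_c < 40)
        return 20;                          /* near-silent at idle        */
    if (gpu_temp_c < 70)
        return 20 + (gpu_temp_c - 40) * 2;  /* linear ramp: 20% to 78%    */
    return 100;                             /* full speed when running hot */
}
```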
327 Comments
ClownPuncher - Wednesday, September 23, 2009 - link
Absolutely, I can answer that for you. Those 2 "ports" you see are for aesthetic purposes only; the card has a shroud internally, so those 2 ports neither intake nor exhaust any air, hot or otherwise.
Ryan Smith - Wednesday, September 23, 2009 - link
ClownPuncher gets a cookie. This is exactly correct; the actual fan shroud is sealed so that air only goes out the front of the card, to the outside of the case. The holes do serve a cooling purpose though: they allow airflow to help cool the bits of the card that aren't hooked up to the main cooler; various caps and what have you.
SiliconDoc - Wednesday, September 23, 2009 - link
Ok good, now we know. So the problem now moves to the tiny half-size exhaust port on the back. Did you stick your hand there and see how much that is blowing? Does it whistle through there? lol
Same amount of air (or a bit less) in half the exit space... that's going to strain the fan and/or reduce flow, no matter what anyone claims to the contrary.
It sure looks like ATI is doing a big favor to aftermarket cooler vendors.
GhandiInstinct - Wednesday, September 23, 2009 - link
Ryan, developers aren't pushing graphics anymore. It's not economical; PC game support is slowing down, and everything is console now, which is DX9. What purpose does this ATI card serve with DX11 and all this other technology that games won't even make use of two years from now?
Waste of money..
ClownPuncher - Wednesday, September 23, 2009 - link
Clearly he should stop reviewing computer technology like this because people like you are content with gaming on their Wii and iPhone.
This message has been brought to you by Sarcasm.
Griswold - Wednesday, September 23, 2009 - link
So you're echoing what nvidia recently said, when they claimed dx11/gaming on the PC isn't all that (anymore)? I guess nvidia can close shop (at least the gaming-relevant part of it) now and focus on GPGPU. Why wait for GT300 as a gamer?
Oh right, it's gonna be blasting past the 5xxx and suddenly dx11 will be the holy grail again... I see how it is.
SiliconDoc - Wednesday, September 23, 2009 - link
rofl - It's great to see red roosters not crowing and hopping around flapping their wings and screaming nvidia is going down.
Don't take any of this personally except the compliments, you're doing a fine job.
It's nice to see you doing my usual job, albeit from the other side, so allow me to compliment your fine perceptions. Sweltering smart.
But, now, let's not forget how ambient occlusion got poo-pooed here and shading in the game was said to be "an irritant" when Nvidia cards rendered it with just driver changes for the hardware. lol
Then of course we heard endless crowing about "tesselation" for ati.
Now it's what, SSAA (rebirthed), and Eyefinity, and we'll hear how great it is for some time to come. Let's not forget the endless screeching about how terrible and useless PhysX is by Nvidia, but boy when "open standards" finally gets "Havok and Ati" cranking away, wow the sky is the limit for in game destruction and water movement and shooting and bouncing, and on and on....
Of course it was "Nvidia's fault" that "open havok" didn't happen.
I'm wondering if 30" top resolution will now be "all there is!" for the next month or two until Nvidia comes out with their next generation. It was quite a trick switching from a top rez of 30" DOWN to 1920x when Nvidia put out their 2560x GTX275 driver and it whomped Ati's card at 30" 2560x but switched places at 1920x, which was then of course "the winning rez" since Ati was stuck there.
I could go on but you're probably fuming already and will just make an insult back, so let the spam-posting IZ2000, or whatever its name will be this time, handle it.
BTW there's a load of bias in the article and I'll be glad to point it out in another post, but the reason the red rooster rooting is not going beyond any sane notion of "truthful" or even truthiness is because this 5870 Ati card is already perceived as "EPIC FAIL"!
I cannot imagine this is all Ati has, and if it is they are in deep trouble I believe.
I suspect some further releases with more power soon.
Finally - Wednesday, September 23, 2009 - link
Team Green - full foam ahead!
*hands over towel*
There you go. Keep on foaming, I'm all amused :)
araczynski - Wednesday, September 23, 2009 - link
Is DirectX 11 going to be as worthless as 10, in terms of being used in any meaningful way in a meaningful number of games?
My 2 4850s are still keeping me very happy in my 'ancient' E8500 system.
Curious to see how this compares to whatever nvidia rolls out; probably more of the same, better in some, worse in others, and the bottom line will be the price. Maybe in a year or two I'll build a new system.
Of course, by that time these'll be worthless too.
SiliconDoc - Wednesday, September 23, 2009 - link
Well it's certainly going to be less useful than PhysX, which is here said to be worthless, but of course DX11 won't get that kind of dissing, at least not for the next two months or so, before NVidia joins in.
Since there's only 1 game "kinda ready" with DX11, I suppose all the hype and heady talk will have to wait until... until... uhh... the 5870s are actually available and not just listed on the egg and tiger.
Here's something else in the article I found so very heartwarming:
---
" Wrapping things up, one of the last GPGPU projects AMD presented at their press event was a GPU implementation of Bullet Physics, an open source physics simulation library. Although they’ll never admit it, AMD is probably getting tired of being beaten over the head by NVIDIA and PhysX; Bullet Physics is AMD’s proof that they can do physics too. "
---
Unfortunately for this place, one of my friends pointed me to this little exposé that shows ATI uses NVIDIA CARDS to develop "Bullet Physics" - ROFLMAO
-
" We have seen a presentation where Nvidia claims that Mr. Erwin Coumans, the creator of Bullet Physics Engine, said that he developed Bullet physics on Geforce cards. The bad thing for ATI is that they are betting on this open standard physics tech as the one that they want to accelerate on their GPUs.
"ATI’s Bullet GPU acceleration via Open CL will work with any compliant drivers, we use NVIDIA Geforce cards for our development and even use code from their OpenCL SDK, they are a great technology partner. “ said Erwin.
This means that Bullet physics is being developed on Nvidia Geforce cards even though ATI is supposed to get driver and hardware acceleration for Bullet Physics."
---
rofl - hahahahahha now that takes the cake!
http://www.fudzilla.com/content/view/15642/34/
--
Boy do we "hate PhysX" as ati fans, but then again... why not use the nvidia PhysX card to whip up some B Physics, folks I couldn't make this stuff up.