AMD's Radeon HD 5870: Bringing About the Next Generation Of GPUs
by Ryan Smith on September 23, 2009 9:00 AM EST - Posted in GPUs
More GDDR5 Technologies: Memory Error Detection & Temperature Compensation
As we previously mentioned, AMD's memory controllers in Cypress implement a greater part of the GDDR5 specification. Beyond gaining the ability to use GDDR5's power-saving features, AMD has also been working on features that allow its cards to reach higher memory clock speeds. Chief among these is support for GDDR5's error detection capabilities.
One of the biggest problems in using a high-speed memory device like GDDR5 is that it requires a bus that's both fast and fairly wide - properties that generally run counter to each other in designing a device bus. A single GDDR5 memory chip on the 5870 needs to connect to a bus that's 32 bits wide and runs at a base speed of 1.2GHz, which requires a bus that can meet exceedingly precise tolerances. Adding to the challenge is that for a card like the 5870 with a 256-bit total memory bus, eight of these buses will be required, leading to more noise from adjoining buses and less room to work in.
Because of the difficulty in building such a bus, the memory bus has become the weak point for video cards using GDDR5. The GPU’s memory controller can do more and the memory chips themselves can do more, but the bus can’t keep up.
To combat this, GDDR5 memory controllers can perform basic error detection on both reads and writes by implementing a CRC-8 hash function. With this feature enabled, for each 64-bit data burst an 8-bit cyclic redundancy check hash (CRC-8) is transmitted via a set of four dedicated EDC pins. This CRC is then used to check the contents of the data burst, to determine whether any errors were introduced into the data burst during transmission.
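The burst-level check described above can be sketched in a few lines. This is a hypothetical illustration, assuming the CRC-8-ATM polynomial (x^8 + x^2 + x + 1) that GDDR5's EDC is commonly reported to use; the exact bit ordering and framing here are simplifications, not taken from the JEDEC spec:

```python
# Bitwise CRC-8 over a 64-bit data burst. The polynomial below is the
# ATM-8 HEC polynomial x^8 + x^2 + x + 1 (0x07); the framing is
# illustrative, not the literal GDDR5 wire format.
CRC8_POLY = 0x07

def crc8(data: bytes, crc: int = 0x00) -> int:
    """Compute a CRC-8 over `data`, MSB-first."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ CRC8_POLY) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

# One 64-bit (8-byte) burst and its 8-bit check value, which would
# travel over the four dedicated EDC pins alongside the data.
burst = (0xDEADBEEFCAFEF00D).to_bytes(8, "big")
edc = crc8(burst)

# Any single-bit error changes the CRC, so the controller always
# catches that case.
corrupted = bytes([burst[0] ^ 0x01]) + burst[1:]   # flip one bit
assert crc8(corrupted) != edc
```

This polynomial contains an (x + 1) factor, which is what guarantees that all odd-weight errors - including every 1-bit error - are caught over the 72-bit burst-plus-CRC codeword.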
The specific CRC function used in GDDR5 detects 1-bit and 2-bit errors with 100% accuracy, with that accuracy falling as more bits are in error. This is because the CRC function can generate collisions: in unlikely cases, the CRC of an erroneous data burst can match the CRC of the correct data. But since bursts with many erroneous bits are far less likely than bursts with few, the vast majority of errors should be 1-bit and 2-bit errors, which are always caught.
Should an error be found, the GDDR5 controller will request a retransmission of the faulty data burst, and it will keep doing this until the data burst finally goes through correctly. A retransmission request is also used to re-train the GDDR5 link (once again taking advantage of fast link re-training) to correct any potential link problems brought about by changing environmental conditions. Note that this does not involve changing the clock speed of the GDDR5 (i.e. it does not step down in speed); rather it's merely reinitializing the link. If the errors are due to the bus being outright unable to handle the requested clock speed, errors will continue to happen and be caught. Keep this in mind, as it will be important when we get to overclocking.
Finally, we should also note that this error detection scheme is only for detecting bus errors. Errors in the GDDR5 memory modules or errors in the memory controller will not be detected, so it’s still possible to end up with bad data should either of those two devices malfunction. By the same token this is solely a detection scheme, so there are no error correction abilities. The only way to correct a transmission error is to keep trying until the bus gets it right.
Now in spite of the difficulties in building and operating such a high speed bus, error detection is not necessary for its operation. As AMD was quick to point out to us, cards still need to ship defect-free and not produce any errors. In other words, the error detection mechanism is a failsafe rather than a tool specifically to attain higher memory speeds. Memory supplier Qimonda's own whitepaper on GDDR5 pitches error detection as a necessary precaution due to the increasing amount of code stored in graphics memory, where a failure can lead to a crash rather than just a bad pixel.
In any case, for normal use the ramifications of GDDR5's error detection should be non-existent. In practice it should lead to more stable cards, since memory bus errors are now caught, though we don't know to what degree. Regular reliance on retransmission would itself be a red flag, after all - it would mean errors are occurring when they shouldn't be.
Like the changes to VRM monitoring, the significant ramifications of this will be felt with overclocking. Overclocking attempts that previously would push the bus too hard and lead to errors will no longer do so, making higher overclocks possible. However this is a bit of an illusion, as retransmissions reduce performance. The scenario laid out to us by AMD is that overclockers who have reached the limits of their card's memory bus will now see the impact as a drop in performance due to retransmissions, rather than crashing or graphical corruption. This means assessing an overclock will require monitoring the performance of the card, along with continuing to look for the traditional signs, as those will still indicate problems in the memory chips and the memory controller itself.
Ideally there would be a more absolute and expedient way to check for errors than looking at overall performance, but at this time AMD doesn’t have a way to deliver error notices. Maybe in the future they will?
Wrapping things up, we have previously discussed fast link re-training as a tool to allow AMD to clock down GDDR5 during idle periods, and as part of a failsafe method to be used with error detection. However it also serves as a tool to enable higher memory speeds through its use in temperature compensation.
Once again due to the high speeds of GDDR5, it’s more sensitive to memory chip temperatures than previous memory technologies were. Under normal circumstances this sensitivity would limit memory speeds, as temperature swings would change the performance of the memory chips enough to make it difficult to maintain a stable link with the memory controller. By monitoring the temperature of the chips and re-training the link when there are significant shifts in temperature, higher memory speeds are made possible by preventing link failures.
And while temperature compensation may not sound complex, that doesn’t mean it’s not important. As we have mentioned a few times now, the biggest bottleneck in memory performance is the bus. The memory chips can go faster; it’s the bus that can’t. So anything that can help maintain a link along these fragile buses becomes an important tool in achieving higher memory speeds.
327 Comments
mapesdhs - Saturday, September 26, 2009 - link
MODel3 writes:
> 1.Geometry/vertex performance issues ...
> 2.Geometry/vertex shading performance issues ...
Would perhaps some of the subtests in 3DMark06 be able to test this?
(not sure about Vantage, never used that yet) Though given what Jarred
said about the bandwidth and other differences, I suppose it's possible
to observe large differences in synthetic tests which are not the real
cause of a performance disparity.
The trouble with heavy GE tests is, they often end up loading the fill
rates anyway. I've run into this problem with the SGI tests I've done
over the years:
http://www.sgidepot.co.uk/sgi.html
The larger landscape models used in the Inventor tests are a good
example. The points models worked better in this regard for testing
GE speed (stars3/star4), but I don't know to what extent modern PC
gfx is designed to handle points modelling - probably works better
on pro cards. Actually, Inventor wasn't a good choice anyway as it's
badly CPU-bound and API-heavy (I should have used Performer, gives
results 5 to 10X faster).
Anyway, point is, synthetic tests might allow one to infer that one
aspect of the gfx pipeline is a bottleneck when in fact it isn't.
Ages ago I emailed NVIDIA (Ujesh, who I used to know many moons ago,
but alas he didn't reply) asking when, if ever, they would add
performance counters and other feedback monitors to their gfx
products so that applications could tell what was going on in the
gfx pipeline. SGI did this years ago, which allowed systems like
IR to support impressive functions such as Dynamic Video Resizing by
being able to monitor frame by frame what was going on within the gfx
engine at each stage. Try loading any 3D model into perfly, press F1
and click on 'Gfx' in the panel (Linux systems can run Performer), eg.:
http://www.sgidepot.co.uk/misc/perfly.gif
Given how complex modern PC gfx has become, it's always been a
mystery to me why such functions haven't been included long ago.
Indeed, for all that Crysis looks amazing, I was never that keen on
it being used as a benchmark since there was no way of knowing
whether the performance hammering it created was due to a genuinely
complex environment or just an inefficient gfx engine. There's still
no way to be sure.
If we knew what was happening inside the gfx system, we could easily
work out why performance differences for different apps/games crop
up the way they do. And I would have thought that feedback monitors
within the gfx pipe would be even more useful to those using
professional applications, just as it was for coders working on SGI
hardware in years past.
Come to think of it, how do NVIDIA/ATI even design these things
without being able to monitor what's going on? Jarred, have you ever
asked either company about this?
Ian.
JarredWalton - Saturday, September 26, 2009 - link
I haven't personally, since I'm not really the GPU reviewer here. I'd assume most of their design comes from modeling what's happening, and with knowledge of their architecture they probably have utilities that help them debug stuff and figure out where stalls and bottlenecks are occurring. Or maybe they don't? I figure we don't really have this sort of detail for CPUs either, because we have tools that know the pipeline and architecture and they can model how the software performs without any hardware feedback.

MODEL3 - Thursday, October 1, 2009 - link
I checked the web for synthetic geometry tests. Sadly I only found 3DMark Vantage tests.
You can't tell much from them, but they are indicative.
Check:
http://www.pcper.com/article.php?aid=783&type=...
GPU Cloth: 5870 is only 1.2X faster than 4890. (vertex/geometry shading test)
GPU Particles: 5870 is only 1.2X faster than 4890. (vertex/geometry shading test)
Perlin Noise: 5870 is 2.5X faster than 4890. (math-heavy pixel shader test)
Parallax Occlusion Mapping: 5870 is 2.1X faster than 4890. (complex pixel shader test)
None of these four tests is bandwidth limited at all.
Just for example, if you check:
http://www.pcper.com/article.php?aid=674&type=...
You will see that a 750MHz 4870 512MB is 20-23% faster than a 625MHz 4850 in all four of the above tests, so the extra bandwidth (115.2GB/s vs 64GB/s) doesn't help at all.
But the 4850 is extremely bandwidth limited in the color fillrate test (the 4870 is 60% faster there).
Also, it shouldn't be a problem of dual-rasterizer/dual-SIMD engine efficiency, since the synthetic pixel shader tests scale fine (more than 2X) while the synthetic geometry shading tests scale at only 1.2X.
My guess is ATI didn't improve the classic geometry setup engine and the GS because they want to promote vertex/geometry techniques based on the DX11 tessellator from now on.
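For reference, the bandwidth figures quoted above fall out of the standard peak-bandwidth formula (clock x transfers per clock x bus width). A quick sketch, assuming the commonly cited reference memory clocks for these cards:

```python
def gddr_bandwidth_gbps(clock_mhz, transfers_per_clock, bus_bits):
    """Peak memory bandwidth in GB/s:
    clock (MHz) x transfers per clock x bus width (bits) / 8 bits-per-byte."""
    return clock_mhz * 1e6 * transfers_per_clock * bus_bits / 8 / 1e9

hd4870 = gddr_bandwidth_gbps(900, 4, 256)   # GDDR5, quad-pumped: 115.2 GB/s
hd4850 = gddr_bandwidth_gbps(993, 2, 256)   # GDDR3, double-pumped: ~63.6 GB/s
```

The ~64GB/s figure for the 4850 is this 63.55GB/s rounded up.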
Zool - Friday, September 25, 2009 - link
In DX11 the fixed tessellation units will produce much finer geometry detail for much less memory, and on-chip, so I don't think there is a problem with that. Also, the compute shader needs minimal memory bandwidth and can utilize plenty of idle shaders. The card is designed with DX11 in mind and isn't using the whole pipeline after all. I wouldn't draw conclusions too early. (I think the performance will be much better after a few driver releases.)

MODEL3 - Saturday, September 26, 2009 - link
For the DX11 tessellator to be utilized, the game engine must take advantage of it. But I am not talking about the tessellator.
I am talking about the classic Geometry unit (DX9/DX10 engines) and the Geometry Shader [GS] (DX10 engines only).
I'll check to see if I can find a tech site that has synthetic benchmarks for geometry-related performance, and I'll post again tomorrow if I find anything.
JarredWalton - Friday, September 25, 2009 - link
It's worth noting that when you factor in clock speeds, compared to the 5870 the 4870X2 offers 88% of the core performance and 50% more bandwidth. Some algorithms/games require more bandwidth and others need more core performance, but it's usually a combination of the two. The X2 also has CrossFire inefficiencies to deal with.

More interesting perhaps is that the GTX 295 offers (by my estimates, which admittedly are off in some areas) roughly 10% more GPU shader performance, about 18.5% more fill rate, and 46% more bandwidth than the HD 5870. The fact that the HD 5870 is still competitive is a good sign that ATI is getting good use of their 5 SPs per Stream Processor design, and that they are not memory bandwidth limited -- at least not entirely.
SiliconDoc - Wednesday, September 30, 2009 - link
The 4870X2 has somewhere around double the data paths in and out of its two GPUs. So what you have with the 5870, which some have characterized as "two RV770 cores melded into one," is DOUBLE THE BOTTLENECK in and out of the core. They tried to compensate with GDDR5 at 1200MHz (4800MHz effective), but the fact remains, they only get so much with that: NOT ENOUGH DATA PATHS/PINS in and out of that GPU core.
cactusdog - Friday, September 25, 2009 - link
Omg these cards look great. Lol, Silicon Doc is so gutted and furious he is making himself look like a damn fool again, only this time he should be on suicide watch... Nvidia cards are now obsolete.. LOL.

mapesdhs - Friday, September 25, 2009 - link
Hehe, indeed. Have you ever seen a scifi series called, "They Came
From Somewhere Else?" S.D.'s getting so worked up, reminds me of
the scene where the guy's head explodes. :D
Hmm, that's an alternative approach I suppose in place of post
moderation. Just get someone so worked up about something they'll
have an aneurism and pop their clogs... in which case, I'll hand
it back to Jarred. *grin*
Ian.
SiliconDoc - Friday, September 25, 2009 - link
That is quite all right, you fellas make sure to read it all. I am more than happy that the truth is sinking into your gourds; you won't be able to shake it. I am very happy about it.