A Quick Primer on ILP

NVIDIA throws ILP (instruction level parallelism) out the window, while AMD tackles it head-on.

ILP is parallelism that can be extracted from a single instruction stream. For instance, if I have a lot of math that doesn't depend on the results of previous instructions, it is perfectly reasonable to execute all of that math in parallel.

For this example on my imaginary architecture, instruction format is:

LineNumber INSTRUCTION dest-reg, source-reg-1, source-reg-2

This is compiled code for adding 8 numbers together (i.e. A = B + C + D + E + F + G + H + I;):

1 ADD r2,r0,r1
2 ADD r5,r3,r4
3 ADD r8,r6,r7
4 ADD r11,r9,r10
5 ADD r12,r2,r5
6 ADD r13,r8,r11
7 ADD r14,r12,r13
8 [some totally independent instruction]
...

Lines 1, 2, 3 and 4 could all be executed in parallel if hardware is available to handle it. Line 5 must wait for lines 1 and 2, line 6 must wait for lines 3 and 4, and line 7 can't execute until all the other computation is finished. Line 8 can execute whenever hardware is available.

For the above example, on two-wide hardware we can get optimal throughput (ignoring, or assuming full-speed handling of, read-after-write hazards, which is a whole other issue). On AMD's 5-wide hardware, we can't achieve optimal throughput unless the code that follows offers much more opportunity to extract ILP. Here's why:

From the above block, we can immediately execute 5 operations at once: lines 1, 2, 3, 4 and 8. Next, we can only execute two operations together: lines 5 and 6 (three execution units go unused). Finally, we must execute line 7 all by itself, leaving 4 execution units unused.
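
To make this concrete, here is a minimal sketch in Python of that scheduling behavior: a greedy scheduler that issues, each cycle, as many ready instructions as the issue width allows. The dependency table is transcribed from the listing above; the code and its names (cycles_needed, deps) are ours, not a model of any real hardware scheduler:

# Greedy issue simulation for the 8-instruction example above.
# deps maps each line number to the lines it must wait on.
deps = {
    1: [], 2: [], 3: [], 4: [],  # the four independent ADDs
    5: [1, 2],                   # ADD r12,r2,r5
    6: [3, 4],                   # ADD r13,r8,r11
    7: [5, 6],                   # ADD r14,r12,r13
    8: [],                       # the totally independent instruction
}

def cycles_needed(width):
    done, cycles = set(), 0
    while len(done) < len(deps):
        # issue up to `width` instructions whose dependencies are all satisfied
        ready = [i for i in deps if i not in done
                 and all(d in done for d in deps[i])]
        done.update(ready[:width])
        cycles += 1
    return cycles

for width in (2, 5):
    print(width, "wide:", cycles_needed(width), "cycles")

Run it and the two-wide machine finishes in 4 cycles (the optimum for 8 ops at 2 per clock), while the 5-wide machine takes 3 cycles where ceil(8/5) = 2 would be the theoretical best.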

Extracting ILP is limited by the program itself (the mix of independent and dependent instructions), the hardware resources (how much the hardware can do at once from the same instruction stream), the compiler (how well it organizes basic blocks into something from which the hardware can best extract ILP) and the scheduler (the hardware that takes independent instructions and schedules them to run simultaneously).

Extracting ILP is one of the most heavily researched areas of computing and was the primary focus of CPU design until the advent of multicore hardware. But it is still an incredibly tough problem to solve, and the benefits vary with the program being executed.

Imagine the instruction stream above being sent to an AMD SP and an NVIDIA SP. In the best-case scenario, the instruction stream going into AMD's SP should be 1/5th the length of the one going into NVIDIA's SP (as in, AMD should be executing 5 ops per SP vs. 1 per SP for NVIDIA), but as you can see in this example, the instruction stream is only around half the height of the one in the NVIDIA column. The more ILP AMD can extract from the instruction stream, the better its hardware will do.

AMD's RV770 (and R6xx-based hardware) needs to schedule 5 operations per thread every clock to get the most out of the hardware. This requires a good bit of fancy compiler work and internal hardware scheduling, which NVIDIA doesn't need to bother with. We'll explain why in a second.
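
As a rough sketch of what that compiler work produces, the example above might be packed into 5-slot bundles something like this (the layout and NOP padding are our illustration, not AMD's actual instruction encoding):

Bundle 1: ADD r2,r0,r1   | ADD r5,r3,r4   | ADD r8,r6,r7 | ADD r11,r9,r10 | [independent op]
Bundle 2: ADD r12,r2,r5  | ADD r13,r8,r11 | NOP          | NOP            | NOP
Bundle 3: ADD r14,r12,r13 | NOP           | NOP          | NOP            | NOP

Seven of the fifteen slots are wasted on padding; unless the compiler can pull independent instructions in from elsewhere in the program, that utilization is simply lost.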

Instruction Issue Limitations and ILP vs TLP Extraction

Since a great deal of graphics code manipulates vectors like vertex positions (x,y,z,w) or colors (r,g,b,a), lots of things happen in parallel anyway. This is a fine and logical aspect of graphics to exploit, but when it comes down to it, the point of extracting parallelism is simply to maximize utilization of the hardware (after all, everything in a scene needs to be rendered before it can be drawn) and to hide latency. Of course, building a GPU is not all about extracting parallelism, as AMD and NVIDIA both need to worry about things like performance per square millimeter, performance per watt, and suitability to the code that will run on it.
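
For example, a single vec4 multiply in a shader decomposes into four independent scalar operations (hypothetical shader-style pseudocode; the variable names are ours):

color.r = texColor.r * light.r;
color.g = texColor.g * light.g;
color.b = texColor.b * light.b;
color.a = texColor.a * light.a;

None of these four multiplies depends on another, so a 5-wide unit can issue them together in a single clock, with one slot left over for something else.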

NVIDIA relies entirely on TLP (thread level parallelism), while AMD exploits both TLP and ILP. Extracting TLP is much, much easier than extracting ILP, as the only time you need to worry about inter-thread conflicts is when threads share data (which happens far less frequently than dependent instructions occur within a single thread). In a graphics architecture, which must run millions of threads per frame, there are plenty of threads with which to fill the execution units of the hardware, so exploiting TLP to fill the width of the hardware is all NVIDIA needs to do to get good utilization.

There are ways in which AMD's architecture offers benefits, though. Because AMD doesn't have to context switch wavefronts every chance it gets and is able to extract ILP, it can be less sensitive to the number of active threads than NVIDIA's hardware (though both require a very large number of active threads to hide latency). For NVIDIA we know that to properly hide latency, we must issue 6 warps per SM on G80 (we are not sure of the number for GT200 right now), which works out to a requirement for over 3,000 threads running at a time in order to keep things busy. We don't have similar details from AMD, but if shader programs are sufficiently long and don't stall, AMD can serially execute code from a single program (which NVIDIA cannot do without reducing its throughput by its instruction latency). While AMD's hardware can certainly handle a huge number of threads in flight at one time, and having multiple threads running will help hide latency, the flexibility to do more efficient work on serial code could be an advantage in some situations.
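
For reference, the arithmetic behind that 3,000-thread figure (G80 has 16 SMs, and a warp is 32 threads):

6 warps/SM * 32 threads/warp * 16 SMs = 3,072 threads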

ILP is completely ignored in NVIDIA's architecture, because only one operation per thread is performed at a time: there is no way to exploit ILP on a scalar, single-issue (per context) architecture. Since all operations need to be completed anyway, using TLP to hide instruction and memory latency and to fill the available execution units is a much less cumbersome way to go. We are all but guaranteed massive amounts of TLP when executing graphics code (there can be many thousands of vertices and millions of pixels to process per frame, and with many frames per second, that's a ton of threads available for execution). This makes the stark focus on TLP, with no attention paid to serial execution and ILP, not a crazy idea, but definitely a divergent one.

Just from the angle of extracting parallelism, we see NVIDIA's architecture as the more elegant solution. How can we say that? The ratio of realizable to peak theoretical performance. Sure, the Radeon HD 4870 has 1.2 TFLOPS of compute potential (800 execution units * 2 flops/unit (for a multiply-add) * 750MHz), but in the vast majority of cases we'll look at, NVIDIA's GeForce GTX 280 with 933.12 GFLOPS ((240 SPs * 2 flops/unit (for a multiply-add) + 60 SFUs * 4 flops/unit (when doing 4 scalar muls paired with MADs run on the SPs)) * 1296MHz) is the top performer.
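
Spelling out the peak-rate arithmetic behind those two numbers:

Radeon HD 4870:   800 units * 2 flops/clock * 750MHz = 1,200 GFLOPS = 1.2 TFLOPS
GeForce GTX 280:  (240 SPs * 2 flops/clock + 60 SFUs * 4 flops/clock) * 1296MHz = 720 flops/clock * 1.296GHz = 933.12 GFLOPS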

But that doesn't mean NVIDIA's architecture is necessarily "better" than AMD's. There are a lot of factors that go into making something better, not the least of which are real-world performance and value. But before we get to that, there is another important point to consider: efficiency.

Comments

  • FITCamaro - Wednesday, June 25, 2008

    Yes, I noticed it used quite a bit at idle as well. But its load numbers were lower. And as the other guy said, they're probably just still finalizing the drivers for the new cards. I'd expect both performance and idle power consumption to improve in the next month or two.
  • derek85 - Wednesday, June 25, 2008

    I think ATI is still fixing/finalizing PowerPlay; it should be much lower when the new Catalyst comes out.
  • shadowteam - Wednesday, June 25, 2008

    If a $200 card can play all your games @ 30+fps, does a $600 card even make sense knowing it'll do no better to your eyes? I see quite a few NV-biased elements in your review this time around, and what's all that about the biggest die size TSMC's ever produced? GTX's die may be huge, but compared to AMD's, it's only half as efficient. Your review title, I think, was a bit harsh toward AMD. By limiting AMD's victory only up to a price point of $299, you're essentially telling consumers that NV's GTX 2xx series is actually worth the money, which is terribly biased consumer advice in my opinion. From a $600 GX2 to a $650 GTX 280, Nvidia's actually gone backwards. You know when we talk about AMD's financial struggle, and that the company might go bust in the next few years... part of the reason why that may happen is because media fanatics try to keep things on an even keel, and in doing so they completely forget about what the consumers actually want. No offence to AT, but I've been into media myself, and I can tell when even professionals sound biased.
  • paydirt - Wednesday, June 25, 2008

    You're putting words into the reviewer(s) mouth(s) and you know it. I am pretty sure most readers know that bigger isn't better in the computing world; anandtech never said big was good, they are simply pointing out the difference, duh. YOU need to keep in mind that nVidia hasn't done a die shrink yet with the GTX 2XX...

    I also did not read anything in the review that said it was worth it (or "good") to pay $600 on a GPU, did you? Nope. Thought so. Quit trying to fight the world and life might be different for you.

    I'm grateful that both companies make solid cards that are GPGPU-capable and affordable, and that we have sites like anandtech to break down the numbers for us.

  • shadowteam - Wednesday, June 25, 2008

    Are you speaking on behalf of the reviewers? You've obviously misunderstood the whole point I was trying to make. When you say in your other post that AT is a reviews site and not a product promoter, I feel terribly sorry for you, because review sites are THE best product promoters around, including AT, and Derek pointed out earlier that AT's too influential for companies to ignore. Well, if that is truly the case, why not type in block letters how NV's trying to rip us off, for consumers' sake? Maybe just for once do it; it'll definitely teach Nvidia a lesson.
  • DaveninCali - Wednesday, June 25, 2008

    I completely agree. Anand, the GTX 260/280 are a complete waste of money. You are not providing adequate conclusions. Your data speaks for itself. I know you have to be "friendly" in your conclusions so that you don't arouse the ire of nVidia but the launch of the 260/280 is on the order of the FX series.

    I mean, you can barely test the cards in SLI mode due to the huge power constraints, and the price is ABSOLUTELY ridiculous. $1300 for SLI GTX 280. $1300!!!! You can get FOUR 4870 cards for less than this. FOUR OF THEM!!!! You should be screaming about how poor the GTX 280/260 cards are at these performance numbers and this price point.

    The 4870 beats the GTX 260 in all but one benchmark at $100 less. Not to mention the 4870 consumes less power than the GTX 280. Hell, the 4870 even beats the GTX 280 in some benchmarks. For $350 more, there shouldn't even be ONE game that the 4870 is better at than the GTX 280. Not even one, at more than 100% of the price.

    I'm not quite sure what you are trying to convey in this article, but at least the readers at Anandtech are smart enough to read the graphs for themselves. Given what has been written on the conclusion page (3/4 of it about GPGPU jargon that is totally unnecessary), could you please leave the page blank instead?

    I mean come on. Seriously! $1300 compared to $600 with much more performance coming from the 4870 SLI. COME ON!! Now I'm too angry to go to bed. :(
  • DaveninCali - Wednesday, June 25, 2008

    Oh, and one other thing. I thought Anandtech was a review site for the consumer. How can you not warn consumers against spending $650, much less $1300, on a piece of hardware that isn't much faster, and in some cases not faster at all, than another piece of hardware priced at $300/$600 in SLI? It's borderline scam.

    When you can't show SLI numbers because you can't even find a power supply that can provide the power, at least an ounce of criticism should be noted to try and stop someone from wasting all that money.

    Don't you think that consumers should be getting better advice than this? $1300 for less performance. I feel so sad now. Time to go to sleep.
  • shadowteam - Wednesday, June 25, 2008

    It reminds me of that NV scam from yesteryear... I'm forgetting a good part of it, but apparently NV and "some company" rounded up some forum/blog gurus to promote their BS, including a guy on AT forums who was eventually gotten rid of due to his extremely biased posts. If AT can do biased reviews, I can pretty much assure you the rest of the reviewers out there are nothing more than misinformed, over-arrogant media puppets. To those who disagree w/ me or the poster above, let me ask you this... if you were sent $600 hardware every other week, or in AT's case, every other day (GTX 280s from NV board partners), would you rather delightfully, and rightfully, piss NV off, or shut your big mouth to keep the hardware, and cash, flowing in?
  • DerekWilson - Wednesday, June 25, 2008

    Wow ...

    I'm completely surprised that you reacted the way you did.

    In our GT200 review we were very hard on NVIDIA for providing less performance than a cheaper high end part, and this time around we pointed out the fact that the 4870 actually leads the GTX 260 at 3/4 of the price.

    We have no qualms about saying anything warranted about any part no matter who makes it. There's no need to pull punches, as what we really care about are the readers and the technology. NVIDIA really can't bring anything compelling to the table in terms of price / performance or value right now. I think we did a good job of pointing that out.

    We have mixed feelings about CrossFire, as it doesn't always scale well and isn't as flexible as SLI -- hopefully this will change with R700 when it hits, but for now there are still limitations. When CrossFire does work, it works really well, and I hope AMD works this out.

    NVIDIA absolutely needs to readjust the pricing of most of their lineup in order to compete. If they don't, AMD's hardware will continue to get our recommendation.

    We are here because we love understanding hardware and we love talking about the hardware. Our interest is in reality and the truth of things. Sometimes we can get overly excited about some technology (just like any enthusiast can), but our recommendations always come down to value and what our readers can get from their hardware today.

    I know I can speak for Anand when I say this (cause he actually did it before his site grew into what it is today) -- we would be doing this even if we weren't being paid for it. Understanding and teaching about hardware is our passion and we put our heart and soul into it.

    There is no amount of money that could buy a review from us. No hardware vendor is off limits.

    In the past, companies have tried to stop sending us hardware because they didn't like what we said. We just go out and buy it ourselves. But that's not likely to be an issue at this point.

    The size and reach of AnandTech today is such that no matter how much we piss off anyone, Intel, AMD, NVIDIA, or any of the OEMs, they can't afford to ignore us and they can't afford to not send us hardware -- they are the ones who want and need us to review their products, whether we say great or horrible things about them.

    Beyond that, I'm 100% sure NVIDIA is pissed off with this review. It is glowingly in favor of the 4870 and ... like I said ... it really shocks me that anyone would think otherwise.

    We don't favor level playing fields or being nice to companies for no reason. We'll recommend the parts that best fit a need at a price if it makes sense. Right now that's the 4870 if you want to spend between $300 and $600 (for two).

    While it's really, really not worth the money, GTX 280 SLI is the fastest thing out there, and some people do want to light their money on fire. Whatever.

    I'm sorry you guys feel the way you do. Maybe after a good night's sleep you'll come back refreshed and see the article in a new light ...
  • formulav8 - Wednesday, June 25, 2008

    Even in the review you claim the 4870 is a $400 performer. So why don't you reflect that in the article's title by adding it after the $300 price? Would be better to do so, I think. :)

    Maybe say 4870 wins up to the $400 price point and likewise with the 4850 version up to the $250 price that you claimed in the article...

    This tweak could be helpful to some buyers out there with a specific budget and could help save them some money in the process. :)


    Jason
