Intel's Larrabee Architecture Disclosure: A Calculated First Move
by Anand Lal Shimpi & Derek Wilson on August 4, 2008 12:00 AM EST - Posted in GPUs
Thread and Data Management: It's Time to Blow Your Mind
With both the recent NVIDIA and AMD graphics hardware launches, we spent quite a bit of time talking about thread management. Since Larrabee is designed to be more of a collection of general purpose scalar and vector processing units, vertex, primitive and pixel data (along with the associated shader programs) are software managed. While we discussed what a context is for AMD and NVIDIA graphics hardware, a true context is going to be a different thing altogether on Larrabee.
We do have to point out before proceeding that NVIDIA and AMD are under no obligation to tell us how their architectures are physically implemented. It is entirely possible that many of the apparent attributes of the hardware are not actually attributes of the hardware at all, but simply reflections of how hardware resources are used. Recent discussions with both companies about certain realities of their hardware revealed that their belief is: if the system behaves like a specific physical implementation, then it effectively is the same as that physical implementation.
Of course, we disagree. And it is possible that some of this has more similarity with NVIDIA and AMD than they are letting on. But we'll go on what we've got for now, and assume that what Intel is doing is as divergent as it sounds.
Each Larrabee core on a chip (of which it seems likely there will be some multiple of 8 in the final product) can maintain 4 simultaneous software threads (4 contexts are kept active at a time). This gives the appearance of 4 virtual processors to software running directly on the hardware, even though all four threads are sharing a single core's execution resources. It is very likely that the major purpose of this is to hide some of the long latency we hit when going to memory for texture data and the like.
Now, for the purpose of graphics rendering using Intel's software rendering library, or as it emulates DirectX and OpenGL, a thread is set up to manage the resources for a larger group of instructions and data that Intel calls a "fiber". Normally a thread will manage 8 fibers at a time, and the hardware thread maintains a context in software for each fiber. The fiber's job is to manage the execution of data-parallel kernels on multiple groups of 16 "strands" (because the vector processor is 16-wide). A strand is what we have traditionally called a thread on other graphics hardware. The upshot is that Intel's hardware is executing threads in a way that emulates, in software, features other architectures implement in hardware.
To put it together a little better, imagine one of Intel's threads as one of NVIDIA's TPCs, a fiber as an SM, and a strand as a thread. Okay, so it isn't that simple (simple?). But it is a rough way of looking at it and a quick way of understanding why the naming is different here.
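As a rough illustration, the hierarchy above reduces to simple arithmetic. This is our own sketch, not Intel terminology or an Intel API: the 32-core count and the one-vector-group-per-fiber figure are assumptions for the sake of the example.

```python
# Hypothetical sketch of Larrabee's execution hierarchy using the
# counts from the article. All names and the core count are our own
# illustrative assumptions, not Intel's.
CORES = 32               # speculative high-end configuration
THREADS_PER_CORE = 4     # hardware contexts maintained per core
FIBERS_PER_THREAD = 8    # software-managed contexts per hardware thread
STRANDS_PER_GROUP = 16   # width of the vector unit

def strands_in_flight(groups_per_fiber=1):
    """Strands being managed simultaneously across the whole chip."""
    return (CORES * THREADS_PER_CORE * FIBERS_PER_THREAD
            * STRANDS_PER_GROUP * groups_per_fiber)

print(strands_in_flight())  # 16384 strands with one 16-wide group per fiber
```

Even with a single 16-wide group per fiber, a 32-core part would juggle over sixteen thousand strands at once, which is why the bookkeeping has to live in software.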
Let's take a deeper look at what goes on. With 4 threads per core (and at least 8, hopefully something more like 32, cores), 8 fibers per thread, and some multiple of 16 strands per fiber, we could end up with a huge number of strands being managed simultaneously. And these are just the active, running threads. Since Larrabee will be a CPU in the true sense of the term, we can have as many threads as necessary live and waiting for a time slice. On a normal CPU this would be managed by the operating system, but since Larrabee will first see the light of day as a graphics card, the driver will probably handle the timesharing duties an OS would normally perform.
While running ridiculous numbers of threads per core at a time might kill performance, unlike current GPUs, resource availability doesn't disrupt the creation of threads. Six of one, half dozen of the other? Maybe, and maybe not. Having active threads with data available to context switch to is key to hiding latency in NVIDIA and AMD hardware: if enough threads cannot be actively maintained, stalls happen and kill performance. Similar issues will impact Intel, but keeping the dual-issue in-order hardware busy with multiple threads might be more easily managed if it can fall back on traditional CPU thread management paradigms to handle an abundance of threads, which manage the software that manages the data-parallel kernels.
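A back-of-the-envelope model shows why keeping several ready contexts per core hides latency. The cycle counts below are made-up illustrative numbers, not Larrabee figures; the model simply assumes the core can switch to any other ready context whenever the running one stalls.

```python
def utilization(n_contexts, compute, latency):
    """Fraction of cycles a core does useful work, assuming each context
    alternates `compute` busy cycles with a `latency`-cycle memory stall
    and the core switches to another ready context during a stall.
    Illustrative model only; cycle counts are invented."""
    return min(1.0, n_contexts * compute / (compute + latency))

# One context stalling 190 cycles after every 10 cycles of work leaves
# the core 5% utilized; four contexts (Larrabee's per-core count) lift
# that to 20%, and the fibers/strands below them multiply the ready work.
print(utilization(1, 10, 190))   # 0.05
print(utilization(4, 10, 190))   # 0.2
print(utilization(20, 10, 190))  # 1.0
```

The point of the last line: with these invented numbers it takes 20 ready contexts to fully cover the stall, which is exactly the gap fibers and strands are meant to fill on top of the 4 hardware threads.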
101 Comments
christophergorge - Tuesday, August 5, 2008 - link
is it just me or does it look like another transmeta crusoe in the making?
Byte - Tuesday, August 5, 2008 - link
Looks like Puma will have a hard prey to hunt. This should be pretty successful, even if it will be underpowered in DX games, but that shouldn't matter as even now Intel is selling lots of graphics just because they almost force it onto OEMs. Intel could similarly force these onto OEMs, but at least this time it won't be a huge pile of crap.
ilkhan - Tuesday, August 5, 2008 - link
So is the on package GPU we expect to see in Havendale & Auburndale chips going to be larrabee chips? If anything I'd expect to see 8 or 16 core versions to be the onboard GPU for those. Probably 8 core, to keep costs down for onboard chips.
DeepThought86 - Monday, August 4, 2008 - link
Nice HPC platform, terrible idea for a graphics chip. Just look at the die allocation, it's optimized for instruction-heavy and data-poor tasks. Killer for BOINC and folding type stuff, but there's no way this general purpose use of transistor budget makes sense for graphics.
Power consumption for the high-speed ringbus will be killer as well. At idle, today's GPUs are quite efficient; Larrabee will burn watts doing nothing.
This architecture will occasionally handle a particular game excellently, but completely fall down in others. In a way it's the opposite of Nvidia or AMD today.
Ah well, they've had a good run since 2006, looks like they're headed for their next down cycle, just as AMD has started rising again...
ltcommanderdata - Monday, August 4, 2008 - link
From Intel's SIGGRAPH paper, Larrabee's claimed performance is pretty decent.
Intel's internal results are that Larrabee will only require about 10 cores running at 1GHz to maintain HL2 Episode 2 above 60fps at a 1600x1200 resolution. They estimate that a 25 core 1GHz Larrabee will be sufficient to maintain FEAR and Gears of War above 60fps at 1600x1200. FEAR is older than Gears of course, but FEAR had an occasional frame spike, probably on a more complicated frame, so 25 cores should guarantee a 60fps minimum. Of course, these are Intel's own benchmarks and they only tested a very small section of the game that they picked, but things do look promising. At the very least, performance is better than trying to play the game on current Intel IGPs.
iocedmyself - Tuesday, August 5, 2008 - link
1ghz core x 10 to maintain HL2 above 60 fps in 1600x1200...wow...that's on par with a x1800xt? at absolute most.
1ghz core x 25 for FEAR and gears of war @ 60 fps..that is the equivalent of a
$180 ATi 4850 running in 1920x1200; at 1600x1200 it does 90 fps, 50% better
...or
the same frame rate as the $290 ATI 4870...in 2560x1600, in 1600x1200 it does 114 fps, nearly twice the performance.
yes, they could scale it up to 50 cores, running 3ghz, and it would still only equal about 2/3 the processing power of a single core 4870. Intel's 80 core terascale chip does 1 teraflop/sec at 3.2ghz.
This is a horribly flawed design...they are doing the opposite of the logical step...in what twisted reality can someone say,
"well if GPU's are capable of delivering x20-x40 the performance of a desktop cpu package running at 1/5th the clock speed (or more accurately x80-x110 the performance on a core by core basis) the logical solution is to put 48 cpu cores in a single package!"
Intel couldn't manage to produce an IGP that ran the GUI of an operating system smoothly at all times, they took years longer than AMD to develop a native dual core die, years longer to be able to make a photocopy of their IMC, and continues to fail in 64bit computations comparatively...
but they think because they've developed a 32bit arch, built off a 10 year old design, and gained market control for less than two years after producing complete and utter crap for the previous 7 straight...that they can take the video card market from 2 companies each having 13+ years experience in the market.
AMD is already testing 40nm die 64bit dual/quad cpu with IMC supporting DDR2 AND DDR3, 1 or more gGpu's and a total of 6-10MB on die cache.
Native dual core Gpu's, cpu's, gpu's and a combination of both with built in memory...ya know, designs that actually have some promise...but they are going to nail an x86 in which developers will have to change the way they think, program and deploy ideas. We barely have software that will utilize 4 cores, let alone 40. Meanwhile all amd has to do is integrate the 780G IGP into a cpu package and intel is screwed.
But hey....i bet AMD could make a kick ass Gfx card if they took the r540 (x850xt PE core) gave it a die shrink down to 55nm and added SM4 support, then stuffed 50 or 60 into a single package it would do great.
HELL why stop there, just give the r770 a die shrink down to 40nm, put 10 cores to a gpu die,
make a dual gpu board,
2x5 gigs of GDDR5 memory clocked at 1250mhz (5ghz effective)
they would have a single card capable of doing more than 20 teraflops/sec.
BUT WAIT! THAN THEY HAVE CROSSFIREX THEY COULD HAVE 80 R770 CORES WITH 40 GIGS OF 5GHZ GDDR5 IN A SINGLE SYSTEM!!!!! 3DMARK06 WOULD BREAK 1,000,000 POINTS IN 4096x3200 WITH 1920 FSAA 1280 AF!!!
EVERYONE WOULD BE ABLE TO RUN CRYSIS ON ULTRA HIGH SETTINGS USING A MOVIE THEATER SCREEN FOR A MONITOR WITH NO JAGGED EDGES!!!!!!!!!!!
Then it would become aware, and improve the game code, crysis would spill over into Halo, halo would break into COD4, Fallout 3 would spill over into World of Warcraft where the characters would become self aware and program viruses to only infect intel based platforms...which would destroy Mac's completely,
IT WOULD BE THE FIRST DIGITAL STD!!!!! ZOMG
It would be sold with a 6000w PSU, and it would be Green because it would run on the power of internet porn, and have the power to heat your entire house....it would save the environment....ZOMFG!!
But eventually....intel would come back from the wreckage...
bringing with them the next revolutionary product...
the octo-pumped Itanium 4...with Netburst 3.4 arch, featuring 127 Pentium MX cores. Each core could handle 3 threads, and it would scale to 50,000mhz, with 2 terabyte SATA 4 hard drives used for the L1 cache of each core...and testing has shown that each core will only have to run 4.7ghz to achieve 60fps in the human genome project...
Sigh...sorry, I was pretending I worked at intel. It sure is fun to imagine what could be...isn't it?
ZootyGray - Tuesday, August 5, 2008 - link
Hey OC - I just had a flashback to the "jump to light speed" scene in StarWars. Dude, total nirvana, o yeh, thx for the ride. :)
BTW - my GF says, she heard a rumour that the whole thing runs on 'corn'.
I think it must be nextgen corn, cos that's a lotta teraflops. Does any of this convert to metric tonnes of refined bs? Anyway, I think I will wait for your next release.
And you accomplished that in less than one page? nano shrink, huh!
peace.
ltcommanderdata - Tuesday, August 5, 2008 - link
So your own estimates are:
"..or
the same frame rate as the $290 ATI 4870...in 2560x1600, in 1600x1200 it does 114 fps, nearly twice the performance. "
So you are admitting that a 1GHz x 25 core Larrabee could be about 50% the performance of a HD 4870. But Larrabee could be available in configurations up to 48 cores, so a 48 core Larrabee at 1GHz could match a HD 4870. Of course, launch clocks will be better than 1GHz, since Intel only clocked the Larrabee cores at 1GHz in their benchmarks because it's a convenient reference base. You say that Terascale clocked at 3.2GHz, but being more conservative, if Larrabee clocks in at 2GHz at launch with 48 cores, then it would be twice as fast as a HD 4870.
This is of course based on preproduction drivers; final performance may be higher. Admittedly, this is mainly hypothetical based on early Intel-provided data, but using your own figures for comparison, Larrabee may not be able to overtake the fastest GPUs available in 2009/2010, but it'll likely be competitive in the mid-range $200-$300 segment. Which is really all Intel needs, since the point is to get a more general purpose x86 based accelerator card into as many computers as possible. Gaming is just the vehicle to do it, and the mid-range is far higher volume than the top-end.
And in terms of flops, I believe it was in the SIGGRAPH paper somewhere that a baseline prototype Larrabee with 1 core at 1GHz gets about 32GFLOPS. Now no doubt scaling isn't perfectly linear, but just assuming it is through clock speed and core count, a 48 core Larrabee at 2GHz could peak at 3072GFLOPS, or 3 times that of a HD 4870. ATI and nVidia will obviously keep moving forward in the next year or two, just as Larrabee is still evolving, but for now, Larrabee isn't really in as bad a position as you make it out to be.
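For what it's worth, the linear-scaling arithmetic in the comment above checks out, keeping in mind that linear scaling in both cores and clock is an optimistic assumption and the 48-core 2GHz configuration is hypothetical:

```python
# Scaling the quoted SIGGRAPH baseline (~32 GFLOPS for one core at
# 1 GHz) linearly in core count and clock speed. Real scaling is never
# perfectly linear, so this is an upper bound, not a prediction.
base_gflops = 32            # 1 core @ 1 GHz, figure quoted in the comment
cores, clock_ghz = 48, 2.0  # hypothetical launch configuration
peak = base_gflops * cores * clock_ghz
print(peak)  # 3072.0
```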
JarredWalton - Tuesday, August 5, 2008 - link
What's worse is that there are all these assumptions made with no knowledge of the settings. 1600x1200 in HL2 at absolute maximum detail settings is nothing to scoff at, and certainly 60FPS would surpass an X1950 XTX. Are we running 4xAA or not? No idea from Intel, so we've got no reference point other than to say that it should be able to deliver playable performance.
FEAR is even better: 25 cores at 1GHz to hit 60FPS. Okay, that doesn't sound like a lot, but is that with or without 4xAA, and is it with or without soft shadows? Both of those factors can make a HUGE difference in performance. If they are enabled, 60 FPS at 1600x1200 is very impressive for early hardware. Now go with the assumption that Intel will hit clocks of at least 2GHz at launch and will likely have 32 or 48 cores. That should compare quite favorably with NVIDIA and ATI hardware next year.
Besides all of the above commentary on not knowing settings, we don't even know the scenes that were tested. Pretty much we have nothing to go on without a frame of reference. If Intel had said, "we achieve 60 FPS with 10 cores at 1GHz, and that compares to an 8800 GT running at 60 FPS with the same settings" we could start from a meaningful baseline. Which is probably why we didn't get that information.
Finally - and this is really the key - I believe all of the stuff right now is merely theoretical. They have modeled the performance of Larrabee in the various tests, but they do not have hardware and thus have not actually run any true tests. Okay, the modeling of the hardware is probably sufficient in all honesty, but some of you are talking as though these chips are actually up and running, and they're not (yet). We'll know a lot more in another year; until then, it all sounds very interesting but the proof as always is in the pudding.
The Preacher - Tuesday, August 5, 2008 - link
Man, you must have OC'ed yourself way too high! :D