Intel's 925X & LGA-775: Are Prescott 3.6 and PCI Express Graphics any Faster?
by Anand Lal Shimpi on June 21, 2004 12:05 PM EST - Posted in
- CPUs
PCI Express Graphics
When the first 440LX motherboards hit the streets with AGP support, it was exciting to finally have a new slot on motherboards that had been littered with PCI and ISA slots for so long. That excitement is duplicated once again with the new PCI Express x16 slots that have found their way onto 925X and 915 boards. So, what is the big deal behind PCI Express as a graphics bus?

For starters, AGP and PCI are parallel interfaces, while PCI Express is serial. Rather than sending multiple bits at a time, PCI Express sends only one bit per clock in each direction. Mildly confusing is the fact that multiple PCI Express lanes can be connected to one device (giving us PCI Express x4, x8, x16 and so on). Why is an array of serial interfaces different from a parallel interface? We're glad you asked.
Signaling is generally more difficult with a parallel communication protocol. One of the problems is making sure that all the data being sent in parallel makes it to its destination in a timely fashion (along with all the signaling and control flow data included with each block of data sent). This makes circuit board layout a little tricky at times, and forces signaling over cables to relatively short distances using equal-length lines (e.g. an IDE cable). The fact that so much care needs to be taken to get all the bits to their destination intact and together also limits signaling speed. Standard 32-bit PCI runs at 33MHz, and DDR memory is connected to the rest of the system in parallel at a few hundred MHz. On the other hand, a single PCI Express lane is designed to scale well beyond 2GHz.
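To put rough numbers behind that, here is a quick back-of-the-envelope sketch. The 2.5GT/s per-lane signaling rate and the 8b/10b line code are the commonly quoted first-generation PCI Express figures, assumed here for illustration rather than taken from this review:

```python
# Rough peak-bandwidth comparison (assumed figures, not from the review):
# the shared 32-bit/33MHz PCI bus versus first-generation PCI Express links.
# Gen-1 PCIe signals at 2.5GT/s per lane and uses 8b/10b encoding, so only
# 8 of every 10 bits on the wire carry payload.

def pci_mb_s(width_bits: int = 32, clock_hz: float = 33e6) -> float:
    """Peak PCI bandwidth in MB/s, shared by every device on the bus."""
    return width_bits * clock_hz / 8 / 1e6

def pcie_mb_s(lanes: int, rate_gt_s: float = 2.5) -> float:
    """Peak PCIe payload bandwidth in MB/s, per direction, per device."""
    return lanes * rate_gt_s * 1e9 * (8 / 10) / 8 / 1e6

print(f"PCI (32-bit/33MHz, shared): ~{pci_mb_s():.0f} MB/s")
for lanes in (1, 4, 16):
    print(f"PCIe x{lanes:<2} (each direction):  ~{pcie_mb_s(lanes):,.0f} MB/s")
# ~132 MB/s for the entire PCI bus vs ~250 MB/s for a single dedicated lane
# and ~4,000 MB/s for the x16 graphics slot.
```

One dedicated lane already outruns the whole shared PCI bus, and sixteen of them together are where the 4GB/s figures elsewhere in this article come from.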
The downside of this enhanced speed and bandwidth is bus utilization. Obviously, if we are sending data serially, we are only sending one bit every clock cycle - 32 times less data per clock than the current PCI bus. Add to that the fact that all low-level signaling and control information has to come over that same single line (well, PCIe actually uses a differential signal - two wires per bit - but who's counting). On top of that, serial links don't react well to long strings of ones or zeros, so extra signaling overhead is added to handle those situations. Parallel signaling has its share of problems, but a serial bus will always have lower utilization. Even in cases where a serial bus has a bandwidth advantage over a parallel bus, latency may still be higher on the serial bus.
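As a rough illustration of where that utilization goes, the sketch below combines the 8b/10b encoding loss with an assumed ~20 bytes of per-packet framing, sequence number, header and CRC - ballpark gen-1 overhead figures we are assuming for the example, not numbers from this review:

```python
# Illustrative only: fraction of raw wire bandwidth left for payload once
# 8b/10b encoding and per-packet overhead are accounted for. The 20-byte
# packet overhead (framing + sequence number + header + CRC) is an assumed
# ballpark figure for a first-generation PCIe transaction.

ENCODING_EFFICIENCY = 8 / 10   # 8b/10b: 10 wire bits per 8 payload bits
PACKET_OVERHEAD_BYTES = 20     # assumed framing/header/CRC cost per transfer

def effective_efficiency(payload_bytes: int) -> float:
    """Share of raw wire bits that end up as useful payload."""
    packet_efficiency = payload_bytes / (payload_bytes + PACKET_OVERHEAD_BYTES)
    return ENCODING_EFFICIENCY * packet_efficiency

for payload in (4, 64, 256):
    print(f"{payload:>3}-byte payload: ~{effective_efficiency(payload):.0%} of raw bandwidth")
# Tiny transfers waste most of the link; large bursts approach the 80% encoding ceiling.
```

In other words, the headline bandwidth numbers are only approached when data moves in large bursts - which, fortunately, is exactly the kind of traffic a graphics card generates.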
Fortunately, PCI Express is a very nice improvement over the current PCI bus. It's point-to-point, so we don't need to deal with bus arbitration; it's serial, so it is easy to route on a motherboard (just four data wires for PCI Express x1) and will scale up in speed more easily. It's also backwards compatible with PCI from the software's perspective (which means developers will have an easier time porting their software).
Unfortunately, it will be harder for users to "feel" the advantages of PCI Express over PCI, especially while the transition is under way: motherboards will keep supporting "legacy" PCI slots and buses, and companies will have to find the sweet spot between their PCI and PCI Express (or AGP and PCI Express) based cards. Software won't immediately take advantage of the added bandwidth, because it is common practice (and simply common sense) to develop for the widest audience and the highest level of compatibility.
Even after game developers make the most of PCI Express x16 in its current form, end users won't see that much benefit - there's a reason that high end GPUs have huge numbers of internal registers and a 35GB/s connection to hundreds of megabytes of local GDDR3. By the time games come out that would even think of using 4GB/s in each direction to and from main memory, we'll have even more massive amounts of still faster RAM strapped onto graphics boards. The bottom line is that the real benefit will present itself in applications that require communication with the rest of the system, like video streaming and editing, or offloading some other type of work from the CPU onto the graphics card.
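Putting the two figures from the paragraph above side by side makes the point (a trivial sketch using the article's own numbers):

```python
# Using the figures quoted above: a PCIe x16 link moves roughly 4GB/s in
# each direction, while a high-end card's local GDDR3 is good for ~35GB/s.
pcie_x16_gb_s = 4.0
local_gddr3_gb_s = 35.0
print(f"Local memory is ~{local_gddr3_gb_s / pcie_x16_gb_s:.1f}x faster than the x16 link")
# ~8.8x - which is why GPUs keep textures and geometry in local memory whenever possible.
```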
39 Comments
Phiro - Monday, June 21, 2004 - link
Great article, good pics on the new socket.

I'm glad to see PCI-E performance is within a % or two of AGP-8X, and that Nvidia & ATI are neck and neck, no big hit on either one.
I think it was clear to anyone who has been following the move to PCI-E that the onus wasn't on a performance increase for a single card - the move to PCI-E is an engineering one, not a siloed performance gain. The idea is that we have a much more robust bus, we can have many cards with tons of bandwidth instead of one, and we add a lot of versatility.
It's like the move from VLB to PCI - anyone remember that? PCI was a good, good standard. While graphics cards didn't make a huge jump in performance, you finally got away from those damn ISA slots.
Anyhow. I think PCI-E is a good standard, and I'm going to have it in my next system.
RyanVM - Monday, June 21, 2004 - link
Why weren't there more comparisons between equal processors on the different platforms, such as LGA775 P4E vs. S478 P4E (2.8, 3.2, etc.)? It seems to me that those would better isolate the chipset.
ZobarStyl - Monday, June 21, 2004 - link
I don't think AMD solutions with PCI-E will be any faster... the reason that the benches using the new chipset with DDRII were considered even ground is that the companion article on the new Intel chipsets showed there is, at this point, no difference between the two setups in terms of performance, only in price. This generation of PCI-E solutions based on AGP-designed chips (from both camps) wasn't really built with PCI-E bandwidth in mind, so the gains on any system are likely negligible. Once chips (and games too, I would assume) can be built with the bandwidth of PCI-E in mind, perhaps we will see a gain; right now, let rich kids upgrade while you sit back on a much cheaper AGP solution that gives the same perf. =)
CU - Monday, June 21, 2004 - link
I think ATI said they were buffers and not a bridge. I could be wrong though.
elephantman - Monday, June 21, 2004 - link
I'd have to agree with justly on that last one.

Also... I believe NVIDIA had posted an x-ray of ATI's PCIe core which showed a bridge solution and not a fully native PCIe solution as stated... maybe we'll get a response from ATI on this soon.
justly - Monday, June 21, 2004 - link
This quote from page two is pure rubbish:

"It used to be that the heatsink, not the socket's lever, was what provided the majority of force on the CPU itself to ensure proper contact with the socket."
The force exerted on the CPU by the heatsink is used to maximize heat transfer. If the heatsink force was to provide "contact with the socket" then there would be no need for a lever (at least on a ZIF socket). This would also mean that no one should worry that a CPU could burn up without a heatsink, as it would not have "contact with the socket" without the force of the heatsink pushing down on it.
mkruer - Monday, June 21, 2004 - link
To be fair, I would use the P4E for rendering IF it wasn't a power hog. But since I don't render anything, movies or otherwise, I guess not.
mkruer - Monday, June 21, 2004 - link
Hmm, interesting - PCI Express offers virtually no gain because of DDR-2 latencies? I wonder how much better PCI Express would be on an AMD64 with DDR-1? You don't have the DDR-2 latency issues, plus, because of HT (that's HyperTransport for you Intel people out there), I wonder if in the long run the AMD systems will perform better for the graphics card on average than any Intel chipset. Anyway, this confirms my suspicion: "never buy any first generation product from either company" - and in Intel's case this time you might want to wait for the Merom, Conroe and Tukwila chips, because I think everyone should stick a fork in the P4 - it's done! (pun intended)
phobs - Monday, June 21, 2004 - link
Interesting read.

Bit of an error on page 22: you say "concluding our AGP vs. PCI Express performance investigation." and then go on to have 2 more pages of benchmarks...