
PCI-Express

Posted: Wed Jul 21, 2004 4:08 pm
by Peijen
I was reading a white paper about it and have some questions; maybe the ECE people can answer them.

The paper mentioned a switch that some channels connect to. So is some PCI-E bandwidth shared? From the paper it seems like some bandwidth is dedicated to specific devices, such as a graphics card that connects to the controller directly, but other devices are connected to a switch. Is the switch only for routing device-to-device data, thus reducing controller load, or does it actually share bandwidth like an internet connection?

Posted: Wed Jul 21, 2004 4:35 pm
by VLSmooth
Disclaimer: I haven't kept up to date on PCI-Express

In my experience, a switch is almost always a hardware switch (a.k.a. a massive crossbar), meaning there's negligible controller overhead and a fixed amount of bandwidth (not shared). The switch simply allows point-to-point contact between nodes.

If this is an interconnect network where each node has its own switch, the bandwidth is limited by the channel architecture and routing algorithm, but somehow I doubt this is the approach PCI-Express took. I could be wrong...
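
Something like this toy model is what I mean by a crossbar (purely illustrative; all the names and numbers here are made up):

Code:

# Toy model of a hardware crossbar: any set of disjoint
# input->output connections can be active simultaneously,
# so one pair's traffic never eats into another pair's bandwidth.
class Crossbar:
    def __init__(self, ports):
        self.ports = ports
        self.connections = {}  # input port -> output port

    def connect(self, src, dst):
        # Each input drives at most one output and vice versa;
        # beyond that, connections don't contend with each other.
        if src in self.connections or dst in self.connections.values():
            raise ValueError("port already in use")
        self.connections[src] = dst

# Three simultaneous point-to-point links, all at full bandwidth:
xbar = Crossbar(ports=6)
xbar.connect(0, 3)
xbar.connect(1, 4)
xbar.connect(2, 5)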

Posted: Wed Jul 21, 2004 4:43 pm
by Peijen

Code:

           |----------|
gfxcard----|controller|
           |----------|
                |
             switch
                |
               ---
              | | |
              D D D
Here is an example of a setup. The gfx card has its own channel, but I am not sure whether the devices (D) share bandwidth or not.

And yes, the switch is a hardware crossbar thingy; the paper says it can be implemented separately or integrated into the controller chip.

Posted: Wed Jul 21, 2004 4:49 pm
by VLSmooth
Since there's only one switch, I'm inclined to believe bandwidth is not shared.

Then again, this is not a definitive answer, especially since I don't know the inner workings of the switch. For all I currently know, it could connect to only one device at a time and work in a round-robin fashion. Sorry :(

Posted: Wed Jul 21, 2004 5:10 pm
by Peijen
http://arstechnica.com/paedia/p/pci-express/pcie-1.html

Read this and tell me what you think. I am still not sure, but I am leaning towards shared bandwidth at the switch.

Posted: Wed Jul 21, 2004 5:48 pm
by VLSmooth
After a quick skim, here's my interpretation:
  • The switch routes device-to-device data packets
  • Each device has a dedicated number of lanes to the PCIe switch
  • The PCIe switch arbitrates where the packets it receives go
  • Therefore, the amount of bandwidth available to any device is limited by (1) the number of lanes or (2) the rate of the switch, whichever is lower (see the sketch after this list)
  • Since they emphasize processing power isn't the bottleneck (which is very true), the bandwidth is limited by (1) the number of lanes
  • Since the number of lanes is fixed and separate for each device, there is no sharing of bandwidth
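
If that reading is right, a device's usable bandwidth is just the minimum of its lane bandwidth and the switch's forwarding rate. Quick back-of-the-envelope (the 250 MB/s-per-lane figure is the commonly quoted PCIe 1.x number; the switch rate is made up for illustration):

Code:

# Per-direction bandwidth of a PCIe 1.x lane after encoding
# overhead, in MB/s (the commonly quoted figure).
LANE_MB_S = 250

def device_bandwidth(lanes, switch_rate_mb_s):
    # A device can't move data faster than its lanes allow,
    # nor faster than the switch can forward it.
    return min(lanes * LANE_MB_S, switch_rate_mb_s)

# Hypothetical switch that can forward 2000 MB/s:
print(device_bandwidth(1, 2000))   # 250  -> lane-limited
print(device_bandwidth(16, 2000))  # 2000 -> switch-limited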

Posted: Wed Jul 21, 2004 7:21 pm
by quantus
Ok, Vinny is right in that bandwidth from device to switch is NOT shared. The only shared bandwidth is from the switch to the rest of the system. This shared link is usually very high bandwidth and won't be a bottleneck unless you have a couple GeForce UltraMega9000000000s spewing bits at the same time. On a side note, the author of that article was confusing bits and bytes. If each link were 1 byte wide for sending and receiving, as he said, then a 16x link would be 256 wires, which is a LOT of wires.
The article wrote: a one-lane link must break down each packet into a series of bytes, and then transmit the bytes in rapid succession. The device on the receiving end must collect all of the bytes and then reassemble them into a complete packet. This disassembly and reassembly must happen rapidly enough that it's transparent to the next layer up in the stack. This means that it requires some processing power on each end of the link. The upside, though, is that because each lane is only one byte wide, very few pins are needed to transmit the data. You might say that this serial transmission scheme is a way of turning processing power into bandwidth; this is in contrast to the old PCI parallel approach, which turns bus width (and hence pin counts) into bandwidth. It so happens that thanks to Moore's Curves, processing power is cheaper than bus width, hence PCIe's tradeoff makes a lot of sense.
Also, "Moore's Curves"?!?! Who do they get to write these articles?! Geez.

Posted: Wed Jul 21, 2004 7:28 pm
by Peijen
quantus wrote:The only shared bandwidth is from the switch to the rest of the system.
Oh yeah, this is kind of what I was asking. I am asking about the bandwidth between the switch and the controller, not device to switch. I know the bandwidth from device to device and from device to switch is not shared; I am wondering whether the device-to-controller bandwidth is shared or not.

Sorry if I wasn't clear

Posted: Wed Jul 21, 2004 7:33 pm
by Peijen
quantus wrote:On a side note, the author of that article was confusing bits and bytes. If each link were 1 byte wide for sending and receiving, as he said, then a 16x link would be 256 wires, which is a LOT of wires.
According to the PCIe white paper, each lane can send/receive a byte with 8b/10b encoding. Of course, it could mean that they send 8 bits over 1 wire, not 1 bit over 8 wires.

Posted: Wed Jul 21, 2004 7:39 pm
by Peijen
Here is the white paper:

http://www.pcisig.com/specifications/pc ... epaper.pdf

They also have the spec on their site, but you have to be a member (which I am not) to download it.

Posted: Wed Jul 21, 2004 7:41 pm
by quantus
The switch-to-controller bandwidth may or may not be shared. It depends on the total bandwidth of your devices compared to the bandwidth of the switch/controller link. One of the article's pictures seemed to indicate two PCIe switches: one in the northbridge to link in the graphics card and one in the southbridge to connect the rest of the PCIe devices. There's no description of the link between the northbridge and southbridge, so I can't comment on the level of sharing, if any.
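
Whether that uplink is effectively shared is just an oversubscription question; rough sketch, all numbers hypothetical:

Code:

# Rough oversubscription check: is the switch's uplink to the
# controller a bottleneck for the devices hanging off it?
# All figures are hypothetical, in MB/s per direction.
devices = {"gigabit_nic": 125, "raid_card": 500, "tv_tuner": 30}
uplink = 2000  # hypothetical switch-to-controller link

demand = sum(devices.values())
if demand > uplink:
    print(f"oversubscribed: {demand} MB/s demanded vs {uplink} available")
else:
    print(f"fine: {demand} MB/s demanded, {uplink - demand} MB/s headroom")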

Jonathan, feel free to chime in to provide corrections, detail, and clarification.

Posted: Wed Jul 21, 2004 7:55 pm
by VLSmooth
quantus wrote:Also, "Moore's Curves"?!?! Who do they get to write these articles?! Geez.
Actually, I read Ars fairly often (or at least I used to) and they're pretty good regarding correctness and brevity. Moore's Curves does sound a tad fruity, though.

Posted: Wed Jul 21, 2004 11:30 pm
by quantus
Peijen wrote:
quantus wrote:On a side note, the author of that article was confusing bits and bytes. If each link were 1 byte wide for sending and receiving, as he said, then a 16x link would be 256 wires, which is a LOT of wires.
According to the PCIe white paper, each lane can send/receive a byte with 8b/10b encoding. Of course, it could mean that they send 8 bits over 1 wire, not 1 bit over 8 wires.
According to that PDF file you linked, there are two directions per link and a differential pair (two low-voltage wires that go either -+ or +- to send a 0 or 1) per direction, so it's 4 wires for a 1x link. This is still NOT 8 bits over 8 wires. Besides, they say a bunch of times that it's a serial interface, not a parallel interface. The 8b/10b is just a recoding of 8 bits into 10 bits in a special way to achieve a lower BER (bit error rate).
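
For what it's worth, the encoding overhead is easy to quantify, assuming the 2.5 Gbit/s per-lane, per-direction signaling rate the spec uses:

Code:

# Effective per-lane data rate after 8b/10b encoding: every 8 data
# bits go out as 10 bits on the wire, so 20% of the raw rate is
# encoding overhead (assuming the 2.5 Gbit/s PCIe signaling rate).
RAW_GBIT_S = 2.5

data_gbit_s = RAW_GBIT_S * 8 / 10   # 2.0 Gbit/s of actual data
data_mb_s = data_gbit_s * 1000 / 8  # 250 MB/s per lane, per direction

print(f"{data_mb_s:.0f} MB/s per lane per direction")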

Posted: Wed Jul 21, 2004 11:32 pm
by quantus
VLSmooth wrote:
quantus wrote:Also, "Moore's Curves"?!?! Who do they get to write these articles?! Geez.
Actually, I read Ars fairly often (at least used to) and they're pretty good regarding correctness and brevity. Moore's Curves does sound a tad fruity though.
See my last post. They were just plain wrong. I still have respect for anandtech.com at least.

Posted: Wed Jul 21, 2004 11:41 pm
by Dave
So is PCI Express better than AGP for video cards?

Posted: Wed Jul 21, 2004 11:52 pm
by quantus
Yes, PCIe in the >= 4x varieties is better than current AGP.

Posted: Wed Jul 21, 2004 11:59 pm
by Jonathan
You need a 3D card designed to take advantage of the extra bandwidth provided by multi-lane PCIe implementations. The first generation of PCIe cards are identical to their AGP counterparts and thus provide no performance advantage.

Posted: Thu Jul 22, 2004 12:27 am
by VLSmooth
quantus wrote:According to that PDF file you linked, there are two directions per link and a differential pair...so it's 4 wires for a 1x link
Just to clarify, those 4 wires can still only transmit 2 bits at a time (1 bit up, 1 bit down).
quantus wrote:On a side note, the author of that article was confusing bits and bytes.
Here, I believe you are correct; however, it might have been a minor oversight on their part. The granularity DOES seem to be byte-based, meaning that a single complete byte cannot span multiple lanes (as opposed to bit granularity).
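
If I understand the byte granularity right, striping across lanes would look something like this (purely illustrative):

Code:

# Illustrative byte striping: bytes are dealt round-robin across
# lanes, so each byte travels entirely on one lane and is never
# split across lanes (byte granularity, not bit granularity).
def stripe(packet: bytes, lanes: int):
    return [packet[i::lanes] for i in range(lanes)]

packet = b"ABCDEFGH"
for lane, data in enumerate(stripe(packet, lanes=4)):
    print(f"lane {lane}: {data}")
# lane 0: b'AE', lane 1: b'BF', lane 2: b'CG', lane 3: b'DH'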

Perhaps they mixed this up? As for Ars vs. Anandtech... let's just say I read Ars more frequently and leave it at that.