The Inter-Bridge Architecture

When the bridge concept started being used, the communication between the north bridge and the south bridge was done through the PCI bus, as shown in Figure 7. The problem with this approach is that the bandwidth available on the PCI bus (132 MB/s) had to be shared between all PCI devices in the system and all devices hooked to the south bridge, especially hard disk drives.

Figure 7: Communication between north and south bridges using the PCI bus
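The 132 MB/s figure falls out of the bus parameters directly. A quick back-of-the-envelope sketch (conventional PCI: 32-bit parallel bus, 33 MHz clock, one transfer per clock):

```python
# Conventional PCI bandwidth estimate: every PCI device -- and, in the
# architecture of Figure 7, the whole south bridge -- shares this one number.
bus_width_bytes = 32 // 8     # 32-bit parallel bus = 4 bytes per transfer
clock_mhz = 33                # 33 MHz clock, one transfer per clock cycle
bandwidth_mb_s = bus_width_bytes * clock_mhz
print(bandwidth_mb_s)         # 132 MB/s, shared by all devices on the bus
```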

When high-end video cards (at that time, video cards were PCI) and high-performance hard disk drives were launched, a bottleneck situation arose. For high-end video cards, the solution was the creation of a new bus connected directly to the north bridge, called AGP (Accelerated Graphics Port). This way the video card was not connected to the PCI bus and performance was not compromised.

The final solution came when the chipset manufacturers started utilizing a new approach: using a dedicated high-speed connection between north and south bridges and connecting the PCI devices to the south bridge. This is the architecture that is used today. Standard PCI slots, if available, are connected to the south bridge. PCI Express lanes can be available on both the north bridge chip and the south bridge chip. Usually, PCI Express lanes available on the north bridge chip are used for video cards, while the lanes available on the south bridge chip are used to connect slower slots and on-board devices, such as additional USB, SATA, and network controllers.

Figure 8: Communication between north and south bridges using a dedicated connection

The configuration of this dedicated connection depends on the chipset model. The first Intel chipsets to use this architecture had a dedicated 266 MB/s channel. This channel was half-duplex, meaning that the north bridge and the south bridge couldn’t “talk” at the same time: at any given moment, either one chip or the other was transmitting.

Currently, Intel uses a dedicated connection called DMI (Direct Media Interface), which uses a concept similar to PCI Express, with lanes using serial communications, and separate channels for data transmission and reception (i.e., full-duplex communication). The first version of DMI uses four lanes and is able to achieve a data transfer rate of 1 GB/s per direction (2.5 Gbps per lane), while the second version of DMI doubles this number to 2 GB/s. Some mobile chipsets use two lanes instead of four, halving the available bandwidth.
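The lane figures above can be checked with simple arithmetic, keeping in mind that first- and second-generation PCI Express signaling uses 8b/10b encoding, so only 8 of every 10 transmitted bits carry data. A small sketch (the DMI 2.0 rate of 5 GT/s per lane is the standard PCIe 2.0 signaling rate):

```python
def lane_bandwidth_gb_s(lanes, gt_per_s=2.5, encoding=8 / 10):
    """Usable bandwidth per direction for PCI Express-style serial lanes.

    Each lane signals at `gt_per_s` gigatransfers per second; 8b/10b
    encoding leaves 80% of the raw bit rate as payload.
    """
    usable_gbit = lanes * gt_per_s * encoding
    return usable_gbit / 8  # bits -> bytes

print(lane_bandwidth_gb_s(4))                # first-gen DMI, x4: 1.0 GB/s
print(lane_bandwidth_gb_s(4, gt_per_s=5.0))  # DMI 2.0, x4:      2.0 GB/s
print(lane_bandwidth_gb_s(2))                # mobile x2 DMI:     0.5 GB/s
```

Because it is full-duplex, each figure applies per direction; and since AMD's A-Link connections are PCI Express lanes under another name, the same arithmetic yields the 1 GB/s (A-Link/A-Link II) and 2 GB/s (A-Link III) numbers.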

AMD uses a dedicated datapath called “A-Link,” which is a PCI Express connection with a different name. “A-Link” and “A-Link II” use four PCI Express 1.1 lanes and, therefore, achieve a 1 GB/s bandwidth. The “A-Link III” connection uses four PCI Express 2.0 lanes, achieving a 2 GB/s bandwidth.

If you want to know the details of a given chipset, just go to the chipset manufacturer’s website.


Gabriel Torres is a Brazilian best-selling ICT expert, with 24 books published. He started his online career in 1996, when he launched Clube do Hardware, which is one of the oldest and largest websites about technology in Brazil. He created Hardware Secrets in 1999 to expand his knowledge outside his home country.