sharanbr :
Hello All,
Can anyone give me pointers as to where I can find a good block diagram or architecture overview for Xeon processors? Mainly, I am a little confused about the number of root complex ports in the system (not just the processor but the full system), how these root complex ports are used, the interface from the processor to the chipset, the chipset to the south bridge, the concept of the PCH, etc.
If someone can help me with some pointers, I would be very thankful.
Pinhedd :
Hi,
Intel's consumer microprocessors (i3, i5, and i7) and business microprocessors (Xeon E3, E5, E7) are usually derived from the same designs. Most of the technical information you're looking for is in Intel's published datasheets, which are available through ark.intel.com.
sharanbr :
Thanks. I got one PDF that talks about Intel architecture basics.
I have a few observations & questions. I would be glad to get comments ...
The processor has one PCIe interface apart from DMI, Display and Memory interfaces.
What is the PCIe interface from the processor needed for?
The processor also has a display port. Is this the same as the graphics port?
The chipset interface is through DMI. Chipset handles most of the peripheral interfaces.
Is this chipset the same as the PCH?
Since DMI is used to connect to the chipset, what is used for CPU-to-CPU communication in multi-CPU systems?
Pinhedd :
Intel's current lineup of microprocessors has the PCIe root complex integrated into the microprocessor. This allows the root complex to communicate with main memory at a much faster rate than it could if it had to cross another bus from the North Bridge to the CPU. Intel's PCH (formerly called the South Bridge) exposes an additional 8 PCIe 2.0 lanes, which do have to cross the DMI 2.0 bus (itself just a proprietary x4 PCIe 2.0 link) in order to reach the memory controller. This adds an additional off-chip hop to every transaction.
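To put rough numbers on that DMI bottleneck, here is a back-of-the-envelope sketch in Python (peak figures only; real throughput is lower once packet and protocol overhead are counted):

    # Rough peak-bandwidth comparison: CPU-attached PCIe vs. the PCH behind DMI 2.0.
    # PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding (80% efficiency).
    GT_PER_S = 5e9        # PCIe 2.0 transfer rate per lane
    ENCODING = 8 / 10     # 8b/10b line-coding efficiency

    def bandwidth_gb_per_s(lanes):
        """Peak one-direction bandwidth in GB/s for a PCIe 2.0 link."""
        return lanes * GT_PER_S * ENCODING / 8 / 1e9

    print(f"CPU PCIe 2.0 x16: {bandwidth_gb_per_s(16):.1f} GB/s per direction")
    print(f"DMI 2.0 (x4):     {bandwidth_gb_per_s(4):.1f} GB/s per direction")

Everything hanging off the PCH (its 8 PCIe lanes, SATA, USB, the NIC) shares that roughly 2 GB/s, which is why bandwidth-hungry devices go on the CPU's lanes.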
The PCIe lanes originating from the CPU are most commonly used for graphics cards, but they can be used for anything, including coprocessors (Xeon Phi, Tesla, FireStream), RAID controllers, PCIe-attached SSDs, etc. GPUs tend to benefit the most because they are both latency-sensitive and bandwidth-sensitive.
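If you want to see what is actually sitting on those lanes, here is a minimal sketch that walks Linux's sysfs and lists display controllers (it assumes the standard /sys/bus/pci layout; base class 0x03 is "display controller" in the PCI spec):

    #!/usr/bin/env python3
    # List PCI display controllers (GPUs) by walking Linux sysfs.
    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    for addr in sorted(os.listdir(PCI_ROOT)):
        with open(os.path.join(PCI_ROOT, addr, "class")) as f:
            dev_class = int(f.read().strip(), 16)
        if (dev_class >> 16) == 0x03:  # base class 0x03 = display controller
            print(f"{addr}: display controller, class {dev_class:#08x}")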
Intel CPUs that have onboard IGPs use an interconnect called the Flexible Display Interface (FDI) to communicate with the PCH. FDI traffic runs in parallel with DMI traffic. When an IGP is present, the PCH exposes the FDI interface as a pair of DisplayPort, HDMI, DVI, or VGA connectors. FDI is derived from DisplayPort.
In the past the chipset consisted of two chips, a North Bridge (also called a Memory Controller Hub, or MCH) and a South Bridge (which Intel called an I/O Controller Hub, or ICH; the PCH is its modern descendant). The North Bridge handled high-bandwidth, low-latency IO and was connected to the CPU(s) via the Front Side Bus. The FSB could be time-division multiplexed to allow multiple CPUs to share the bus; this was the case with all Core 2 Quad microprocessors, which were implemented as a pair of Core 2 Duo dies glued together in the same package. All microprocessors communicated with the North Bridge, which held the platform's memory controller. The North Bridge communicated with the South Bridge using a DMI bus (which is derived from PCIe).
Over time, Intel integrated the entirety of the North Bridge into the CPU package: the memory controller, the high-speed PCIe lanes, and the IGP where present. They also moved the North Bridge's DMI interface into the CPU package, allowing the CPU to communicate with the PCH directly over DMI/DMI2.
The integration of MCH components into the CPU package didn't occur overnight. The i7-900 series microprocessors had the memory controller integrated, but did not have integrated PCIe lanes or an integrated IGP; the PCIe lanes remained on the X58 chipset (an IO Hub, or IOH). The FSB was replaced with QPI. Unlike the FSB, QPI is a point-to-point interface that allows one device to communicate with another directly without sharing a bus; requests from one device can be routed through another to reach a more distant device. Rather than two or more CPUs communicating with the same MCH over a time-multiplexed FSB, one CPU communicated with the IOH via QPI, and additional CPUs reached the IOH by routing requests through the first CPU over separate QPI links. Since each CPU had its own triple-channel memory controller, the bottleneck caused by the FSB was greatly reduced; QPI overhead was incurred only when a CPU needed to act on memory attached to a different CPU.
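You can observe that remote-memory penalty directly on a multi-socket Linux machine: the kernel publishes a NUMA distance table in which the local node is 10 and nodes reached over the interconnect are larger. A minimal sketch, assuming the standard sysfs layout:

    #!/usr/bin/env python3
    # Print the kernel's NUMA distance matrix; larger numbers mean more
    # interconnect (e.g. QPI) hops between a CPU and that node's memory.
    import glob, os

    for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        node = os.path.basename(node_dir)
        with open(os.path.join(node_dir, "distance")) as f:
            print(f"{node}: {f.read().strip()}")

On a typical two-socket board this prints something like "node0: 10 21" and "node1: 21 10".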
Starting with the Xeon E5 series microprocessors (Sandy Bridge-E), DMI is used to communicate with the PCH (which is typically attached to the first socket), and QPI is used only to facilitate communication between sockets.
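On such a system you can also check which socket each PCIe device (and therefore the PCH) hangs off of: Linux reports a numa_node attribute for every PCI device (-1 if the platform doesn't say). A sketch under the same sysfs assumptions as above:

    #!/usr/bin/env python3
    # Show which NUMA node (socket) each PCI device's root port belongs to.
    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    for addr in sorted(os.listdir(PCI_ROOT)):
        try:
            with open(os.path.join(PCI_ROOT, addr, "numa_node")) as f:
                print(f"{addr}: numa_node {f.read().strip()}")
        except FileNotFoundError:
            pass  # some virtual devices may lack the attribute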