DMI vs QPI bus


redcom1227

I'm looking to buy a desktop for CAD design and hardcore gaming.

I'm confused about which processor is better: the original i7 series or the 2nd-generation i7 (and which model).

I was also wondering whether the DMI bus or the QPI bus is better.
 


Well, the highest-performing CPUs are the Core i7-980X/990X, but they're part of the Extreme Edition lineup and cost a pretty penny. These processors use QPI.

From a more cost-sensitive point of view, the Core i7-2600K delivers GREAT performance at a fraction of the cost. This processor uses DMI.

QPI is superior to DMI as a means of transmitting information from point A to point B. But the Core i7-2600K is equipped with a more powerful computational core (and has four of them). This more than makes up for QPI vs. DMI (which really doesn't matter much except when it comes to dual GPUs and PCI Express limitations due to DMI bandwidth restrictions).

If I were you, I'd wait a few months for the Intel LGA 2011 socket (QPI plus an updated and improved processing core) and/or AMD Bulldozer (depending on which one turns out to be superior). Worth the wait, IMHO.
 
"QPI vs DMI" is a completely false dichotomy.

DMI is used by both processors to connect the northbridge (or what used to be the northbridge) to the PCH, a.k.a. the southbridge.

QPI is used to connect the socket 1366 processors to the northbridge, while the northbridge is completely incorporated into socket 1155 processors, so no QPI bus is possible or necessary there.

The real differences are in the number of memory channels (three for 1366, two for 1155) and the PCIe lane configuration (2x16 for 1366, 1x16 or 2x8 for 1155). Make your assessment based on that, not on whether QPI is "better" than DMI.
 

This is exactly right.

QPI: 25.6 GB/s of bidirectional bandwidth at 6.4 GT/s, or 12.8 GB/s one way.
DMI: 2 GB/s of bidirectional bandwidth, or 1 GB/s one way.

The QPI has to handle the data from the X58 chipset in the northbridge as well as the data coming from the southbridge. With 2x16 PCIe 2.0 plus 4 extra lanes in the northbridge, that would be over 16 GB/s, and the maximum from the southbridge would be 1 GB/s.
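Here's a rough back-of-the-envelope sketch of those figures (a minimal example, assuming 2 bytes per transfer per direction on the QPI link and roughly 500 MB/s usable per PCIe 2.0 lane after 8b/10b encoding):

```python
# Rough bandwidth sketch for the X58 (LGA 1366) layout described above.
# Assumptions: QPI at 6.4 GT/s carrying 2 bytes per transfer per direction,
# and ~0.5 GB/s usable per PCIe 2.0 lane per direction after 8b/10b encoding.

QPI_RATE_GT = 6.4               # giga-transfers per second
QPI_BYTES_PER_TRANSFER = 2      # 16 data bits per direction

qpi_one_way = QPI_RATE_GT * QPI_BYTES_PER_TRANSFER       # GB/s, one direction
qpi_bidirectional = 2 * qpi_one_way

PCIE2_LANE_GBPS = 0.5                                    # GB/s per lane, per direction
northbridge_lanes = 16 + 16 + 4                          # 2x16 graphics + 4 extra lanes
pcie_demand = northbridge_lanes * PCIE2_LANE_GBPS

print(f"QPI one way:       {qpi_one_way:.1f} GB/s")         # 12.8 GB/s
print(f"QPI bidirectional: {qpi_bidirectional:.1f} GB/s")   # 25.6 GB/s
print(f"PCIe behind the northbridge: {pcie_demand:.1f} GB/s per direction")  # 18.0 GB/s
```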

As archibel said, DMI only has to deal with the southbridge, since the PCIe lanes are integrated into the processor. Processors on DMI-only boards can only use 1x16 or 2x8 lanes for graphics cards right now.

Most graphics cards don't saturate a PCIe x8 link yet anyway, except maybe dual-GPU cards like the 6990 or, soon, the GTX 590.
 
Northbridge? The DMI is the connection between the processor and the platform controller hub. I don't think you understand what you have written there. The 1x16 and 2x8 lanes are connected directly to the CPU.
Nope. Haserath is exactly right. Processors with QPI need the incredible bandwidth to connect to the northbridge, which is separate. In these cases (X58), the northbridge is connected to the southbridge with DMI.

In the case of 1155/1156 CPUs, the northbridge functions are integrated into the CPU (primarily PCI-E). As a result, there isn't actually a northbridge on these boards. Instead, there's just a so-called platform controller hub, which is the equivalent of the southbridge on an X58 or older board. The processor connects to this with DMI, just like the connection from the northbridge to the southbridge on an X58. There is no QPI equivalent per se, since the CPU and northbridge are on the exact same die (so, all the communications handled by the QPI on an X58 board are actually internal on an 1156 or 1155 CPU).

The only time when a high-bandwidth external connection makes a large difference is for multiple CPU boards. On dual CPU workstations, each CPU has 2 QPI links, one of which goes to the northbridge, and one of which goes to the other CPU to allow for extremely fast inter-CPU communication. QPI is essential for this kind of communication, but for single CPU usage, it's not a big deal.
 
What if you have an NF200 chip like I do on my 1155 P67A-UD7? I'm curious what kind of bandwidth I'm running between my PCIe lanes and my CPU. I run 2x SLI with this board at 16x/16x. I'm really curious whether I'm getting the kind of bandwidth I was receiving on my previous 1156 ASUS P7P55 WS SuperComputer board. I was running the QPI at roughly 3.2 GHz. The motherboard's BIOS on the UD7 does not give any reading or explanation, and neither do any of the utilities I run.
 
Let me have a go

QPI is a high-speed point-to-point connection between processors, and between processors and the I/O hub. Each processor (in the case of Xeons) has its own dedicated memory, and if it needs to use another processor's memory it can do so through QPI. That's why Xeons for dual-socket server boards have two QPI links.

Now, DMI is the Direct Media Interface. It's a 2.5 GT/s, four-lane, bidirectional point-to-point interface that shares the PCIe reference clock. In short, it is used by the different interfaces to communicate with each other. The fancy term for it is a proprietary interconnect.

QPI has a theoretical bandwidth of up to 25.6 GB/s.
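To put a number on the DMI figure quoted above, here's a minimal sketch assuming four lanes at 2.5 GT/s with 8b/10b encoding (10 bits on the wire per 8 bits of payload):

```python
# Effective DMI bandwidth, assuming a four-lane link at 2.5 GT/s per lane
# with 8b/10b encoding.

LANES = 4
RATE_GT_PER_LANE = 2.5        # gigabits on the wire per second, per lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 8 payload bits per 10 wire bits

payload_gbit_s = LANES * RATE_GT_PER_LANE * ENCODING_EFFICIENCY  # Gb/s of payload
one_way_gb_s = payload_gbit_s / 8                                # GB/s per direction

print(f"DMI one way:       {one_way_gb_s:.1f} GB/s")       # 1.0 GB/s
print(f"DMI bidirectional: {2 * one_way_gb_s:.1f} GB/s")   # 2.0 GB/s
```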


Am I correct to assume that with the NF200 chip I would receive 8 x 2.5 GT/s, which would be 20 GT/s? The NF200 nearly classifies as a northbridge on this motherboard.
 

You never had a QPI. 1156 had DMI, just like 1155. The only consumer boards with QPI are 1366.
 
Well, I'm sorry, but the ASUS BIOS of that particular board stated it as QPI, and the frequency options were listed as QPI as well. Maybe that's because it too had an NF200 chip on board running 16x/16x.
 
On that board, though, I had frequency options for the board's bus speed, and I do not on this UD7 board. http://www.xtremesystems.org/forums/showthread.php?t=236801 is a link to a review showing the QPI.

So, back to the original question: what am I seeing in terms of motherboard bus speeds on this P67A-UD7 board? I really wish Gigabyte had documented it a bit more. It would be even better if I could have my way with it like I could with the ASUS board, but on this platform that is more or less out of the question.
 
The Lynnfield and Clarkdale i3, i5, and i7s all have a QPI bus internal to the die (or internal to the package, in the case of Clarkdale). It is possible the BIOS permits the frequency of the QPI clock to be adjusted independently of the core clock, though in general this is not done.
 

The NF200 does not classify as a northbridge. The NF200 chip is fed by 16 PCIe 2.0 lanes from the processor, and it then broadcasts the data received over those 16 lanes to the 32 PCIe 2.0 lanes it provides.
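So the bandwidth between the cards and the CPU is still capped by those 16 upstream lanes. A quick estimate (assuming roughly 500 MB/s per PCIe 2.0 lane per direction):

```python
# Bandwidth through an NF200, assuming it is fed by 16 PCIe 2.0 lanes from
# the CPU at ~0.5 GB/s per lane per direction.

UPSTREAM_LANES = 16     # lanes between the CPU and the NF200
DOWNSTREAM_LANES = 32   # lanes the NF200 provides to the graphics slots
PCIE2_LANE_GBPS = 0.5   # GB/s per lane, per direction, after 8b/10b

upstream = UPSTREAM_LANES * PCIE2_LANE_GBPS      # 8.0 GB/s each way
downstream = DOWNSTREAM_LANES * PCIE2_LANE_GBPS  # 16.0 GB/s each way, but funneled
                                                 # through the 8 GB/s upstream link

print(f"CPU <-> NF200:  {upstream:.1f} GB/s each way")
print(f"NF200 <-> GPUs: {downstream:.1f} GB/s each way (shared by the two x16 slots)")
```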
 
Since when does the i5 have a QPI link?

It's internal, so the end-user doesn't see it, but that doesn't mean it's not there. What do you think links the formerly "northbridge" chipset die to the CPU die on Clarkdale/Arrandale? It's completely internal on the Lynnfield (monolithic) die.
 
Sorry, I'm just crappy at explaining this stuff. Maybe xbitlabs is better:

"At the same time, the memory controller is located inside a separate die than the computational cores. That is why the memory subsystem works slower, since there is now an additional QPI bus on the stretch between the processor and the memory, which helps the dies inside Clarkdale to communicate with one another."

http://www.xbitlabs.com/articles/memory/display/clarkdale-memory.html

It's shown in die diagrams as "MCP Interface" (MCP = Multi-chip package)

It's why CPU-Z lists a QPI link frequency on Clarkdale.
 
DMI, QPI, and PCIe are almost the exact same thing. They're all packet-based, they're all bidirectional point-to-point links, and they all use 8b/10b encoding to transmit data. So it's almost one and the same thing, just capped at different theoretical bandwidths by Intel. They're just named differently to make it easier to differentiate between the theoretical bandwidth of each and the platform.


No, they really aren't the same thing, and yes, Lynnfield has a QPI. Internally. It connects the CPU die itself to the memory controller die. The external connection is DMI.
 
PCIe and DMI = almost the same thing
QPI = entirely different protocol

Both are high speed differential serial interfaces, but beyond that they are quite different. I understand why it might look like this to the end user, but trust me: having read the internal specs, they are NOT the same.

That said, it doesn't change the answer to the original poster: DMI vs QPI is entirely a false dichotomy. Both the 1156 and 1366 platforms have QPI and DMI; QPI is just "hidden" on socket 1156, and Intel marketing on its website makes it a bit convoluted to figure this out because it sets up "QPI" and "DMI" as if they were in contention, when they're complementary.

But as lemlo has discovered, some overclocking BIOSes for 1156 have options to tweak the QPI data rate of this hidden interface beyond the spec'd value, thus changing the bandwidth between the cores and the PCIe/memory interface.

Architecturally, 1156 and 1366 (uniprocessor, anyway) are extremely similar. Both have CPU cores which communicate to northbridges with QPI and northbridges which communicate to DDR3, PCIe, and southbridges (through DMI). It's just on 1156, the entire "northbridge" has been made part of the die (in Lynnfield) or package (in Clarkdale) while on 1366 only the DDR3 has moved to the CPU die; the remainder of the 1366 "northbridge" functionality remains on the X58.

The only major differences between the two platforms are: 1) 1156 has two channels of DDR3 while 1366 has three, and 2) the QPI on 1366 carries more data, allowing more PCIe lanes on the X58.

Those are the differences people should use to decide between 1366 and 1156 (or 1155, for that matter): do they need the extra 16 PCIe lanes, or the extra memory bandwidth? If not, go with the cheaper platform.
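To put rough numbers on the memory side (a sketch assuming DDR3-1333, i.e. 1333 MT/s on 64-bit channels):

```python
# Peak theoretical memory bandwidth per platform, assuming DDR3-1333
# (1333 MT/s) on 64-bit (8-byte) channels.

MEGATRANSFERS_PER_S = 1333
BYTES_PER_TRANSFER = 8

per_channel_gb_s = MEGATRANSFERS_PER_S * BYTES_PER_TRANSFER / 1000   # ~10.7 GB/s

print(f"LGA 1156/1155, dual channel: {2 * per_channel_gb_s:.1f} GB/s")   # ~21.3 GB/s
print(f"LGA 1366, triple channel:    {3 * per_channel_gb_s:.1f} GB/s")   # ~32.0 GB/s
```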
 


If what you say is true, then the only QPI any 1156/1155 platform "could" have would be on-chip (on the CPU), as the communication pathway between the on-chip PCIe controller and the central processing unit.

Having read several white papers on 1155 and 1156, I see no mention of these chips having an on-chip QPI link (in fact, Intel states the opposite). If an 1155 or 1156 motherboard makes this statement... it is likely to be a terminology error on their part.
 


Exactly.

Having read several white papers on 1155 and 1156, I see no mention of these chips having an on-chip QPI link (in fact, Intel states the opposite). If an 1155 or 1156 motherboard makes this statement... it is likely to be a terminology error on their part.

1155 has no such interface, 1156 did. I can't be certain why architectural decisions got made like this as I wasn't involved in them, but it's probable that the decision to go with a two-chip platform for client came relatively late in the game for Nehalem and it was more expedient to just take the architecture as-is and tweak it for two-chip rather than redesign without QPI.


Edit:
Guys, let's just drop it. Anything further I could discuss would bring me dangerously close to NDA, and frankly for something which has next to no practical implications it's just not worth it. If you think 1156 has no internal QPI, I'm more than willing to let it go at that.
 
So can someone tell me what the max bandwidth would be for the DMI on a Core i5-760? I am trying to run two 6 Gb/s (SATA III) SSDs in RAID 0 and am bottlenecked; I get a maximum read speed of 550 MB/s and a write speed of 411 MB/s. I am wondering whether, if I went with a PCIe RAID card, I would bypass the DMI (which may or may not be a constraint). If it is a constraint, would the Sandy Bridge series with DMI 2.0 alleviate the problem?
 
First, you should check the date before resurrecting old threads.

Second, the bottleneck is the two SSDs. The DMI can provide up to 2 GB/s over the link to the southbridge. In fact, those SSDs might be SATA II on their interface (which would only run at 3 Gb/s even on a SATA III link), but even if they are SATA III, they might only provide around 275 MB/s each anyway. Don't waste your money on a RAID card unless you have 4+ SSDs that need more bandwidth.
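A quick sanity check of those numbers (assuming roughly 275 MB/s per drive on the chipset's 3 Gb/s SATA ports and about 1 GB/s per direction over DMI):

```python
# Sanity check for the RAID 0 result above, assuming ~275 MB/s per drive on
# a 3 Gb/s SATA port and ~1 GB/s (1000 MB/s) per direction over the DMI link.

SATA_PORT_MB_S = 275      # realistic ceiling per drive on a 3 Gb/s port
DMI_ONE_WAY_MB_S = 1000
NUM_SSDS = 2

raid0_ceiling = NUM_SSDS * SATA_PORT_MB_S   # ~550 MB/s

print(f"RAID 0 ceiling from the SATA ports: {raid0_ceiling} MB/s")
print(f"DMI ceiling (one direction):        {DMI_ONE_WAY_MB_S} MB/s")
print("Likely bottleneck:",
      "the SATA ports/drives" if raid0_ceiling < DMI_ONE_WAY_MB_S else "the DMI link")
```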
 
First, resurrecting an old thread is not an offence... it should be considered a good thing that people do their research and read other people's posts before starting a new thread.

Second, you should only reply if you have a valid answer, which apparently you do not!! Many SSDs in RAID 0 are getting upwards of 1000 MB/s. I am wondering whether it may be my chipset's DMI that is the bottleneck.
 

It is a good thing to research before making your own thread, but it would also be much easier to help you if you made your own thread! Everyone would be able to tell who the OP is and what the original question was.

May I ask which SSDs are in RAID 0 and what type of benchmark you're running? Different workloads on an SSD will produce different results.

Random reads/writes will be quite a bit slower than sequential reads/writes. The max throughput of an SSD is during sequential reading and writing (and reading will always be faster).
 


This is correct.

And if I remember correctly, Sandy Bridge (LGA 1155) uses an internal ring bus, much like the HD 2900 that had a 512-bit ring bus, to allow the cores to communicate. It's probably something they took from Terascale, which also had a sort of ring bus to communicate between its 80 cores, and the current Larrabee also uses one.
 



I get what you are saying! LOL
 