Intel Launches Light Peak Tech as "Thunderbolt"

Guest
Nice save from Intel there: rename it Thunderbolt to distract from the fact that first-gen copper-based devices will not be compatible with later-gen light-based devices. My prediction: no one other than Mac folks will buy into the copper Thunderbolt; everyone else will wait for the real Light Peak.
 

fellskrazykayaker

[citation][nom]JohnnyLucky[/nom]Probably should have kept the Light Peak name. It was already branded. Thunderbolt is the nickname of a United States Air Force jet fighter.[/citation]
I agree Light Peak is a better name, but it doesn't make any sense with the copper implementation, which is why I think it was changed.
 

bto

[citation][nom]bavman[/nom]Thunderbolt says 10gbps, while a normal 2.0/2.1 pci-e x16 lane can handle 64gbps. Granted no card will max out all that bandwidth yet, but its still far off from pcie. You might be able to run some lower end to more mainstream cards well, but definitely not highend.[/citation]
In benchmarks of most high-end cards, the difference between 8 lanes and 16 lanes is very small, not even noticeable if you are playing the game. That means half of 32GBps would be enough, and these are actually 20GBps, so that will be plenty for mainstream gaming. If you're an extreme gamer, you'd have money for a dedicated one anyway, or if truly extreme you've got your SLI laptop. Heck, who's to say they don't support multiple connections? The real problem is latency, but if we are talking light here, then 20 feet away is still faster than 1 mm from the CPU.
 

kinggraves

So you guys know that PCIe v3 gets 8Gbps per lane, right? I mean, there's also the factor of interference that any cord brings into play, with the length of travel, shielding, signal loss due to imperfections at the connection points, etc. It's hard to compete with a direct path between the GPU and main CPU. I'm pretty sure the people at Intel realize this, since the kinds of numbers they're throwing out are comparable to USB/SATA, not PCI.

This does still open up some interesting possibilities, though. The PC industry sure does love its future in the clouds, so maybe we will see "graphics boxes" down the road in an attempt to kill off desktops.
 

PudgyChicken

Well, it's about damn time Intel released Light Peak. Though I feel it would have been better if they had integrated the controller into the Cougar Point chipsets and released it with the 2nd-gen Core series. Let's face it, Macs are really not worth $1k+ for the average consumer. Unless you're doing some serious video/audio editing, you can use a far cheaper PC. If Intel really wants this technology to take off, they need to integrate it into PCs. That's another thing that irks me. PC = personal computer. Well, last time I checked, Macs were personal computers. I guess that makes Apple communist if they differentiate. "Collective Computers" lol
 

mwilliam12

I need to clear up a few things here. The article says that Thunderbolt features two bi-directional channels which offer 10 Gbps each. I am left to assume there are 4 channels altogether - 2 upstream and 2 downstream - and at 10Gbps each, would that not equate to 40Gbps total bandwidth? In response to some people's posts about "USB's real-world bandwidth": USB does, in fact, reach its theoretical limit on a daily basis, but I do believe that people are confusing the 480Mbps that USB 2.0 offers with 480MBps. There is a significant difference. A bit, represented with a lowercase b, is the smallest unit of data; a byte is a combination of 8 bits. Therefore the real-world limit of USB 2.0 is only 60MBps.

BTO: when you say 32GBps, do you mean 32Gbps? If not, just be careful that you're not saying something you didn't mean to. At 32MBps, you're in fact saying it's 108Mbps, which, in response to bavman who was talking about bits, is wildly inaccurate. But you're right, 32MBps is plenty for all graphics cards, well into the future.
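For anyone untangling the bits-vs-bytes point above, here's a minimal Python sketch of the conversion (the 480 figure is USB 2.0's nominal signaling rate; treat the 60MBps result as the theoretical ceiling):

[code]
# Bits (b) vs bytes (B): USB 2.0's nominal 480 Mb/s works out to 60 MB/s.
BITS_PER_BYTE = 8

usb2_megabits_per_s = 480                                  # nominal USB 2.0 signaling rate
usb2_megabytes_per_s = usb2_megabits_per_s / BITS_PER_BYTE

print(f"{usb2_megabits_per_s} Mb/s = {usb2_megabytes_per_s:.0f} MB/s")   # 480 Mb/s = 60 MB/s
[/code]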
 

shadowmaster625

Hopefully this means cheaper external GPU solutions. I would love to have a portable HD5670 for $100 that I could take with me wherever I go and use on whatever computer I want. Of course by the time we get enough "Thunderbolt" enabled machines to make this worthwhile, an HD 5670 will only cost $25...
 

bavman

[citation][nom]mwilliam12[/nom]I need to clear up a few things here. The article says that Thunderbolt features two bi-directional channels which offer 10 Gbps each. I am left to assume there are 4 channels all together - 2 upstream and 2 downstream - at 10Gbps each would that not equate to 40Gbps total bandwidth? In response to some people's posts about "USB's real-world bandwidth", USB does, in fact, reach it's theoretical limit on a daily basis, but I do believe that people are confusing the 480Mbps that USB 2.0 offer with 480MBps. There is a significant difference. A bit, represented with a lower case b represents the lowest common denominator when it comes to memory. A byte is a combination of 8 bits. Therefore the real-world limit of USB 2.0 is only 60MBps.BTO: when you say 32GBps, are you meaning 32Gbps? If not, just be careful that you're saying something you didn't mean to. At 32MBps, you're in fact saying it's 108Mbps, which in response to bavman who was talking about bits, is widely inaccurate. But, you're right, 32MBps is plenty for all graphics cards, well into the future.[/citation]

A quick Google search will show you that PCIe 2.0 lanes are 4Gbps, not 4GBps or 4MBps or 4Mbps, where B is for byte and b is for bit. So an x16 link can carry 64Gbps, or 8GBps.
 

bto

[citation][nom]kinggraves[/nom]So you guys know that PCIe v3 gets 8Gbps per lane right? I mean, there's also the factor of interference that any cord brings into play with the length of travel, shielding, signal loss due to imperfections at the connection points, etc. It's hard to compete with a direct path between the GPU and main CPU. I'm pretty sure the people at Intel realize this, since the types of numbers they're throwing out are comparative to USB/SATA, not PCI.This does still bring some interesting possibilities though, the PC industry sure does love their future in the clouds, so maybe we will see "graphics boxes" in the future in an attempt to kill off desktops.[/citation]

You don't need to worry so much about shielding an optical connection, which is probably what they are talking about when it comes to 20Gbps and no latency, and that makes your initial point void. I also remember that we never maxed out the AGP 8X interface until long after PCI Express was solidified, with new cards adapted to old slots to revive some gaming machines.
 

bto

[citation][nom]mwilliam12[/nom]I need to clear up a few things here. The article says that Thunderbolt features two bi-directional channels which offer 10 Gbps each. I am left to assume there are 4 channels all together - 2 upstream and 2 downstream - at 10Gbps each would that not equate to 40Gbps total bandwidth? In response to some people's posts about "USB's real-world bandwidth", USB does, in fact, reach it's theoretical limit on a daily basis, but I do believe that people are confusing the 480Mbps that USB 2.0 offer with 480MBps. There is a significant difference. A bit, represented with a lower case b represents the lowest common denominator when it comes to memory. A byte is a combination of 8 bits. Therefore the real-world limit of USB 2.0 is only 60MBps.BTO: when you say 32GBps, are you meaning 32Gbps? If not, just be careful that you're saying something you didn't mean to. At 32MBps, you're in fact saying it's 108Mbps, which in response to bavman who was talking about bits, is widely inaccurate. But, you're right, 32MBps is plenty for all graphics cards, well into the future.[/citation]

Actually, I misread the article, thinking it was 20 GBps when it's really only 20 Gbps. Still, it's much faster than 6 ;) Now that I know the real speed we are talking about, I think this will be used for SSD interconnects sooner than for GPUs, since SSD speeds are increasing ridiculously faster than standard HDDs every year; we've doubled throughput, going from 80 to 230 and now 500+ MB/s.

SATA 6Gbps is going to die off quicker than we've ever seen a connection type die off, due to the massive tech boom in SSDs. Want more bandwidth? Stack more layers! Same thing as a RAID 0 array, only limited by cost and size. There is no speed limit (other than the connection)! If some company designed an optical-based interconnect that could push 32GBps, it wouldn't be long until it got utilized to nearly full potential due to the scaling available with NAND flash.
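A rough sketch of that bottleneck point, for the curious (the ~600 MB/s ceiling follows from 6 Gb/s with 8b/10b coding; the 250 MB/s per-stack figure is purely illustrative):

[code]
# SATA 6 Gb/s caps out near 600 MB/s (6 Gb/s raw * 8/10 coding / 8 bits per byte),
# while stacking more NAND channels (RAID 0 style) keeps raising raw drive throughput.
sata3_ceiling_mb_s = 6_000 * (8 / 10) / 8            # ~600 MB/s usable

single_stack_mb_s = 250                              # illustrative per-stack figure
for stacks in (1, 2, 4):
    raw = single_stack_mb_s * stacks
    delivered = min(raw, sata3_ceiling_mb_s)
    print(f"{stacks} stack(s): {raw} MB/s raw -> {delivered:.0f} MB/s over SATA 6 Gb/s")
[/code]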
 

bto

[citation][nom]bavman[/nom]A quick google search will show you that pci-e 2.0 lanes are 4Gbps not 4GBps or 4MBps or 4Mbps. Where B is for byte and b is bit. So a x16 lane can carry 64Gbps or 8GBps.[/citation]
Actually, no.

PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007.[10] The PCIe 2.0 standard doubles the per-lane throughput from the PCIe 1.0 standard's 250 MB/s to 500 MB/s. This means a 32-lane PCI connector (x32) can support throughput up to 16 GB/s aggregate. The PCIe 2.0 standard uses a base clock speed of 5.0 GHz, while the first version operates at 2.5 GHz.
 

bto

[citation][nom]mwilliam12[/nom]I need to clear up a few things here. But, you're right, 32MBps is plenty for all graphics cards, well into the future.[/citation]
You mean 32GBps :p

just be careful yourself ;)
 

jprahman



Wrong again. 500MB (bytes) per second * 8 bits per byte = 4000Mb (bits) per second = 4Gb (bits) per second. So 500 megabytes (MB) per second is the same thing as 4 gigabits (Gb) per second. C'mon, guys, get your units right.

PCIe 2.0 runs at 5GHz per lane, meaning that data gets transmitted at a rate of 5Gbits per second (one bit per clock cycle at 5GHz). However, PCIe 2.0 uses an 8b/10b encoding scheme where, out of 10 bits transmitted, only 8 bits are actual data. So, applying that 8/10 ratio to the 5Gbits per second raw rate, you get a final useful data rate of 4Gbits per second.

PCIe 3.0 runs at 8GHz, which is less than twice the 5GHz of PCIe 2.0. In order for PCIe 3.0 to double the bandwidth of PCIe 2.0, a new encoding scheme is used where, out of every 130 bits sent, 128 bits are data, meaning that the amount of useful data sent approaches the amount of raw data. The raw data rate is 8Gbits per second, so the effective data rate is just under 8Gbits per second.
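That encoding arithmetic, as a small Python sketch (5 GT/s and 8 GT/s are the raw PCIe 2.0/3.0 transfer rates; the ratios are the 8b/10b and 128b/130b coding overheads):

[code]
# Effective per-lane PCIe data rate = raw transfer rate * line-coding efficiency.
def effective_gbps(raw_gt_s: float, payload_bits: int, coded_bits: int) -> float:
    return raw_gt_s * payload_bits / coded_bits

pcie2_lane = effective_gbps(5.0, 8, 10)      # PCIe 2.0: 5 GT/s, 8b/10b    -> 4.00 Gb/s
pcie3_lane = effective_gbps(8.0, 128, 130)   # PCIe 3.0: 8 GT/s, 128b/130b -> ~7.88 Gb/s

print(f"PCIe 2.0 lane: {pcie2_lane:.2f} Gb/s, PCIe 3.0 lane: {pcie3_lane:.2f} Gb/s")
[/code]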
 

bto

[citation][nom]jprahman[/nom]Wrong again. 500MB(bytes) per second * 8 bits per byte = 4000Mb(bits)per second = 4Gb(bits)per second. So 500MegaBytes(MB) per second is the same thing as 4GigaBits(Gb) per second. C'Mon guys get your units right.PCIe 2.0 runs at 5GHz per lane, meaning that data gets transmitted at a rate of 5Gbits per second (one bit per clock cycle at 5GHz). However, PCIe 2.0 uses a 8b/10b encoding scheme where out of 10 bits transmitted only 8 bits are actual data. So applying that 8/10 ratio to the 5Gbits per second raw rate you get a final useful data rate of 4Gbits per second. PCIe 3.0 runs at 8GHz, which is less than twice the 5GHz of PCIe 2.0. In order for PCIe 3.0 to double the bandwidth of PCIe 2.0 a new encoding scheme is used where out of every 130 bits sent 128 bits are data, meaning that the amount of useful data sent approaches the amount of raw data. The raw data rate is 8GigaBits per second so the effective data rate is just under 8GBits per second.[/citation]

That was a Wikipedia quote; why don't you go fix it ;)

Quote:
PCI Express 2.0
PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007.[10] The PCIe 2.0 standard doubles the per-lane throughput from the PCIe 1.0 standard's 250 MB/s to 500 MB/s. This means a 32-lane PCI connector (x32) can support throughput up to 16 GB/s aggregate. The PCIe 2.0 standard uses a base clock speed of 5.0 GHz, while the first version operates at 2.5 GHz.
PCIe 2.0 motherboard slots are fully backward compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall, graphic cards or motherboards designed for v 2.0 will be able to work with the other being v 1.1 or v 1.0.
The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its software architecture.[11]
In June 2007 Intel released the specification of the Intel P35 chipset which supports only PCIe 1.1, not PCIe 2.0.[12] Some people may be confused by the P35 block diagram which states the Intel P35 has a PCIe x16 graphics link (8 GB/s) and 6 PCIe x1 links (500 MB/s each).[13] For simple verification one can view the P965 block diagram which shows the same number of lanes and bandwidth but was released before PCIe 2.0 was finalized.[original research?] Intel's first PCIe 2.0 capable chipset was the X38 and boards began to ship from various vendors (Abit, Asus, Gigabyte) as of October 21, 2007.[14] AMD started supporting PCIe 2.0 with its AMD 700 chipset series and nVidia started with the MCP72.[15] The specification of the Intel P45 chipset includes PCIe 2.0.

Source:
http://en.wikipedia.org/wiki/PCI_Express
 

bto




Wrong again?

There is a website that IS the standard, and it says:

"PCI Express Base 2.0 specification doubles the interconnect bit rate from 2.5 GT/s to 5 GT/s in a seamless and compatible manner. The performance boost to 5 GT/s is by far the most important feature of the PCI Express 2.0 specifications. It effectively increases the aggregate bandwidth of a 16-lane link to approximately 16 GB/s. The higher bandwidth will allow product designers to implement narrower interconnect links to achieve high performance while reducing cost."

Source:
http://www.pcisig.com/specifications/pciexpress/base2/#CM2

Sorry, YOU are wrong.
;)

Not that it matters anymore; it's just annoying to be corrected by a newb.

So with PCI Express 2.0 we officially have an aggregate transfer rate of 16 gigabytes per second.
 

jprahman

What do you mean? Everything I said matches perfectly with what the standard says. I think you're not doing the conversions and math correctly. A single PCIe 2.0 lane runs at 5GHz/5GT/s. Because of the encoding, only 4Gbits per second of effective data is transmitted. Multiply 4 gigabits per second by 16 lanes and you get 64Gbits per second; divide that by 8 bits per byte and you get 8GBps. Please state the exact claim I made that doesn't match the PCIe spec, because you probably can't.

Bavman was correct in saying that the data rate of a PCIe 2.0 lane is 4Gbits per second, and you were incorrect in saying he was wrong, because 500MBytes per second and 4Gbits per second are the same thing.

It's no big deal; it's just annoying to be corrected by a newb who can't read a post fully and do the math before accusing someone of being wrong.
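For what it's worth, the two figures being argued over don't actually conflict: the PCI-SIG's "16 GB/s aggregate" counts both directions of an x16 link, while the 8 GB/s figure is one direction. A quick sketch, assuming the 4 Gb/s effective per-lane rate:

[code]
# x16 PCIe 2.0: ~4 Gb/s per lane per direction after 8b/10b coding.
lane_gbps = 4.0
lanes = 16

per_direction_gbps = lane_gbps * lanes            # 64 Gb/s one way
per_direction_GBps = per_direction_gbps / 8       # 8 GB/s one way
aggregate_GBps = per_direction_GBps * 2           # 16 GB/s counting both directions

print(f"{per_direction_GBps:.0f} GB/s per direction, {aggregate_GBps:.0f} GB/s aggregate")
[/code]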
 

Camikazi

[citation][nom]Snipergod87[/nom]You can force the USB cable in the wrong way and damage the desktop side of the connector making that USB port unfunctional.[/citation]
You know how they say, when you feel a barrier in your ear while using cotton swabs, that you should stop pushing? Well, USB kind of works the same way: if you need to put a lot of force into getting the plug in, then STOP and make sure it's going in the right way. Good god, it's not rocket science, people. It takes, what, 1 maybe 2 seconds to flip the plug over and try the other direction to see if it works?
 

kinggraves

[citation][nom]bto[/nom]You don't need to worry so much when shielding an optical connection which is probably what they are talking about when it comes to 20Gbps and no latency, which makes your initial point void. I also remember that we never maxed out the AGP 8X interface until long after pci-expresswas solidfied by new cards adapted to old slots to res some gaming machines.[/citation]

I don't think you got my "initial point" at all. Optical connections have been around for a long time, and are MUCH less vulnerable to common interference such as EMI, but that isn't to say they'll get your data from A to B with ZERO interference. The article says LOW latency, not ZERO latency. My point is that it's better to have the graphics on board rather than on a different device connected by any cable, even an optical cable. Gamers cannot tolerate any latency between the GPU and CPU.

I doubt the maximum bandwidth of PCIe 3.0 will be necessary, but I also don't think that TBolt can replace having an onboard GPU. It's meant to extend the PCI bus out of the machine, which COULD eliminate all PCI/PCIe x1 slots onboard, making a much more compact mobo, but it isn't intended as a replacement for the PCIe bus, which will still be necessary to communicate between the TBolt interface and the processor.

Offboard sound cards, TV tuners, network and even SSD interfaces? Sure.
Offboard graphics competing with onboard graphics? Probably not. That's where GPUs on the CPU chip come into play.
 

kewlx

[citation][nom]bto[/nom]Now we'll start seeing external GTX 570's sitting on a desk next to your laptop port replicator wewt! only take your video card when you need it, imagine a 14" laptop that drops into a port replicator with a nice upper range pci-e vid card next to it. connected to your 42" LCD on your desk. Kewl,)[/citation] I resent that lol (look at my name)
 
[citation][nom]mianmian[/nom]Note that the bandwidth of thunderbolt is only 10GB/s (equals 4x pcie)It is not enought for memory, graphic card(need 8x, i think).I hope it can uniformed the exterenal I/O port (usb, display, eSATA, eithernet...). So we do not need so many type of interface. CopperPeak (well thunderbolt) all the way.[/citation]

This is the first generation, of course, and Intel plans to go up to 100GB/s somewhere down the line. But 10GB/s is plenty for a PCIe 2.0 GPU. 32 lanes of PCIe 2.0 can push 16GB/s. That means 16 lanes of PCIe 2.0 can push 8GB/s. Well within Light Peak's capabilities, and most single GPUs can't even saturate an x8 PCIe 2.0 link.

I find this interesting because it basically smacks USB 3.0 and eSATA 6.0 in the face. USB 3.0 isn't even native to chipsets yet and pushes only 4.8GB/s and eSATA 6GB/s.

We won't see PCIe 3.0 until at least LGA 2011 (X68) chipsets from Intel, and those will be way too powerful for current GPUs since PCIe 3.0 will effectively double the bandwidth over PCIe 2.0.
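A quick units sanity check on the comparison above, sketched in Python (the article quotes Thunderbolt's channels in Gb/s, i.e. gigabits, so the GB/s figures should be read with that in mind; the PCIe numbers assume the usual ~4 Gb/s effective per lane):

[code]
# Thunderbolt gen 1 (per the article): two bi-directional channels at 10 Gb/s each.
tb_channel_gbps = 10
tb_channels = 2
tb_GBps = tb_channel_gbps * tb_channels / 8     # 20 Gb/s -> 2.5 GB/s per direction

# PCIe 2.0 for comparison, at ~4 Gb/s effective per lane.
pcie2_x8_GBps = 4 * 8 / 8                       # 4 GB/s
pcie2_x16_GBps = 4 * 16 / 8                     # 8 GB/s

print(f"Thunderbolt: {tb_GBps} GB/s vs PCIe 2.0 x8: {pcie2_x8_GBps} GB/s, x16: {pcie2_x16_GBps} GB/s")
[/code]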
 
Guest
Awful idea, awful execution, awful standard, awful name (I thought "bolt" referred to the visible "light" in "lightning"; I didn't realize "thunder" had a "bolt")...

This is Intel trying to monopolize another aspect of computing. Don't buy anything with a Thunderbolt connector or port... and do realize that future generations will have the odd little requirement that you can't bend the cable too far or it won't work, which is why nobody has used fibre optics this way yet. You'll yearn for the reliability of USB, and you won't miss the bandwidth.

Truly, the Itanium/Larrabee/etc... of external connectors.
 