How Fast Is Thunderbolt, Really? Tom's Tests With 240 TB


twelve25

Distinguished
This is really impressive! A single 8Gb/s Fibre Channel or 10GbE storage port (typical enterprise SAN) tops out in the neighborhood of 700-800MB/s. Thunderbolt ports on a laptop are hitting enterprise storage speeds. 8Gb Fibre Channel cards run roughly $1500 for a dual port.
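Roughly where those figures come from, if anyone is curious; the line rates and encodings are the published ones, but the 10GbE protocol-efficiency factor is just my guess:

[code]
# Back-of-the-envelope: line rate -> usable payload

def fc8g_payload_mb_s():
    # 8G Fibre Channel: 8.5 GBaud line rate with 8b/10b encoding,
    # so 8 payload bits per 10 line bits
    return 8.5e9 * (8 / 10) / 8 / 1e6   # bits/s -> MB/s

def tengbe_payload_mb_s(protocol_efficiency=0.9):
    # 10GbE: 10 Gb/s after 64b/66b line coding; knock off ~10% for
    # Ethernet/IP/iSCSI framing (that efficiency figure is a guess)
    return 10e9 * protocol_efficiency / 8 / 1e6

print(f"8G FC : ~{fc8g_payload_mb_s():.0f} MB/s")   # ~850 MB/s
print(f"10GbE : ~{tengbe_payload_mb_s():.0f} MB/s") # ~1125 MB/s
[/code]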

 

Essence25

Honorable
The TB cable has ICs in each end, 12 in total across both ends. Think of Thunderbolt as a PCIe slot on a cable with no CPU latency. USB incurs CPU usage while accessing devices.
 

Jesper Madsen

Honorable
esrever -- ISPs use the same reasoning to keep speeds as low as possible and to prevent fiber from breaking through to the average consumer.
But you have it backwards. Innovations that use these speeds are built for the speed available. You don't make a super-HD service for a population of low-speed broadband users; you make it once consumers are able to use it. This is why new, faster speeds and technologies need to be developed as fast as possible; otherwise we just end up in a stalemate.
Don't build for what we have now; build for what we will have tomorrow.
As an example, super-fast internet speeds would let us run every calculation we need on the computer at home but use it everywhere with minimal hardware. That would require better network protocols to reduce lag, but it will be possible with time.
 

mapesdhs

Distinguished
[citation][nom]twelve25[/nom]... Thunderbolt ports on a laptop are hitting enterprise storage speeds. ...[/citation]

Hardly 'Enterprise' speeds. High-end systems were pushing 20X these speeds more than a decade ago, e.g. the Onyx2 Group Station for Defense Imaging (40GB/sec load speed), and that was the older tech (the current NUMAlink 6 interconnect is 6.7GB/sec per connection). See:

http://www.sgi.com/products/servers/uv/
http://www.sgi.com/pdfs/4377.pdf
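A one-liner with the two figures quoted in this thread, just to check the ratio:

[code]
# 40 GB/s (Onyx2 load speed) vs ~1400 MB/s (the Thunderbolt result)
print(f"{40_000 / 1_400:.0f}x")   # ~29x, i.e. "more than 20X"
[/code]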

That's Enterprise-level, not a piddly 1400MB/sec off a laptop. :D

Ian.

 
[citation][nom]esrever[/nom]do consumers really need these speeds right now?[/citation]
I am assuming you are kidding, but we absolutely need these types of connection speeds.

1) Power users NEED as much speed as possible. Everyone complains that Intel's new CPUs do not improve much on performance, and AMD is sadly the usual scapegoat on that front. But the real issue is that other components are playing catch-up right now, and Intel is more concerned with fighting off the horde (a beautiful horde, by the way) of ARM processors creeping into Intel's desktop and server territory.

Anywho, the point is that in modern systems your CPU speed is entirely second in importance to your HDD and GPU capabilities, and in more and more systems those duties are being taken over by external devices. As these more peripheral components catch up in performance, and more of them are used externally, we need connectivity that does not continue to choke them.

Put another way: for what most people do (namely media consumption, office work, and internet), a late-gen P4HT has more than enough processing power. Combine it with a cheap modern GPU and an SSD and most people would never know the difference between it and a brand-new system. Granted, the newer systems do the same work on 10% of the power and with more integrated parts, but the processor itself is 'adequate' for what most people do. Now, if that ancient tech can still keep up with most user demands 6-8 years down the road, then what is the lifespan of a modern Pentium G or i3 going to be? Much less bigger processors like the i5 and i7. Yes, power users will always need an upgrade, but for the masses, whose most processing-intensive task is watching HD video, a current-gen i3 is going to have a 10+ year life of usefulness (unless something drastically changes). When you have lifespans like that, combined with changing interfaces for internal components, it becomes very important to have an easy, standardized, high-speed external interface so these devices can use future hardware with minimal difficulty.

2) People prefer external storage for a variety of reasons. Some like the 'security' of being able to keep their data with them, others like being able to work on projects in multiple locations, others are on laptops that simply do not have space for extra drives, and still others (like myself) want the option of moving mechanical storage into another room for a truly silent working environment. In a world where gigabit Ethernet is the limit of what is available for consumer use, we need much faster external storage options (the sketch after this list puts numbers on that).

3) I know it is getting old to hear about, but the fact of the matter is that while traditional desktop computers are not dead, their death is coming. Personally I think the desktop form factor will still be around for quite a while yet, but I will be truly surprised if my son ever has a traditional ATX desktop when he is ready for his own personal computer some 5 years from now. No doubt he will have tons of tech in his life, but I think it is going to be in the form of more specialized components. He will have a box for storage, a box for graphics power (probably integrated into the display), a box (or array of boxes) for extra processing power, and it will all be controlled by a central 'dockable' (wired or wireless) phone or tablet device which stores his personal documents and all of his software (or at least the licenses for his software). The point is that as we slowly move away from PC architecture we will need high-speed wired and wireless connectivity options to tie what used to be internal devices into a mesh of external devices that may need to serve multiple users simultaneously.

4) 1GB/s is not as fast as it used to be. Sure, it may take a large array of HDDs to reach that kind of saturation, but it only takes two SSDs to hit that kind of throughput. On my own rig I have a RAID 0 of two SSDs and I get a peak throughput of 1GB/s and an average throughput of nearly 600MB/s, and these are not high-end SSDs, just mid-grade Agility 3 drives. The next gen of drives coming out late this year will bring the read/write speed of incompressible data much closer to the speed of compressible data, which means drives will deliver a true 500+MB/s per drive no matter what type of data you throw at them. So a little box with two SSDs in it will be able to push 1GB/s to whatever device you hook it up to, and once again connectivity will be the bottleneck of the system for the foreseeable future (the sketch after this list runs these numbers).

5) Lastly, we need something like this for an entirely different reason: Light Peak was designed to be 'one cable to rule them all'. You were supposed to use a single daisy chain of fiber-optic cable to connect your PC to external storage, your display, and Ethernet, with adapters for things like USB or FireWire devices, because Light Peak was meant to be a protocol-agnostic connectivity standard where you could mix and match different types of devices on one string. Obviously Thunderbolt has fallen far short of the hopes Light Peak held, but it is still the first step toward that goal. You may not 'need' 1GB/s of throughput for your HDDs... but you do need it if you intend to run three displays plus an external storage array over a single cable. In fact, you need a lot more throughput by the time the tech catches on and we are using 4K displays and SSD arrays; the sketch below shows how quickly the display streams alone eat the budget.
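To put rough numbers on points 2, 4, and 5, here is an illustrative sketch; every figure in it (the SSD speed, the RAID 0 efficiency, the blanking factor for display timings) is an assumption for the sake of the arithmetic, not a measurement:

[code]
# Illustrative arithmetic for points 2, 4 and 5; all inputs are assumptions.

def mb_s(bits_per_second):
    return bits_per_second / 8 / 1e6

# Point 2: gigabit Ethernet vs. one SATA SSD
gbe = mb_s(1e9)            # ~125 MB/s theoretical ceiling
ssd = 500                  # ~500 MB/s for a decent SATA SSD (assumed)
print(f"GbE ceiling : {gbe:.0f} MB/s, vs one SSD at ~{ssd} MB/s")

# Point 4: two mid-grade SSDs striped in RAID 0 (sequential throughput
# roughly doubles, minus a little overhead -- the 0.95 is a guess)
raid0 = 2 * ssd * 0.95
print(f"2-SSD RAID 0: ~{raid0:.0f} MB/s, vs a 10 Gb/s Thunderbolt "
      f"channel at {mb_s(10e9):.0f} MB/s")

# Point 5: uncompressed display streams sharing the same cable
def display_gbps(w, h, hz, bpp=24, blanking=1.2):
    # blanking=1.2 approximates non-visible pixel time (assumed)
    return w * h * hz * bpp * blanking / 1e9

print(f"3x 2560x1440@60: ~{3 * display_gbps(2560, 1440, 60):.0f} Gb/s")
print(f"1x 4K@60       : ~{display_gbps(3840, 2160, 60):.0f} Gb/s")
[/code]

Even with generous assumptions, three 1440p streams alone already blow past a single 10Gb/s channel, which is exactly the point.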
 
[citation][nom]Reynod[/nom]SSD's ??When I win lotto ...[/citation]
SSDs are not that expensive, and as the sketch above shows, you only need two to hit 1GB/s of throughput. While you would need a pile of cash to get 240TB of SSDs, two 240GB SSDs are enough to completely saturate Thunderbolt, and that is well within reach of just about anyone who needs that kind of throughput.
 

twelve25

Distinguished


I'm talking about what normal datacenters use. The fastest thing you'll find in a typical datacenter is 40Gb InfiniBand, which approaches the real-world limit of a PCIe 3.0 bus.

The server you linked to is PC-based and has PCIe 3.0 slots, which have a theoretical maximum of 8GB/s (real world is likely 80% of that). So speeds like the ones you mentioned are only achievable in aggregate, which you could in theory also do with multiple Thunderbolt links.
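A quick sketch of where those ceilings come from, assuming a PCIe 3.0 x8 slot and QDR (40Gb) InfiniBand; the 80% real-world factor is the rule of thumb from above:

[code]
# Encoding overheads behind the quoted ceilings.

def pcie3_gb_s(lanes):
    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
    return 8e9 * (128 / 130) * lanes / 8 / 1e9

def ib_qdr_gb_s():
    # QDR InfiniBand: 40 Gb/s signalling with 8b/10b encoding
    return 40e9 * (8 / 10) / 8 / 1e9

print(f"PCIe 3.0 x8: {pcie3_gb_s(8):.1f} GB/s raw, "
      f"~{0.8 * pcie3_gb_s(8):.1f} GB/s real world")   # ~7.9 / ~6.3
print(f"40Gb IB QDR: {ib_qdr_gb_s():.1f} GB/s payload") # ~4.0
[/code]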



 

internetlad

Distinguished
On Mihalik’s recommendation, we tested using a MacBook Pro with Retina Display

Nice plug; I'm sure that Retina display improves the connection speed so much.
 

Jesper Madsen

Honorable
Some people find my imagination wild when I speculate about the tech we will have in the future, but that is because I consider almost anything possible if today's tech could be enhanced enough.
We would need incredibly fast connections for:
A full-body medical scan in very high resolution, uploaded to doctors, to remote health-risk-assessment computers, or maybe just to our own home computer for assessment. I do believe scanners good enough will be available to the average person at some point. But the scan would have to be 3D and at very, very high resolution, which would take up a lot of drive space and demand a lot from the connection (the sketch at the end of this post puts rough numbers on it). Perhaps you would keep an entire history of scans and at some point need to transfer all of them.
Virtual reality that is detailed enough. Not so far into the future, but then again, think far out. With virtual reality, I think new kinds of content follow: movies you are inside of, where you can turn around and examine different things in the scene, a little like a game but where you let the creator guide you more. Maybe movies and games will become one, though as I imagine it, games would keep more freedom and more choices. All games, even sandbox games, are very limited. What I imagine is a world where more and more is perfectly simulated, where much greater detail is needed when looking closely at something, where hair, grass, water, smoke, gravity, and anything else you can think of are simulated realistically. This would take not only a LOT of processing power but also truly insane connections to whatever interface we are using.

I realize this is only two things, but I wore myself out; there are plenty of other examples I could give. The short of it is: if anyone ever doubts that regular users will get to use these speeds, just imagine the wildest, craziest thing you can and it will probably be possible at some point. I would rather get there sooner than later.
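Just to put a (completely made-up) number on the body-scan idea: assume 0.5mm voxels over a 1.8m x 0.6m x 0.4m volume at 16 bits per voxel. The sketch below works out the size and upload times; every input is an assumption for illustration:

[code]
# Made-up sizing for a high-resolution 3D body scan; all inputs assumed.

voxel_mm = 0.5                   # 0.5 mm isotropic voxels
dims_mm = (1800, 600, 400)       # scan volume: height x width x depth
bytes_per_voxel = 2              # 16-bit intensity

voxels = 1
for d in dims_mm:
    voxels *= d / voxel_mm
scan_gb = voxels * bytes_per_voxel / 1e9
print(f"one scan: ~{scan_gb:.0f} GB")   # ~7 GB

for name, gbps in [("100 Mb/s", 0.1), ("1 Gb/s", 1.0), ("10 Gb/s", 10.0)]:
    print(f"upload over {name:>8}: ~{scan_gb * 8 / gbps:.0f} s")
[/code]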
 

f-14

Distinguished
[citation][nom]twelve25[/nom]I'm talking about things normal datacenters use. The fastest thing you'll find in a typical datacenter is 40Gb Infiniband, which approaches the real world limit of a PCI 3.0 bus. The server you linked to is PC-based, has PCIe 3.0 sockets which have a theoretical maximum of 8GB/s (real world is likely 80% of that). So speeds like you mentioned above are only capable in aggregate, which you could theoretically do with thunderbolt links.[/citation]

Dude, re-read what Ian said, SLOWLY:

[citation][nom]mapesdhs[/nom]....High-end systems were pushing 20X these speeds more than a decade ago.....[/citation]
 

f-14

Distinguished
[citation][nom]esrever[/nom]do consumers really need these speeds right now?[/citation]
We needed these speeds 50 years ago, when we first dreamed of spy planes, robots, and armored suits.
 