Intel Working on Thunderbolt for Tablets, Phones

The reason people aren't all going out and buying Thunderbolt everything is simply due to 3 factors:
1) Manufacturers aren't putting it on all their devices
2) There are so few peripherals that use it
3) The products that do implement it cost a hefty premium compared to those that use USB 3.0
Thunderbolt is amazing for high-speed external storage (SSDs and RAID arrays) but that is about it. USB 3.0 is fast enough for most users' needs and is reasonably priced, as well as being compatible with all of their USB 2.0 peripherals. The fact that Thunderbolt carries a DisplayPort signal is negated by the fact that very few displays offer DisplayPort inputs and those that do are far more expensive. Apple is the only company with a Thunderbolt display that offers additional functionality (3x USB 2.0, 1x FireWire 800, 1x GbE, webcam, speakers) and it costs $1000. Nobody else is even using this connector because it costs so much and isn't on any devices.
Intel needs to drop the price of using Thunderbolt and make it accessible to more manufacturers if they want it to catch on. Its implementation of external PCIe has many possibilities, but they're standing in their own way if they think people will needlessly buy it just because it has an Intel logo on it.
 
Agree with WithoutWeakness. I also want to say I am more interested in Thunderbolt than WiGig because wireless N is already pretty fast, and I am hoping we will be able to get an ultrabook and connect it via Thunderbolt to a separate box with discrete desktop graphics cards inside. Best of both worlds 😀
 

My newest LCD (Dell US2212) has DisplayPort and I got it for $160 (it was on sale for 40% off), but it only has one DP port, so there is no support for pass-through to daisy-chain displays.

It isn't the DisplayPort connectors that make Thunderbolt expensive. It is all the extra bits (hardware + licensing) that are required for pass-through and mixed DP+PCIe support for Thunderbolt that make those devices more expensive.

The notion of needing somewhat expensive active cables for Thunderbolt even for fairly short runs likely irks many people too.
 
Seems some of you forget that latency is one of the most important advantages TB has over USB3, making it the only viable option for connecting external GPUs and other PCIe peripherals. If you are into stuff like pro music production and video editing, USB3 is not an option with a future. We're on the verge of 4K video and TB is the only solution that'll deliver.
 

Latency on USB3 could be substantially reduced by eliminating protocol conversions/encapsulations between PCIe and USB3 but that would require either rewriting the standard or implementing an HCI that bypasses PCIe.

Latency is not a major concern for external GPUs since we are only talking microseconds, which the graphics drivers could be optimized to work around fairly easily - and assuming the graphics driver does not need to read anything back from the GPU, the rendering process would be largely latency-agnostic anyway.
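
A quick order-of-magnitude check on that, in Python (illustrative numbers only; the assumed link latency is a placeholder, not a measurement):

# Order-of-magnitude check for the latency argument above.
# The link latency figure is an assumption for illustration, not a measurement.
frame_budget_us = 1_000_000 / 60      # ~16,667 us per frame at 60 fps
link_latency_us = 20                  # assumed per-transfer latency over an external link

print(f"{link_latency_us} us is {link_latency_us / frame_budget_us:.2%} of a 60 fps frame")
# -> ~0.12%; it only starts to hurt if the driver makes many synchronous
#    round trips per frame, which batching and avoiding readbacks can prevent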

The biggest problem there would be bandwidth since USB3 has roughly half as much usable bandwidth as PCIe 3.0 x1 but you need x4 to get the most out of mid/high-end GPUs in graphics-intensive games/applications.

Reducing the underlying protocols' latency does you no good if the bandwidth to support whatever it is you are trying to do is not there in the first place. USB3 has less than 1/10th the bandwidth necessary to support high-end GPUs; nowhere near good enough to start worrying about latency.
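
For anyone who wants the back-of-the-envelope numbers behind that, here is a rough Python sketch using line rates and encoding overhead only (usable throughput in practice is lower and varies by controller):

# Rough line-rate comparison behind the bandwidth argument above.
# Raw signalling rate minus line-coding overhead only; real usable
# throughput is lower and controller-dependent.
def usable_MBps(gbps, encoded_bits, payload_bits):
    """Approximate usable MB/s after line coding."""
    return gbps * 1000 / 8 * payload_bits / encoded_bits

usb3      = usable_MBps(5, 10, 8)       # USB 3.0: 5 Gb/s, 8b/10b        -> ~500 MB/s
pcie3_x1  = usable_MBps(8, 130, 128)    # PCIe 3.0 x1: 8 GT/s, 128b/130b -> ~985 MB/s
pcie3_x4  = pcie3_x1 * 4                # ~3.9 GB/s
pcie3_x16 = pcie3_x1 * 16               # ~15.8 GB/s (full slot for a high-end GPU)

print(f"USB 3.0      ~{usb3:.0f} MB/s")
print(f"PCIe 3.0 x1  ~{pcie3_x1:.0f} MB/s (USB3 is ~{usb3 / pcie3_x1:.0%} of this)")
print(f"PCIe 3.0 x4  ~{pcie3_x4:.0f} MB/s")
print(f"PCIe 3.0 x16 ~{pcie3_x16:.0f} MB/s (USB3 is ~1/{pcie3_x16 / usb3:.0f} of this)")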
 
Hope Intel loses here... I love the 4K push, but would rather have it without the fees to Intel. Go WiGig. Another rev or two should enable speeds much faster than 7Gbps, killing the need for Intel to be involved and soaking us for more money.
 
Yeah, the other thing is that USB 3.0 is backwards compatible, which helps the upgrade rate. Plus phones with TB will still need a microUSB connector to charge, so who will want to buy a $40 (or whatever it is now; fibre will likely increase it) cable that you have to reach around the back of your computer to plug in, and that breaks if kinked too tightly?
Another thing is that daisy-chaining significantly reduces the usability of the interface - having to disconnect and reconnect your screen to plug in an external HDD, for example, or having a second wire coming out of your mouse to plug in something else.
Picking which devices support the new 20Gbit link will also be difficult, and you'll have to do serious chain optimizing to actually get it to work as well as possible.
I expect Thunderbolt to have a similar legacy to IEEE 1394 - I'll use it about two, maybe three times, then I'll get a new computer that doesn't have it until Apple's next lemon reaches mainstream existence.
 
I think it'll be great for a phone/tablet. One port to charge, connect to a display, access storage, daisy-chain all three to a gamepad and an external PCIe card...
 
Yeah, except that:
• It would be two ports or you can't daisy-chain.
• Thunderbolt is quite a thick connector.
• It requires quite a lot of power for the controller - probably more than the rest of the phone put together, and that's before you consider multiple ports.
• The newer fibre versions can't transmit power.
• It's not a hub/star config, it's a chain. That means you're limited to the speed of the slowest device between two points, and you can't have more than one thing that only has one port, so it's no display, no storage, or get a phone with two ports.
• microUSB is currently the charging standard, so you would have to have a third connector (which you would hopefully do, or those without Thunderbolt are relatively screwed), or provide an adapter.

IMO it's a nice idea, but lacking in the implementation. PCIe over USB, as happens with SATA Express, would have been a better idea. And allow hubs - chains are very irritating as far as network topology goes.
 

The "hub" part should not be a problem for Thunderbolt's data side since that's PCIe which already allows bridges with multiple ports and devices with multiple endpoints so collapsing the chain into a single bridge should be relatively easy.

The main problem is DP which does not allow any of that so a star-topology Thunderbolt would have to either ditch DP backward-compatibility or detect and emulate daisy-chained displays on the host side when they are on separate ports on the downstream side.
 
Main thing is power consumption - have you seen the size of the heatsinks over PLX chips on MBs? And don't they mirror the data, which isn't very useful in this scenario?

Maybe for what you're thinking you just split up the four(?) lanes - but then you can't run only one at full speed (while the other devices are very low bandwidth, e.g. idle/keyboard/mice), because you have to spread the lanes out. Plus you would have a max of four devices per port. I wonder - tricky question.

I'm sure USB could get a couple more mid-life speed boosts - only some of the speed increases are due to the extra two data pairs. Considering TB has four pairs (two each way) and USB 3.0 has three (one each way plus one slow legacy pair that is bi-directional half-duplex), you could probably add extra pairs and increase the speed some more. Might need to go fibre like TB did, but I hope not. End-user fibre could be a bad idea/mess (bend radius, anyone?).
 

What I was thinking about is standard PCIe bridges and those do not simply broadcast traffic to all ports; they act more like Ethernet switches. Ethernet switches have a MAC table that tells them which port a given packet should be forwarded to and PCI/PCIe bridges have address range tables that tell them which port transactions need to be forwarded to.
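
To make the analogy concrete, here is a toy Python sketch of address-range forwarding - purely illustrative, not how real bridge silicon works, and the address windows are made up:

# Toy illustration of the forwarding analogy above - NOT a model of real
# bridge silicon. An Ethernet switch looks up the destination MAC in a
# learned table; a PCIe bridge compares the transaction address against
# the base/limit windows assigned to each downstream port.

# Hypothetical example windows (base, limit) -> downstream port
address_windows = {
    (0xA000_0000, 0xA0FF_FFFF): "port 0",   # e.g. a GPU's BAR region
    (0xB000_0000, 0xB00F_FFFF): "port 1",   # e.g. a storage controller
    (0xB010_0000, 0xB01F_FFFF): "port 2",   # e.g. a network adapter
}

def route(address):
    """Return the downstream port whose window contains the address."""
    for (base, limit), port in address_windows.items():
        if base <= address <= limit:
            return port
    return "upstream"   # not claimed by any downstream port

print(route(0xA004_2000))   # -> port 0
print(route(0xB013_0000))   # -> port 2
print(route(0x1000_0000))   # -> upstream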

As for bend radius, there are some bend-optimized fibers out there, like Corning's ClearCurve, that have less than 0.1dB loss on 5mm (0.2") bends and are designed to get tacked into wall corners with regular hardware-store cable staplers in FTTH installs. The cable itself is designed to be thick and stiff enough to make the minimum bend radius self-enforcing and protect the fiber from getting crushed under most circumstances.

I would be more worried about connector contamination and abuse than bend radius. Consumer fiber needs connectors that are more tolerant of misalignment and minor contamination. I imagine SMF connectors would not last long in applications that see frequent connects/disconnects.

For the very short distances around PCs, POF (Plastic Optical Fiber) might also be an option - much cheaper to work with (you only need scissors to cut the cable to whatever length you need) and it works with LEDs instead of lasers, which makes it cheaper still. It only works over very short distances (a few meters at 10Gbps) due to horrible group delay, and longer distances at lower speeds are also limited by horrible attenuation.

As for people who want to stick to copper cables because fiber cannot transmit power, there is nothing preventing an optical cable standard from also having copper wires for power. A hypothetical optical USB4 with 12V@2A would be nice to power external HDDs, scanners, LED/OLED displays and other moderate-power devices that often come with power bricks.
 
There is already a USB Power Delivery spec that does 100W; I think it's at around 20V. They intend it to be used for charging laptops etc.

Take a look at TOSLINK, and imagine that getting a few thousand plug/unplug cycles. It'll be a pain. You end up with very thick cables too.

PCIe bridge chips still use a ridiculous amount of power though, don't they? I know on most MBs you can spot them by the presence of a heatsink not much smaller than the VRM ones. That's on top of the actual Thunderbolt controllers (though we'll likely see them integrated and optimized).

While USB has its flaws, 3.0 goes some way toward reducing them. The full-duplex traffic and higher bitrate are the main points, but as we see more devices supporting UASP etc. it will get better. I expect we'll see a USB Attached PCIe Protocol soon too, and while 500MB/s isn't much by PCIe standards (it's a single lane of gen2), it will get faster.
 

The 100W spec is in the works, but it remains to be seen whether they will actually manage to make it work with existing cables and connectors as per their original intent. Most USB cables have #26 gauge power wires, which should NOT be used for more than 2A: #26 wire has ~0.133 ohm/m resistance (x2 for power+gnd), so a 2m USB cable has ~0.53 ohm, and at 5A that would be ~13W of wiring power loss on the 100W spec - enough to make the cable uncomfortably hot and likely unacceptable for safety reasons. Also, since the 10W (2A@5V) spec is power/charge-only due to interference, I'm guessing the 100W spec would be power/charge-only as well, which would make it useless for powering actual medium/high-power PC/laptop peripherals.
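
For what it's worth, the wiring-loss figures are easy to sanity-check with the same assumptions (a short Python check, nothing more):

# Sanity check on the cable-heating numbers above, using the same
# assumptions as the post: #26 AWG copper at ~0.133 ohm/m, a 2 m cable,
# and the full current flowing through both the power and ground wires.
OHM_PER_M_AWG26 = 0.133   # approximate resistance of #26 AWG copper
cable_length_m  = 2.0
loop_resistance = OHM_PER_M_AWG26 * cable_length_m * 2   # power + ground -> ~0.53 ohm

for amps in (2.0, 5.0):
    loss_w = amps ** 2 * loop_resistance        # I^2 * R dissipated in the cable
    print(f"{amps:.0f} A: ~{loss_w:.1f} W lost in the cable "
          f"(loop resistance ~{loop_resistance:.2f} ohm)")
# 2 A -> ~2.1 W (tolerable), 5 A -> ~13.3 W (uncomfortably hot for a thin cable)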

The nice thing about POF is that it should still work even if the alignment is sub-optimal and the connectors have gotten a little roughed up... you can cut the connector off, jam the raw plastic fiber into the port and it will probably still work with some paper wrapped around it to help center it a bit and keep it from falling out. With SMF or even MMF, you are out of luck if alignment is off by more than a few microns or degrees. Size-wise, TOSLINK's thickness isn't so bad and adding power would not make the cable substantially larger.

Normal PCIe bridges do not use "ridiculous amounts of power"; what does use ridiculous amounts of power is ridiculously LARGE PCIe bridges like the 48-80 lane PLX chips on motherboards with four full-speed x16 PCIe slots and extra IO chips. The main power hogs in those chips are the analog bits for each lane and the PLLs for each lane group. USB3 is also somewhat of a power hog on the analog side of things.

As for PCIe over USB3, I think they should simply take a page out of SAS's book: use the same connector but auto-negotiate the protocol. If the host USB controller supports external PCIe and the USB device does too, the link should simply switch to hot-plug PCIe - avoid layering protocols whenever possible. If you put PCIe over USB, the stack ends up being: device driver -> PCIe (virtual) -> USB (encapsulation) -> PCIe (to the USB host controller) -> USB (actual wire) -> PCIe (target hardware) -> USB (target driver) -> PCIe (extracted emulated frame) -> device. With native support it would be straight PCIe from CPU to device, so you save four software conversions and two hardware ones... which saves processing power, electrical power, memory and latency.
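
A minimal Python sketch of that "same connector, negotiate the protocol" idea, loosely modelled on how SAS hosts fall back to SATA - the function and its steps are hypothetical illustration, not from any real spec:

# A minimal sketch of the "same connector, negotiate the protocol" idea
# described above. Names and negotiation steps are hypothetical.
def bring_up_link(host_supports_pcie: bool, device_supports_pcie: bool) -> str:
    """Pick the wire protocol for a newly attached device."""
    if host_supports_pcie and device_supports_pcie:
        # Both ends advertise external-PCIe capability: run native hot-plug
        # PCIe on the wire and skip the USB encapsulation layers entirely.
        return "native PCIe"
    # Otherwise fall back to plain USB3 so legacy devices keep working.
    return "USB3"

print(bring_up_link(True, True))    # -> native PCIe
print(bring_up_link(True, False))   # -> USB3 (legacy peripheral)
print(bring_up_link(False, True))   # -> USB3 (legacy host)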

Gotta love when engineers choose unnecessary complications.
 