"provide transfer data rates up to 500 Mbit/sec. USB 3.0 is actually designed to handle transfers of up to 5 Gbit/sec"
I'm tired of this kind of mislabeling. Mbit means bits, not bytes, so it's not actually that amazingly fast. They need to make MByte/sec the standard (for now); some people will read this, see 500 Mb... and assume the wrong thing... BUT USB 3.0 does look badass, and it's about time for it.
[citation][nom]kaiser_25[/nom] They need to make Mbytes/sec the standard (for now) some people will read this and see 500 mb...and assume the wrong thing...BUT usb 3.0 does look badass and it is about time for it.[/citation]
Agreed. ISPs do the same thing. It's like, who even uses those standards? My 6 Mbit connection is just 600 KB/sec, so just call it that!!
[citation][nom]Shadow703793[/nom]900mA is more than enough to provide power for Flash,charging,etc. At any rate Toms, do you have info on Hot Chips Conference that's taking place now?[/citation]
edit: took place. It's over now.
[citation][nom]jeraldjunkmail[/nom]only 900mAmp? Woulda been nice if they could put some SERIOUS power on that line? 5 amp line anyone?[/citation]
Yeah, um, 5 amps would fry the hell out of any IC, much less 45nm tech, lol. There's a reason stuff is going 'green': the small structures on the wafers can't stand high power; it'll burn them up.
Device bandwidth and throughput have been rated in bits per second for years, so this is nothing new.
Anyhow, I'm looking forward to this technology: both USB-based RAID solutions and the greater amperage for powering devices will be welcome. Simply being able to reliably plug in a portable USB HD without needing a special two-plug USB cable will be nice.
OK, so let me see: this big jump in technology (this chip) gives me a whopping 20 Mbit per second more than USB 2.0? Not only that, but now you want me to run a RAID array on it? Don't make a chip that can't handle the spec. USB 3.0 is 5 Gbit per second. If you can't make a chip that's better, throw it away and start over.
[citation][nom]Hanin33[/nom]5amps... are you serious? wot do you intend on powering? a blender?[/citation]
At 5 volts DC, 5 amps is a considerable amount of current. Hardly enough to power a blender, but considerable still. Depending on the speed, most blender AC motors pull between 2 and 5 amps on a 110 volt line.
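To put that in perspective, here's the arithmetic as a quick Python sketch (the blender figures are just the 2-5 A at 110 V numbers from above; the function name is mine):

```python
def watts(volts, amps):
    """Power in watts: P = V * I."""
    return volts * amps

# A hypothetical 5 A line on a 5 V DC USB bus:
usb_5a = watts(5, 5)           # 25 W
# A blender motor pulling 2-5 A from a 110 V AC outlet:
blender_low = watts(110, 2)    # 220 W
blender_high = watts(110, 5)   # 550 W

print(f"5 A USB line: {usb_5a} W")
print(f"Blender: {blender_low}-{blender_high} W")
```

So even a beefed-up 5 A USB port delivers roughly a tenth of what the weakest blender pulls: considerable for a port, trivial for an appliance.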
I must admit, though, that I am happy to see such a large increase in available current. I'm tired of getting messages every time I connect something to my keyboard telling me the maximum controller power has been exceeded.
[citation][nom]Imperiex[/nom]"agreed. isp's do the same thing. its like who the eff uses those standards. my 6mbit connection is just 600KB/sec so just call it that!!" 600 KB/sec = 4.8 Mbit/sec, so no, your 6 Mbit connection isn't 600 KB/sec. 1 Byte = 8 bits.[/citation]
He's saying that with his 6 Mbit connection he can only achieve rates of about 600 KB/s, which is about standard for DSL. When I had 1.5 Mbit DSL I could get around 150 KB/s, my 3 Mbit connection got me about 300 KB/s, and my 6 Mbit got me about 600 KB/s... Comcast is the only provider that has actually given me full bandwidth. My newsgroup transfers always pegged at a solid 8 Mbit, or 1 MByte/sec, except for the first minute or so, where Comcast provides up to 24 Mbit/s with their TurboBoost.
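For anyone following the bits-vs-bytes math in this subthread, here's a sketch of the conversion (line speeds are the ones quoted above; the function name is mine, and real downloads land below the theoretical figure because of protocol overhead):

```python
def mbit_to_kb_per_sec(mbit):
    """Convert an advertised Mbit/s line rate to theoretical KB/s (1 byte = 8 bits)."""
    return mbit * 1000 / 8

for line in (1.5, 3, 6):
    print(f"{line} Mbit/s ~= {mbit_to_kb_per_sec(line):.0f} KB/s on paper")
```

A 6 Mbit/s line works out to 750 KB/s on paper, so seeing ~600 KB/s in practice is the advertised rate minus overhead, not a mislabeled unit.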
Call me crazy, but I don't understand why they don't overkill these standards. Why isn't the new USB spec something like 20 Gbit/s? Is that technologically out of reach? Why aren't they doing a little more to future-proof this stuff? With SSDs on the rise, that bandwidth is going to feel strained in a relatively short time.
Will USB 3.0 still rely on the CPU as heavily as the previous versions? I never considered USB a good solution for real-time storage like video editing or games, since CPU utilization climbs as the data transfer rate increases. PATA, SATA, SCSI, and even NICs offload work from the CPU, where USB currently does not.
That's one of the main reasons I recommend that cable modem users not use the USB hookup but use the NIC whenever possible.
[citation][nom]fooldog01[/nom]Call me crazy but I dont understand why they dont overkill these standards. Why isnt the new USB spec like 20 Gbit/s? Is that technologically out of reach? Why aren't they doing a little more to future proof this stuff? With SSD's on the rise, that bandwidth is going to feel strained in a relatively short time.[/citation]
Geez, don't neg-vote the guy for saying something like this; he may very well just not understand.
fooldog, these speeds are not chosen arbitrarily; they're based on the current limitations of the technology and some attempt at prognosticating where the technology will likely be when the standard actually becomes a standard.
Without getting into technical detail, consider the evolution of data transfer rates for hard drives. Forgetting all the MFM and RLL stuff, jump right to IDE. Initially, data transfer was controlled via PIO modes. The limitation there was that the CPU had to be involved in every transfer, and when the amount of data was particularly large, it imposed a tremendous load on the CPU. Then came DMA modes, which allowed data to be written directly to memory without putting a huge burden on the CPU (those were ATA standards 1 and 2). Then came more DMA modes, then Ultra DMA modes. We went from ATA-1 at 3.3 MB/s to ATA-7 at 133 MB/s (theoretical). Now, why not just create ATA-7 well in advance of needing that much? Because it was technologically impossible at the time. The technology simply did not exist.
It's just like CPUs: we hit 500 MHz back in 1998, 1 GHz in early 2000, 2 GHz in 2001, then it wasn't until 2005 that we hit 4 GHz; now, a few months shy of 2010, we're still hovering around 4 GHz... WTF is up with that? We should be pushing 10 GHz by now. Well, we hit the wall. We're up against the practical limits of the technology, and no one has come up with the next trick yet. So instead of going faster, they started putting more cores (essentially multiple processors) on the same chip.
I'm sorry, this is longer than I wanted it to be. The bottom line is that creating a standard is no easy task; there are a multitude of factors that have to be taken into account. Anyway, I hope this helps.
Also, putting 5 amps down the line isn't going to fry anything. A device only draws as much current as it needs. You fry things by giving them too much voltage, not by making more amps available, because the device simply won't use them. The only thing I'd worry about is 5 amps going down the tiny USB wires when a device really does need it. You could get almost the same amount of power by going to 12 V and needing only 2 amps, but then the little devices would need something like a voltage regulator to step it back down or they'd fry.
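The tradeoff described above, sketched numerically (the function names are mine; the voltages and currents are the ones discussed in this thread): raising the bus voltage delivers similar power at much lower current, and thus thinner wires, at the cost of a step-down stage in small devices.

```python
def power_w(volts, amps):
    """Power delivered: P = V * I."""
    return volts * amps

def current_for_power(target_watts, volts):
    """Current needed to deliver a given power at a given voltage: I = P / V."""
    return target_watts / volts

print(power_w(5, 5))              # 25 W at 5 V requires the full 5 A
print(power_w(12, 2))             # 24 W at 12 V requires only 2 A
print(current_for_power(25, 12))  # amps needed for the same 25 W at 12 V
```

So 12 V at 2 A is within a watt of 5 V at 5 A, with less than half the current through the cable; the catch is exactly the one noted above, that 5 V devices would then need regulation.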
Seriously, why the negatives? I just don't see why people are so happy about a chip that provides more power but not any more throughput. Maybe I read something wrong, but 500 Mbit isn't a big jump over 480 Mbit. Until they reach higher speeds, I don't think they should put it on boards. All you're going to have is people buying new motherboards and add-in cards thinking they can get the full spec. I really feel it's false advertising; it isn't anywhere close to the spec.