It Begins: Carbon Nanotube Transistors Outperform Silicon In Research Lab

  • Thread starter: Guest
Status
Not open for further replies.
"but if you don't have a Ph.D. in materials science, it might be a bit much to decipher."
Hey, Gordon Freeman only has a Ph.D. in theoretical physics and he worked in the Anomalous Materials department without any issue, so I don't think a person needs a Ph.D. in materials science specifically.
 
BTW, there's no such thing as a wonder material; these things come from decades of hard work. In the 1980s, chipmakers reportedly used only a handful of elements from the periodic table for computer chips. In the 1990s, two or three more. Now they need something like 60% of the known elements to make CPUs.

Real limits are coming.

Also, there's a bigger issue than computer chips that let you watch YouTube videos and post more Facebook updates: health. Nanomaterials can cross cellular barriers and cause cancer and other serious health problems, carbon nanotubes and graphene in particular.

Of course, scientists play with the world and deal with the aftereffects later.
 
Really interesting article. I'm always excited to see new technologies being developed to increase computing power.
 
This is fascinating, but you still can't go below one atom, so while this may end up prolonging the ever-approaching death of chip progress, it won't keep it going that long. As far as carbon is concerned, diamonds are just carbon, and those don't seem to harm people's health.
 


I stand corrected 😛
 
Dumb question, but please don't laugh too hard: does this work in the same manner as conventional CPUs? As in, on a chipset that was designed for a silicon chip, and assuming they were dimensionally identical, could a CNT CPU be a 1:1 replacement?
 


Faster chips ain't dead, they are just taking a rest.

Where it gets interesting is when multiple cores are made to work in parallel to deliver higher apparent single-thread performance. Consider 3D NAND: stack a bunch of not-cutting-edge lithography chips and get a super-high-capacity NAND chip. Now consider stacking a bunch of cores. The problem is (and will be) getting the software to see the pool of processing as one fast thread rather than thousands of threads. E.g., why can't people use GPUs with thousands of cores to swiftly solve normal coding problems (other than cracking passwords)?

 

Many workloads are very difficult to parallelize, and others are impossible. Just look at how few applications/users benefit from more than 2-4 cores. There isn't some magic way around the issue that will allow many cores to act as a single ultra fast core for single-threaded loads. The reason people can't use GPUs for running general software is that GPUs are only effective when dealing with embarrassingly parallel workloads (e.g. graphics rendering).
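To make that concrete, here's a rough toy sketch in Python (the function names are just made up for illustration). The first function is a dependency chain, so each step needs the previous result and extra cores don't help at all; the second is an independent per-element map, which is the kind of embarrassingly parallel work GPUs and many-core chips are actually good at.

from concurrent.futures import ProcessPoolExecutor

def serial_example(n):
    # Inherently serial: each iteration depends on the previous result,
    # so the work cannot be split across cores.
    x = 1.0
    for _ in range(n):
        x = 0.5 * x + 1.0   # the next value needs the current one
    return x

def square(v):
    return v * v

def parallel_example(values):
    # Embarrassingly parallel: every element is independent,
    # so the map can be spread over as many workers as you have.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(square, values))

if __name__ == "__main__":
    print(serial_example(1_000_000))
    print(parallel_example(range(8)))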
 
There's a big difference between skin creams containing nanoparticles and computer chips. I agree with concerns about reckless use of nanomaterials in many cases, without properly assessing the potential for long-term environmental and health impacts. But it's a mistake to lump all applications of nanotech together.

Microelectronics manufacturing must be done in a very exacting way and surely poses minimal risk.
 
You're mostly correct, but there is a footnote.

Keep an eye on these guys. Even if they don't succeed, I predict others will adapt similar techniques to achieve better scaling over relatively small numbers of cores:

http://www.tomshardware.com/news/soft-machines-virtual-cores-visc,31127.html
 

Yeah, I thought of that shortly after making my post, and I'm definitely interested to see what comes of it. But I'm skeptical of how much single threaded performance they could extract with that method. Now, I have only basic knowledge of CPU design, but based on what I know about how CPUs work, the nature of serial workloads, and what little info I've read about VISC, it seems unlikely that they'll be able to improve single threaded performance beyond modern, high performance CPUs that use things like OoOE and branch prediction.

It seems like the main benefit would be adaptability rather than raw performance, as you could have a single CPU that would be fairly good at both single threaded and many threaded loads. That, and the fact that you could maybe get away with simpler architecture (and therefore potentially lower power consumption). But again, my knowledge in this area is pretty limited, so I wouldn't be surprised if time proves me wrong.
 


A hypothetical CNT CPU won't be much different from a traditional one in its underlying architecture and principles. The fundamental difference lies in the material used to manufacture the transistors: carbon instead of silicon or other semiconductors. It'll still be the programmable, scalable and functional SoC that it is today. And as for form, dimensions and packaging, those will keep evolving even without CNT implementation (we already have most of the chipset integrated into the CPU, and stacked memory too!).

 
Out-of-order CPUs can only do local reordering and parallel execution. By analyzing the higher-level program code, VISC can split out higher-level blocks for concurrent execution on different cores. So, it's complementary, but could also be used on in-order architectures, like GPUs.

Of course, similar techniques can be used by compilers and Java/JavaScript engines.
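I don't know the details of how Soft Machines actually implements this, but the rough idea of pulling independent high-level blocks out of one logical thread and running them side by side looks something like this toy Python sketch (all the block names here are made up):

from concurrent.futures import ThreadPoolExecutor

# Hypothetical "blocks" of one logical thread. load_sensor_data and
# load_config don't depend on each other, so a VISC-like scheduler
# (or a smart compiler/JIT) could issue them to different cores,
# while combine() has to wait for both results.

def load_sensor_data():
    return [1, 2, 3, 4]

def load_config():
    return {"scale": 10}

def combine(data, config):
    return [v * config["scale"] for v in data]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=2) as pool:
        data_future = pool.submit(load_sensor_data)    # block A
        config_future = pool.submit(load_config)       # block B
        # Only the dependent block is forced to run after both finish.
        print(combine(data_future.result(), config_future.result()))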
 
"power consumption, clock frequency and thread performance all plateaued in 2010"

The plot uses a logarithmic scale, which means that sentence isn't accurate. What plateaued was the rate of growth of these parameters, not the growth itself.
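A quick toy illustration of the distinction, with made-up numbers that have nothing to do with the actual plot: a series that keeps growing, just more slowly, still rises on a log axis, whereas a true plateau shows up as a flat line.

import math

# Made-up series: one keeps growing but more slowly, one truly plateaus.
still_growing = [100 * 1.5 ** min(t, 5) * 1.1 ** max(t - 5, 0) for t in range(10)]
plateaued     = [100 * 1.5 ** min(t, 5) for t in range(10)]

# On a log axis you are looking at log10 of the value: a plateau is a flat
# line, while slower growth is just a shallower (but still rising) slope.
for a, b in zip(still_growing, plateaued):
    print(f"{math.log10(a):.2f}  {math.log10(b):.2f}")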
 