Intel Says Moore's Law Depends on Increasing Efficiency

How would you fit that into a chip? A chip isn't human; it has no brain. The 1s and 0s (binary) are just voltage levels above or below a threshold.
 
Sandy Bridge is powerful, but it doesn't need to stay at 1.6GHz when I'm just browsing the web. Intel needs to do better at down-clocking and down-volting for further power saving, and should launch a platform certification like Centrino with requirements for all the other components to have power-saving features. Currently, HDDs have no power-saving mode other than shutting down, and the same goes for the chipset and RAM.
 
[citation][nom]Tomfreak[/nom]Sandy Bridge is powerful, but it doesn't need to stay at 1.6GHz when I'm just browsing the web. Intel needs to do better at down-clocking and down-volting for further power saving...[/citation]

Agreed. It makes me cringe when I see people using, or salespeople recommending, i5 processors for web browsing and movies; God forbid it's an i7. Especially if the i5/i7 comes with no dedicated GPU, resulting in a machine that can do anything except play most games.
 
Either current passes or it doesn't. As long as the hardware is based on electricity, we're going to be using binary systems/base 2.

This is not true...

Current is not simply on or off; it can have different polarity and magnitude. In fact, there are analog computers too. Electronic computers with a base other than two can be built, but they require more real estate and more complex structures. Binary is very simple to implement, and that is a great advantage: it means the same chip can pack much more logic, and evaluating results is dramatically simpler (hence faster). If we ever see computing systems that use light, chances are they will still be binary, to avoid the overhead of processing a variety of states. One advantage of light is that signals can easily criss-cross through the same space. A variety of electrical signals can be sent through the same wire without interference as well, but that requires filters (a huge real-estate cost when dealing with so many signals). Quantum computing is also an option, although we don't see much of it; people are still evaluating whether it is doable at all. Proponents are challenging those who disagree to come up with a proof that it is impossible (in fact, there is a prize for anyone who can provide a proper proof).
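To make the "more complex structures" point concrete, here is a minimal sketch (Python, purely illustrative, my own example rather than anything from the article) of why multi-level signalling is harder than binary: with the same noise floor, eight levels squeezed into the same voltage swing get misread far more often than two.

[code]
import random

def transmit(symbol, levels, noise):
    # Encode a symbol as one of `levels` evenly spaced voltages
    # in a 0.0-1.0 swing, then add Gaussian noise.
    step = 1.0 / (levels - 1)
    return symbol * step + random.gauss(0, noise)

def detect(voltage, levels):
    # The receiver's decision step: snap the noisy voltage to the
    # nearest legal level.
    step = 1.0 / (levels - 1)
    return min(range(levels), key=lambda s: abs(voltage - s * step))

def error_rate(levels, noise, trials=20000):
    errors = 0
    for _ in range(trials):
        sym = random.randrange(levels)
        if detect(transmit(sym, levels, noise), levels) != sym:
            errors += 1
    return errors / trials

# Same noise, same voltage swing: binary vs. octal signalling.
# More levels means more bits per symbol, but the decision
# thresholds sit closer together, so errors explode.
for m in (2, 8):
    print(m, "levels -> symbol error rate:", error_rate(m, noise=0.08))
[/code]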
 
It's common nowadays to believe that server and supercomputer technology will eventually be delivered in very small packages.
We have long been waiting for Intel's 128-core chip: 120W TDP, 128/256 threads.
I don't think they'll be able to scale 3kW down to 20W just yet, but I do believe we'll be seeing micro servers based on a single chip design. Basically, a regular desktop with many cores like Atom processor cores, calculating multiple values in parallel. Some chips would be optimized for running web servers, search engines, and databases; others for applications like Folding@home or scientific programs.

Doing so, one could very easily reduce a 3kW server to a mere 200W server with current technology!
 
Intel's problem is that GPUs are now getting much, much more powerful than CPUs, and with Windows 8 supporting ARM, Intel could have a big problem if ARM CPUs are developed for desktops and laptops and clocked up to around 3GHz. x86 is just so inefficient.
 
[citation][nom]lamorpa[/nom]You actually have no idea how deep your fundamental misunderstanding of this topic is. "thought up the idea of an 8 base"! "dial up, using 8 base" Classic! This might be better than the "electricity has only 2 states" guy.[/citation]

I think you misunderstood me. I did not say you'd actually run an 8-base system over dial-up, I just used dial-up as a theoretical reference point. It's kinda obvious that current technology (dial-up included) is not capable of using an 8-base system.


[citation][nom]rosen380[/nom]Three base-2 bits can represent eight states, just like one base-8 bit. Assuming that the computer can process a base-8 bit as fast as a base-2 bit, then I'd think we're talking about a 3x improvement in performance, not a ~20000x improvement that you are suggesting with your example. Since I'm pretty sure the cpu will need more time to process one base-8 bit than one base-2 bit, I doubt that 3x increase would ever be seen...[/citation]

The CPU isn't the issue here; transfer times are. If and when an 8-base system does come to our doorsteps, I think CPUs will be many times more powerful than they are now.

Take 1 byte on a 2-base system: 10101010
You end up with 256 possibilities.

Take 1 byte on an 8-base system: 01234567
You end up with 16777216 possibilities.

That's 65,536 times more than the 2-base system. It wouldn't work logically with current systems (ASCII etc.) because you don't have enough characters to utilize the huge increase in byte size, but using compression tools, you could fit 1GB into roughly 15KB.

It would be silly to use such a system for 'ordinary' work, because you'd have to keep compressing and decompressing the data, but if you needed to put, say, 300 terabytes into long-term storage, you could do so without wasting much space.

Additionally, if you wanted to transfer a huge file over the internet, you could do it without waiting days for the download to finish or worrying about your monthly data cap.
 
[citation][nom]Pherule[/nom]I think you misunderstood me. I did not say you'd actually run an 8-base system over dial-up, I just used dial-up as a theoretical reference point...[/citation]
It's difficult to guess whether you are more fundamentally misinformed or simply do not understand the technologies involved. A data stream is transmitted over a channel according to its bandwidth, which involves both the width of the range of frequencies used and the symbol rate. Dial-up uses a spread of frequencies, not just two states; at very low baud rates you could even distinguish the individual tones. It is about as far from binary as you can get. For a given channel, the theoretical maximum transmission rate fixes an exact trade-off between symbol rate and bandwidth. Changing from a binary to an octal encoding would not change the data rate at all, because of the physically required reduction in clock.
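For the curious, here is a back-of-the-envelope sketch of that trade-off (Python; the channel numbers are hypothetical, loosely voice-band-like, and mine rather than the poster's): a bigger symbol alphabet buys bits per symbol, but the noise-limited capacity of the channel does not move.

[code]
import math

def nyquist_rate(bandwidth_hz, levels):
    # Noiseless Nyquist limit: R = 2B * log2(M) bits/s.
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley limit: C = B * log2(1 + S/N) bits/s.
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 3100      # assumed voice-band channel width, in Hz
snr = 1000    # assumed ~30 dB signal-to-noise ratio

print("capacity:", round(shannon_capacity(B, snr)), "bits/s")  # ~30900

# Octal triples the bits per symbol over binary...
for m in (2, 8):
    print(m, "levels, Nyquist rate:", round(nyquist_rate(B, m)), "bits/s")

# ...but nothing can exceed the capacity printed above, so a bigger
# alphabet just reaches the noise limit at a lower symbol rate. The
# numeric base of the encoding adds no capacity.
[/code]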
 
[citation][nom]Pherule[/nom]I think you misunderstood me.[/citation]

1GB is 1GB whether it is base 8 or base 2. If you want to talk about compression rates, talk about compression rates, not transmission rates over dial-up. Keep thinking about your base-8 system and you will come to understand why it is not in use.

And to address the possible fundamental misunderstanding that started us down this road: Intel's tri-gate, 3D technology has nothing to do with changing the fundamentals of computing. It is a change from laying out circuitry in a two-dimensional space to a three-dimensional one. As in, introducing height to logic circuit designs.
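A quick sketch of the "1GB is 1GB" point (Python, my own illustration): the same value written in base 8 or base 2 is the same information, so re-basing data cannot shrink it.

[code]
# The same quantity of information, three ways of writing it down.
value = 0b101010101010101010101010   # 24 binary digits
print(value == 0o52525252)           # True: the same number, 8 octal digits
print(value == 11184810)             # True: the same number in decimal

# One octal digit always corresponds to exactly three binary digits,
# so a "byte" of 8 octal digits carries 8 * 3 = 24 bits -- no more.
[/code]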
 
[citation][nom]Pherule[/nom]Take 1 byte on a 2-base system... using compression tools, you could fit 1GB into roughly 15KB...[/citation]
I didn't initially read far enough into your reply to see that you had introduced yet another nonsensical concept: that data compression could somehow be a function of the numeric base the data is stored in. I hope you do even the littlest bit of reading on the topic, to realize that the idea is completely silly, because 1) it's all the same information; base 2, base 8, or base 768 makes no difference, and 2) if this somehow were the case, what could possibly make you think that everyone else (e.g. any of the thousands of PhD researchers in universities and industry) had missed it? Didn't that make you wonder?
 
In response to that, I'll just quote myself:
[citation][nom]Pherule[/nom]I guess some people will always be trapped in history, never wanting to make progress[/citation]

That said, not ALL researchers are backward-minded.

To those above who said it is impossible: get your minds off dial-up and baud rates and look at it this way.

Let's assume you're now using light (or any other medium that supports more than two 'states').

In binary, assume you've got red and blue light. Both take roughly the same time to travel from one side of the world to the other. Let's assume an 8-length byte in binary takes 66 milliseconds to travel from Canada to Australia.

Now introduce other colors of the spectrum, so you've got 8 colors. Sending an 8-length byte in base 8 should still take about 66 milliseconds to travel that distance.

Come on guys, it doesn't take a genius to figure out that it is indeed possible. I could put it in language terms too: if you have a language with only 100 words, you need very long sentences to say anything; with a language of over 30,000 words, not so much. The same idea applies to transfer rates.

One of the only real reasons it hasn't been introduced yet is that the current binary system is so deeply integrated into the world's systems that it would take enormous effort to replace. That, and misguided projections of the speed increases. The first-world nations aren't interested in boosting international bandwidth this way because they already have connections in excess of 10Mbit/s, and in some countries up to and even beyond 100Mbit/s. They forget that some nations are stuck with sub-1Mbit/s speeds.
 
But you still have to interpret the signal. If on vs. off can be identified in 1ns, but it takes 8ns to differentiate 8 wavelengths of light from each other, then you haven't gained anything: the increase in throughput is cancelled out by the increase in processing time.
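Worked through in a quick sketch (Python; the detection times are the hypothetical ones from the paragraph above, not measurements):

[code]
binary_detect_ns = 1   # assumed time to tell on from off
octal_detect_ns = 8    # assumed time to tell 8 wavelengths apart

binary_bits_per_symbol = 1   # log2(2)
octal_bits_per_symbol = 3    # log2(8)

print("binary:", binary_bits_per_symbol / binary_detect_ns, "bits/ns")  # 1.0
print("octal: ", octal_bits_per_symbol / octal_detect_ns, "bits/ns")    # 0.375

# The 8-state signal carries 3x the bits per symbol, but if each
# symbol takes 8x longer to resolve, net throughput actually drops.
[/code]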

If they could differentiate more than two states fast enough to make it worthwhile, then I'm certain we'd see it in at least high-performance specialty chips.

 
[citation][nom]Pherule[/nom]In response to that, I'll just quote myself:That said, not ALL researchers are backward minded.To those above who mentioned it is impossible: Get your mind off dial-up and baud rates etc[/citation]
Do you even read the comments you are replying to? Dial-up is not binary. Dial-up is not binary. It is not binary. It is not. Data capacity is a function of frequency range (bandwidth) and rate. Data capacity is a function of frequency range (bandwidth) and rate. That is what it is. The theoretical maximum carrying capacity of a channel is fixed. You can use a wider spectrum at a lower rate or a narrower spectrum at a higher rate. The two things are fundamentally in ratio to each other. No one is missing some concept here except for you. You are not coming up with some amazing idea that no one thought of because they are not thinking "out of the box." You are thinking in a not-physically-possible-in-this-universe fantasy box that no one else is joining.

No one has 'thought' of the ideas you are coming up with because they are either already implemented or physically impossible. You do not know what you are talking about. At all. It is as though you are saying you can get 10 times the gas mileage in your car by installing a gas tank that is 10 times smaller (since you fill it up every time, the car should go just as far...).
 
[citation][nom]lamorpa[/nom]Dial-up is not binary.[/citation]
You don't say? /sarcasm

I said get your mind off dial-up, and in your very next post you dedicate an entire paragraph to dial-up bashing. Just stop; you're stuck in a loop with yourself.

It is analog, and the ratio you are referring to may apply to dial-up, but not all systems have a fixed ratio like that. I explained quite clearly why it hasn't been implemented yet; I think you're the one with reading comprehension problems.

@rosen380: It *may* (though probably not) take 4 times longer to read an 8-base signal as compared to a 2-base signal, but the data rate would be 65536 times higher, so it wouldn't make much difference.
 
"@rosen380: It *may* (though probably not) take 4 times longer to read an 8-base signal as compared to a 2-base signal, but the data rate would be 65536 times higher, so it wouldn't make much difference."

Oh, so you have a source for that? Or have you done it and the CPU Cartel is holding you down?

It's not really my department, so maybe someone else can chime in, but I still see it as tripling the data rate, not multiplying it by 65k; the data that formerly took three base-2 bits now takes one base-8 bit.

Using that kind of math, why stop at bytes? A base-2 word has 4.3 billion combinations, and a base-8 word would have 79,200,000,000,000,000,000,000,000,000 combinations --- 18 billion billion times as many as base-2.
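Those figures check out (a quick Python sanity check of my own, not part of the post):

[code]
print(2 ** 8)             # 256: combinations in a base-2 byte
print(8 ** 8)             # 16777216: combinations in a base-8 "byte"
print(8 ** 8 // 2 ** 8)   # 65536 -- but this compares unequal bit counts

# The honest comparison: one base-8 digit carries log2(8) = 3 bits,
# so 8 octal digits hold exactly as much as 24 binary digits.
print(8 ** 8 == 2 ** 24)  # True

# And the word-sized numbers above:
print(2 ** 32)            # 4294967296 (~4.3 billion)
print(8 ** 32 == 2 ** 96) # True; the ratio is 2**64, ~1.8e19
[/code]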

If there was really an overall performance boost on the order of 10000x, it would happen.

If it is close -- i.e. 2-4x as long to process each bit while needing a third as many bits -- then that is a good reason why no one is making these general-purpose machines.

If it was 65000x faster, it would have happened already.
 
[citation][nom]Pherule[/nom]You don't say?...[/citation]
For the last time: You are fundamentally confused and/or misinformed if you think that changing the data's numeric base would result in a higher data density for a transmission channel. Final answer.
 
[citation][nom]impreza[/nom]Intel's problem is that GPUs are now getting much, much more powerful than CPUs, and with Windows 8 supporting ARM, Intel could have a big problem if ARM CPUs are developed for desktops and laptops and clocked up to around 3GHz. x86 is just so inefficient.[/citation]

x86 is more efficient at 3GHz than any other common architecture. Even a Pentium 4 has higher IPC (instructions per clock, i.e. per-core performance per Hz) than ARM. Cortex-A15 might be about as good as the P4, maybe a little better in IPC, but that's still nothing compared to any modern desktop processor.

However, ARM does have interesting power efficiency. Then again, Intel's smartphone and tablet Atoms are just as efficient and have similar per-core performance to Cortex-A15.
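The comparison being made boils down to per-core performance being roughly IPC times clock speed. A tiny sketch with made-up, purely illustrative IPC figures (not benchmark data for any real chip):

[code]
# Hypothetical IPC and clock values, for illustration only.
cores = {
    "old CISC core":       {"ipc": 1.0, "ghz": 3.0},
    "mobile RISC core":    {"ipc": 0.9, "ghz": 1.5},
    "modern desktop core": {"ipc": 2.5, "ghz": 3.0},
}

for name, c in cores.items():
    # Per-core throughput in billions of instructions per second.
    print(name, "-> ~%.2f GIPS" % (c["ipc"] * c["ghz"]))
[/code]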
 
[citation][nom]rosen380[/nom]Oh, so you have a source for that? Or have you done it and the CPU Cartel is holding you down? It's not really my department, so maybe someone else can chime in, but I still see it as tripling the data rate, not multiplying it by 65k[/citation]
I explained how it would work quite clearly. If you didn't catch that, then perhaps you need to go back to grade school and brush up on your maths.


[citation][nom]lamorpa[/nom]For the last time: You are fundamentally confused and/or misinformed if you think that changing the data's numeric base would result in a higher data density for a transmission channel. Final answer.[/citation]
All you can think of is channels and baud rates. Get over yourself. It's not just about changing the numeric base; it means each transmitted signal can carry a wider range of states. I explained how it would work with colors; do I have to make it any simpler for you?
 
Data can't be stored more densely in base 8 than in base 2. If a storage cell could hold eight distinct states, those same states could be used as a multi-level cell for base 2, so there is no difference between base 8 and base 2 in the capacity required to store identical data. The same is true for any numerical base, be it base 2 or base 1024. Base 8 is not any denser than base 2, because if a storage technology had 8 different states, each base-8 digit could simply be used to store three base-2 digits instead.

Granted, there could be a difference in processing time IF, and only if, each digit of a base-8 signal could be identified about as quickly as each digit of base 2. I don't think we can make hardware that processes base 8 faster than base 2; it's probably very difficult at such small scales with these transistors. With current tech, base 8 would probably be slower than base 2. Converting between base 2 and base 8 wouldn't even be difficult, so a base-8 processor could be made compatible with any base-2 processor we wanted; on a software level it would probably be no different from a base-2 processor.

Basically, we could probably make it an x86-compatible processor, or ARM-compatible, etc. fairly easily; it just needs something to convert base 2 to base 8, and that's simple math. With light-based tech it might be a little easier to carry 8 different signals for base 8, but it would still waste a lot of space and would probably still be slower than base 2. Basically, base 8 would mean larger chips with less performance.

Changing the base still won't change how much data fits into a given amount of storage. Let's say a cell of storage can hold 8 different values. It can store a single base-8 digit with 8 possibilities, or it can store three base-2 digits (000, 001, 010, 011, 100, 101, 110, or 111), still with 8 possibilities. This is why it makes no difference for storage whether the data is base 2, base 8, or any other base: the stored data can be interpreted in any base we want. We could take a four-digit hexadecimal value and it is essentially a 16-bit base-2 value rather than four digits of base 16. A four-digit hexadecimal value is just easier for our puny human minds to work with than a 16-bit binary value, and it takes up less space when we write it out, but to the computer it is the same.
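Here is that cell-packing argument as a little sketch (Python, my own illustration): pack three binary digits into each 8-state cell and unpack them again losslessly, which is essentially how multi-level flash cells already store several binary bits apiece.

[code]
def pack_bits_to_cells(bits):
    # Pack a binary string into 8-state cell values, 3 bits per cell.
    assert len(bits) % 3 == 0
    return [int(bits[i:i + 3], 2) for i in range(0, len(bits), 3)]

def unpack_cells_to_bits(cells):
    # Recover the original binary string from the 8-state cells.
    return "".join(format(c, "03b") for c in cells)

bits = "101110001010"                        # 12 binary digits
cells = pack_bits_to_cells(bits)             # [5, 6, 1, 2] -- 4 cells
print(cells)
print(unpack_cells_to_bits(cells) == bits)   # True: nothing gained or lost
[/code]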

I'm not an expert, so if I'm wrong I'd like an expert to correct me, but I'm pretty sure about this.
 
Terahertz seems to be the wrong term; I think they meant teraFLOPS. Terahertz would imply that the processor(s) run at 1THz, and the heat output would be impossible to cool at such a frequency with any technology even remotely similar to what we have in the works today.

Also, yay, another spam post.
 
The critical problem is heat... right now, even at gigahertz speeds, we need active cooling systems to keep the CPU healthy. Sure, you could run today's processors at terahertz speeds, but they would need to be built from some kind of superconducting semiconductor material and kept at very low temperatures. It would be a nice lab experiment, but completely commercially unfeasible.

I think the more fundamental problem is that we need to design our processors differently: instead of 4 or 8 monolithic CISC cores, we should be building processors with 128+ RISC cores, and then develop languages and paradigms that let software developers easily harness this massive parallelism without needing a PhD or a deep understanding of the CPU architecture.
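As a sketch of the kind of "easy parallelism" I mean (Python's standard concurrent.futures here, purely as an illustration of the programming model, not a claim about any particular chip):

[code]
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # Stand-in for one independent chunk of computation.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [10000] * 128   # imagine one chunk per core on a 128-core chip
    # The executor fans the chunks out across every available core;
    # the programmer never touches threads, locks, or the architecture.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(work, jobs))
    print(len(results), "chunks done")
[/code]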

Additionally, until we can develop high-temperature superconducting materials, we need to introduce some form of cooling system integral to the processor design.

Come to think of it, such a design would also mean that when web browsing, 98% of the cores could power down, saving power and heat, and when gaming the whole shebang would fire up.

But again, the focus should be on massively parallel designs of simple RISC-based cores AND, more importantly, giving software developers the tools to easily harness that scale of parallelism.

Just my 2 cents.
 