Bad start for Intel

I know too many people who have clocked Northwood Cs to 5 GHz to think they can't do it and be stable. Going to half the gate size should make it a cakewalk.
The scotties have serious leakage-current problems, but those problems started with Northwood. FD-SOI and a realignment of the current paths to a lower level would help a lot.
We'll probably find out in about five years, when Intel needs to speed up again because multi-cores just don't give the oomph we want anymore.
 
AMD does (supposedly) plan to move to 65nm this year, though they don't really need to rush. Their designs are still better than Intel's (from an engineering standpoint), and as a result their newly announced processors outperform the 65nm Intel chips while using, I think, about the same power.

Intel has to complete their transition away from the NetBurst architecture to a more streamlined design before they can really compete again.

With AMD working toward a mobile platform and Dell supposedly being open to selling AMD, Intel needs to work fast (though more importantly smart) to heat up the competition.

I am not siding with one or the other; I personally prefer there to be stiff competition between the two, since it means better overall designs for all of us.
 
I'm really starting to hate Intel's new chip naming scheme already. Why don't they just give each chip a name rather than a confusing jumble of numbers and letters that nobody can decipher, let alone remember?
 
I'm really starting to hate Intel's new chip naming scheme already. Why don't they just give each chip a name rather than a confusing jumble of numbers and letters that nobody can decipher, let alone remember?
It's funny. When Intel used primarily MHz to differentiate their product lines, everyone complained that Intel was pushing the MHz Myth. But when Intel switches to part numbers, people complain that it's too confusing. Well, gee.
 
I know too many people who have clocked Northwood Cs to 5 GHz to think they can't do it and be stable. Going to half the gate size should make it a cakewalk.
Yep. Northwood had a lot of potential left in it.

The scotties have serious leakage-current problems, but those problems started with Northwood. FD-SOI and a realignment of the current paths to a lower level would help a lot.
I really don't know why Intel is resisting improving the process so much. :? It seems kind of weird. It can't really just be about the cost, can it?

We'll probably find out in about five years, when Intel needs to speed up again because multi-cores just don't give the oomph we want anymore.
:lol: They'll probably invent some weird NetBurst2/virtualization technology that reintroduces a giant pipeline. Not that this would be a bad thing, if done right.

Of course, I'm still waiting for an array of small, completely virtual 'cores' that share a unified set of registers, L1, and L2 cache, set within a master control core that breaks the incoming code down into order-adjustable micro-ops so it can relabel and distribute the work to these cores as efficiently as possible, regardless of how the original code was written. You know, a CPU that really does good instruction-level parallelism from x86 code. It'd take a lot of R&D and a truckload of testing, but if no one starts it, how can we ever get there? Once the kinks are worked out, it'd have to be better than just cramming more x86 cores into a single package.

I mean, imagine how much of the resources of these dual-core x86 chips go wasted, especially when only running one intensive single-threaded app at a time. The more separated things are, the less efficient they are, and the more of the die is wasted on unnecessary redundant units.
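Here's a rough toy sketch in Python of what I mean (every name here is made up purely for illustration, not any real chip's interface): one shared front-end decodes and schedules everything, and the 'cores' are nothing but a common pool of execution units.

from collections import deque

class ExecutionUnit:
    """A bare execution resource -- no private decoder, cache, or scheduler."""
    def __init__(self, name):
        self.name = name
        self.busy_until = 0          # cycle at which this unit is free again

class SharedFrontEnd:
    """Single decoder/scheduler feeding the whole pool of execution units."""
    def __init__(self, units):
        self.units = units
        self.queue = deque()         # decoded micro-ops waiting to issue

    def decode(self, instruction):
        # Pretend decode: split one "instruction" into its micro-ops.
        self.queue.extend(instruction.split('+'))

    def issue(self, cycle):
        # Hand each pending micro-op to the first free unit, regardless of
        # which thread it came from -- no per-core duplicate queues or tables.
        issued = []
        for unit in self.units:
            if unit.busy_until <= cycle and self.queue:
                uop = self.queue.popleft()
                unit.busy_until = cycle + 1
                issued.append((unit.name, uop))
        return issued

pool = [ExecutionUnit(f"EU{i}") for i in range(4)]
front_end = SharedFrontEnd(pool)
front_end.decode("load+add+store")
front_end.decode("mul")
print(front_end.issue(cycle=0))      # micro-ops spread across whichever units are free

The point is that only the execution units get duplicated; the decode and scheduling logic exists once and feeds whatever unit happens to be free.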
 
Of course, I'm still waiting for an array of small, completely virtual 'cores' that share a unified set of registers, L1, and L2 cache, set within a master control core that breaks the incoming code down into order-adjustable micro-ops so it can relabel and distribute the work to these cores as efficiently as possible, regardless of how the original code was written. You know, a CPU that really does good instruction-level parallelism from x86 code. It'd take a lot of R&D and a truckload of testing, but if no one starts it, how can we ever get there? Once the kinks are worked out, it'd have to be better than just cramming more x86 cores into a single package.

That is sorta the direction the Cell processor is starting to head toward.
 
That is sorta the direction the Cell processor is starting to head toward.
Sort of, but not really. The idea is to get rid of as many of the duplicate units as possible. Why should each 'core' have its own decoders, cache, registers, tables, queues, schedulers, yada yada yada? And they still have to talk to each other instead of openly sharing data. As cores shrink more and more, having just one 'core' handle all of that, with the remaining 'cores' being just execution units, will allow for a much more efficient design. From what I've read, the Cell processor is kind of a step in that direction, but not nearly enough of one.
 
True. However, from a design standpoint, what you are saying is more difficult than it sounds, and it will be a few years before something like that is seen.

(Though on one level, that is kinda how a current processor works, with different elements, such as the ALU, included on one core). I know it is not the same as what you are talking about though.
 
True. However, from a design standpoint, what you are saying is more difficult than it sounds, and it will be a few years before something like that is seen.
Oh, I know. Which is why work should be started now. (It should have started already.) It only really starts to make economic sense once we reach quad-cores anyway.

(Though on one level, that is kinda how a current processor works, with different elements, such as the ALU, included on one core). I know it is not the same as what you are talking about though.
Exactly. This is just taking that to a much further extreme. Procs are already breaking x86 code down into micro-ops specific to the proc. We just need to take this further.
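For example (grossly simplified, and the real micro-op encodings are model-specific and undocumented, so this mapping is hypothetical), an x86 instruction that touches memory already gets cracked into separate load / ALU / store micro-ops inside today's procs:

# Hypothetical illustration only -- real micro-op formats are not public.
# The point: one x86 instruction, several simpler internal operations.
x86_to_uops = {
    "add [mem], eax": ["load tmp, [mem]",    # read the memory operand
                       "add tmp, eax",       # do the actual ALU work
                       "store [mem], tmp"],  # write the result back
    "inc ebx":        ["add ebx, 1"],        # simple ops map roughly 1:1
}
for insn, uops in x86_to_uops.items():
    print(insn, "->", uops)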
 
Earlier than Copernicus, Aristotle invented the scientific method. But your point still stands: if we are going to compare products, then we must limit the number of variables involved. In an ideal world, both the Intel and AMD chips would run on the exact same motherboard with the exact same RAM, etc. Comparing an overclocked processor to a known stock-clocked processor is a gross violation of the scientific method. Not to mention that there are still cache differences and things of that nature. An Athlon X2 3800+ overclocked to dual-core FX speeds is not the same as a dual-core FX; there are cache differences.
 
The speed of light is relative to its medium, but it maxes out in a vacuum at about 186,000 miles per second. Light travelling through fiber optics doesn't come anywhere near that speed.
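As a rough sanity check (assuming a typical silica-core refractive index of about 1.47, which is a ballpark figure, not a spec), light in fiber only manages roughly two-thirds of its vacuum speed:

# Back-of-the-envelope: speed in a medium is c divided by the refractive index.
c_vacuum_mps = 186_282            # miles per second in a vacuum
n_fiber = 1.47                    # assumed typical silica-core refractive index
v_fiber = c_vacuum_mps / n_fiber
print(f"~{v_fiber:,.0f} miles per second in fiber, about {v_fiber / c_vacuum_mps:.0%} of vacuum speed")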

I read in a Popular Mechanics magazine that some scientists made light travel faster than "itself" and lowered its speed in a special type of fiber optic...

Indeed, it would be a VERY expensive PC...
 
well said supremelaw

It would be a high-risk venture for Intel to put their money into a bunch of theory no one knows much about. They probably figure they've come this far, so let's just improve on what we've already spent so much money on.
 
but um.. isn't light fast enough anyway?

Anyhow, you couldn't make a computer chip out of fibre optics, because fibre optics don't work through cables bent at more than 90 degrees, and I don't think you could fit enough fibre-optic cable into the size of a processor core without bending some of it.
 
The performance of the Presler is exceptional for an Intel based system
Perhaps, for an Intel system. Just don't compare it to an AMD, or it would look bad.
Sad that they did not include what the scotties do best: these little puppies are great heaters.
 
Intel's got Conroe around the corner; that will at least catch up to an A64 and might beat it if Intel gets it right (it's their "new" architecture, a ground-up new design, so it damn well should, even if it's not equal clock for clock). Either way, Conroe will be cooler and quicker than the current NetBurst rubbish.