AMD has started shipping 90nm products

RIGHT NOW I would pay $800+ for an A64 FX chip if it were 3.5GHz. Hell, I would pay $1000 for a 3.5GHz A64.
That would be like an A64 4800+ or something....*Drools*
Heck, I'd even pay that too for such a kickass product. :smile:
But anyway, back to the real point: is AMD going to 65nm like Intel? Also, when dual core comes out and AMD has dual 2.2GHz cores... will it be called a 4.4GHz processor?
Intel is already transitioning one of its fabs to 65nm; they seem to be ahead of AMD again. In theory, they should be able to mass-produce 65nm-based products in very late 2005. If they don't flop that transition like they did with the 90nm one, that might be their strike back against A64 technology. If well executed, that is. And it seems that's asking a lot of Intel nowadays. 😱

As for calling a dual 2.2GHz a 4.4GHz, that would be a blatant lie of course, but I'd expect the marketing freaks to like that idea. Personally, I think that's so wrong it's offensive; they'd better call it "<b>Gemini 2.2GHz</b> - two twin processors in one" or whatever... really catchy and quite accurate.

BTW, what is it with chip names? Why do they always have to be so boring? For instance, you could codename a chip "Chimera" and then its successor "Bellerophon"... Chimera for the mythological beast with three heads - one lion, one dragon and one goat head - on a lion's body... and Bellerophon for the beast's slayer...

And dual-core ones: Gemini. Personally, I liked the sound of "Tanglewood" for Intel's multi-core (up to 16) Itanium, 'cause it made me feel as if it would be a great processor to deal with lots and lots of threads - "entangled threads"... But then they renamed it Tukwila, which makes me kind of go "huh?"...

Hammer was OK, even if slightly dull and uninspired... ClawHammer/SledgeHammer was great, but did they have to come up with the new, boring names? Paris, Athens, Newcastle, San Diego... Not really inspiring.
Sorry I am asking so much.
That is not something to be sorry about. Great things happen when people come up with unexpected questions. The most basic questions are the ones we never quite seem to ask ourselves. So keep asking! :wink:

Edited by Mephistopheles on 08/13/04 03:41 PM.
 
Really? I would figure once the dual cores hit the market in mass... not just initial samples, that AMD and Intel would stop innovating for single-core CPUs.
What would the benefits of single core be in 5 years? How much further can they push the limits of single core?
Wouldn't that be like AMD spending money and time innovating a 32-bit chip 10 years from now? Maybe for a while because of the cost of 64-bit, but in 5 years, let alone 10, will AMD be making 32-bit chips?

I must be so lost on what dual core is and the benefits of it. How will the dual cores be labeled if not the total of the two? Something like A64 3200++? Or 3200+*2?

Do you have a link that points out the direction of dual core and also explains the benefits of it?

What do you call a group of midgets? A pack, gaggle, a pride, or maybe just a murder?
 
Really? I would figure once the dual cores hit the market in mass... not just initial samples, that AMD and Intel would stop innovating for single-core CPUs.
I'd imagine that budget CPUs will indeed be single core for a long time to come, too... I agree with P4Man.
I must be so lost on what dual core is and the benefits of it. How will the dual cores be labeled if not the total of the two? Something like A64 3200++? Or 3200+*2?
I guess they'll probably come up with something, but we don't know exactly what the marketing department will do; AMD hasn't divulged anything. I wouldn't be surprised if they came up with a whole new processor name or high-end lineup for dual core.
Do you have a link that points out the direction of dual core and also explains the benefits of it?
I think a good place to start, if I may be so bold, would be 2cpu's <A HREF="http://www.2cpu.com/articles/6_1.html" target="_new">FAQ</A>. Granted, it's really about two-processor systems and SMP, but it shows you the general direction things are going. Dual-core chips are essentially a more efficient version of dual-processor systems: they're more interesting from a technical point of view and more sophisticated; processor-to-processor communication is more streamlined, and cache is usually shared.

Therefore, dual-core systems also share dual-processor setups' limitations and advantages. In order to truly reap the benefits of going multicore, though, you must use multithreaded code. If you don't, then multicore will only make multitasking smoother - much smoother and trouble-free - but it won't speed up individual, single-threaded apps at all. Of course, ideally, a dual-core chip would perform twice as well as a single-core one, but that will almost never be the case.
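To make that concrete, here is a minimal C++ sketch (illustrative only; the array-summing task is something I made up, and it's embarrassingly parallel, which real code rarely is): a single-threaded loop can only ever occupy one core, while splitting the same work into two threads lets a dual-core chip run both halves at once.

<pre>
// Illustrative sketch: single-threaded vs. two-threaded work on a dual-core CPU.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(50000000, 1);

    // Single-threaded: one core does all the work, the other idles.
    long long total = std::accumulate(data.begin(), data.end(), 0LL);

    // Two threads: each core sums half, then we combine the results.
    long long lo = 0, hi = 0;
    auto mid = data.begin() + data.size() / 2;
    std::thread t1([&] { lo = std::accumulate(data.begin(), mid, 0LL); });
    std::thread t2([&] { hi = std::accumulate(mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    std::cout << total << " == " << (lo + hi) << "\n";
}
</pre>

The point being: nothing speeds up unless the programmer (or a library) does that splitting explicitly.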

That is the main limitation of going dual-core. If AMD does indeed need to reduce core clocks somewhat to keep power dissipation at acceptable levels, then single-thread performance will be penalized - and keep in mind that a huge fraction of software nowadays is single-threaded, not multithreaded.

So, if AMD releases, say, a shiny dual-core processor with the cores at 2.4GHz, and you can get, say, a single 2.8GHz processor for less cash, then there is a big chance that, for many things, the single 2.8GHz will be faster.

(Which is also what makes a theoretical dual-core Dothan a fantastic idea: it wouldn't need to be scaled down to keep power dissipation low; it's already very, very low with a single core!)
 
>Really? I would figure once the dual cores hit the market
>in mass... not just initial samples, that AMD and Intel
>would stop innovating for single-core CPUs.

I wouldn't bet on it. Not all software can benefit substantially enough from multithreading for single-threaded performance to be ignored completely. There may be a shift in focus from single-threaded to multithreaded, but it will take a while for software to catch on at such a scale that there is no longer a point in improving single-threaded CPU performance. Maybe that will never happen.

>What would the benefits of single core be in 5 years? How
>much further can they push the limits of single core?

They've been pushing those limits for the last 30-40 years; I'm sure they will push them a bit further over the next 10 :)

>maybe for a while because of the cost of 64-bit

Nothing to do with it.

>but in 5 years, let alone 10, will AMD be making 32-bit chips?

Probably not for anything but the embedded market, but again, that has nothing to do with multicore. You could build a 32-bit dual/quad core just as well, or a 128-bit single core if you wanted to.

>I must be so lost on what dual core is and the benefits of
>it.
Dual core is just two CPUs on a single chip ('die'). Its performance characteristics are nearly identical to those of a dual-CPU machine, i.e. ranging from roughly the same as a single-CPU machine to ~80% faster in the best cases (100% in theory). In reality, only multithreaded apps will gain (or running more than one single-threaded app simultaneously), and performance increases rarely exceed ~20% for most desktop-oriented tasks. For games, the current speedup is zero. Rendering, Photoshop and encoding tasks can make better use of it, gaining from 20 to ~70% in some cases.
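Those percentages line up with simple Amdahl's-law arithmetic; here's my own back-of-the-envelope sketch (not anything from AMD or Intel): if a fraction p of a task can run in parallel on n cores, the overall speedup is 1 / ((1 - p) + p/n).

<pre>
// Back-of-the-envelope Amdahl's law for a dual core (n = 2).
#include <cstdio>

double speedup(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    for (double p : {0.0, 0.3, 0.6, 0.9, 1.0})
        std::printf("%3.0f%% parallel -> %.2fx\n", p * 100, speedup(p, 2));
    //   0% -> 1.00x  (single-threaded app: no gain, as stated above)
    //  30% -> 1.18x  (typical desktop task: the ~20% figure)
    //  60% -> 1.43x
    //  90% -> 1.82x  (encoding/rendering territory, near the ~80% best case)
    // 100% -> 2.00x  (the theoretical maximum)
}
</pre>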

> How will the dual cores be labeled if not the total of the
>two?

Simply as dual core?

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
Wouldn't developing dual-core be the same thing as developing one core plus the connectivity? Thus rendering the discussion of developing one over the other moot?

Selling is another matter.

The day Microsoft will make something that doesn't suck is the day they'll start making vacuum cleaners.
 
I see your point: if you improve single-core speed, you improve the dual core too, because it's two single cores.

I was making the point about 64/32-bit as in: once you have made a leap in technology, why would you research old and useless tech? But I see now that 64-bit single-core CPUs will be important.

If games won't see any benefits, what about this:
A dual-core machine like an A64 2.6GHz dual core with two 6800 Ultras on a dual PCI-E board... would that enable me to play something like UT2004 with my friend ON THE SAME MACHINE?
Or would there be some bottleneck somewhere that wouldn't support this? Provided there is a lot of memory, like 1.5-2GB.

Because in games other than D3, where you don't need SLI to play at the highest quality, you might be able to do that. It would be cool if we could.

Also, one thing you said about software: is it impossible, or just unlikely, that M$ or Linux or some program could make use of the dual CPUs like a RAID-0 and utilize them as if they were one giant CPU? (Understandably it wouldn't be a 100% conversion, but maybe a dual core at 2GHz would act more like a 3.6GHz.)

I don't know much about the technical side of CPU design, and I know even less about programming.

The possibilities that can arise from a dual core are interesting to me.

What do you call a group of midgets? A pack, gaggle, a pride, or maybe just a murder?
 
Your ignorance (and I don't mean that insultingly at all) allows you to have some truly refreshing ideas :) And again, I don't mean this sarcastically, but both the SLI and CPU-RAID ideas are terrific in their own way :) Unfortunately, they're not quite realistic.

Let's start with two CPUs "in RAID-0". The big difference between storing data and executing code is that there are no dependencies in storage, but there are in code. In a disk/RAID setup, one bit on disk 0 has absolutely no relation to the other bits on disk 1. You can simply put the odd bits on one disk and the even ones on the other, and that's it.

With code, if you randomly chopped it into pieces and fed different pieces to different cores (CPUs), you'd have enormous problems with dependencies (one piece of code depending on the output of another, running on another core), cache thrashing and other issues that would leave performance an order of magnitude lower than on a single core; both CPUs would be completely stalled, bottlenecked by inter-CPU/core communication.
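A tiny made-up example of such a dependency chain, just to illustrate why code can't be striped like disk blocks:

<pre>
// Each iteration needs the previous one's result, so no amount of
// cores can run these steps side by side - the chain is inherently serial.
#include <iostream>

int main() {
    long long x = 1;
    for (int i = 0; i < 30; ++i)
        x = x * 3 + 1;    // step i+1 depends on step i

    std::cout << x << "\n";
    // Contrast with RAID-0: blocks on disk 0 never depend on blocks
    // on disk 1, which is exactly why striping storage works.
}
</pre>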

If you make the CPU a bit more intelligent in determining which "pieces" can be executed in parallel, you end up with what every modern CPU already does - being superscalar, with more than one pipeline and several execution units. There are limits to how far you can push this, though, and stretching it over two cores doesn't give any improvement.

No, in order to make proper use of more than one CPU, you need software that allows it, by creating more than one "thread" that can be processed with as few dependencies as possible on the other threads. Since even compilers can't do this properly and automatically today, you cannot reasonably expect CPUs to ever do it automatically/on the fly.

As for using your PC to play two copies of the game at once... well, frankly, some games already allow this through split screen. Add a second monitor and support for it, and you could do it properly. There is no strict need for dual-core CPUs or SLI GPUs to achieve this. You'd need two keyboards/mice (which might already be a problem under Windows, I think), a powerful enough computer and, mostly, a game that supports it. If properly written, I don't think it would even require twice the CPU power; a lot of things could be shared between both game instances (AI, physics, ...). At least I assume so.

What you are suggesting, however, isn't crazy at all, and a lot of companies, including MS and Intel, are working hard at making it possible/easier to create several virtual instances on one PC (or server). Some of this technology will even allow you to have multiple different operating systems loaded at once, rebooting one instance while keeping the rest running, etc. Such technology should also allow you to play several games at once, or the same game, obviously. But dual-core chips or SLI video cards on their own will not enable this, nor will they really be required for it.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
Intel is already transitioning one of its fabs to 65nm
Yeah, but is that a <i>processor</i> fab?

As for calling a dual 2.2GHz a 4.4GHz, that would be a blatant lie of course, but I'd expect the marketing freaks to like that idea.
If AMD doesn't get rid of their rating system then I <i>would</i> expect them to call it an Athlon64 DC 6500+ or some such nonsense.

BTW, what is it with chip names? Why do they always have to be so boring? For instance, you could codename a chip "Chimera" and then its successor "Bellerophon"...
By any chance have you been watching Mission Impossible 2 lately?

"Build a man a fire and he's warm for the rest of the evening.
Set a man on fire and he's warm for the rest of his life." - Steve Taylor
 
YAY! I knew my ignorance would pay off some day... lol.
I used to talk about physics with my physics teacher and a friend who wasn't familiar with the subject... he would bring fresh ideas and new ways of thinking about it. I don't see my not knowing all of this as a weakness. I am asking because I don't know and I am curious. I think the people who are content to sit in ignorance are the stupid ones, not the people who ask questions... so I take no offense at all at anything you said.

I am thinking about some of this stuff. I guess it's all in the software people's hands. In theory, a game could use CPU1 for physics and CPU2 for AI... therefore allowing more complex code for both.

I know SOME games already allow dual gaming. But you would think that with a simple program... (seems simple to me, but I don't know SH!T about programming and code) EVERY game could be played this way, even if it wasn't designed to.
With a dual core and dual GPUs, you should be able to EASILY set up a virtual desktop or something like that to run the 2nd game from. So ALL games could be played like that.
Also, I didn't think Windows had a problem with multiple keyboards and mice... with USB ports this shouldn't be a problem.

I guess I am dreaming, but I don't think it would take super-complicated code to get two OSes running on the same machine. With dual cores, two GPUs, enough RAM (maybe even sectioned off for each core when this is enabled) and multiple input devices... you should be able to do it.

But then again, this is a moot point; it's all theory.

One more Q... if dual core doesn't bring gaming up, and is limited in other apps... why would companies spend hundreds of millions on something JUST for encoders?
Where is dual-core technology going to help gamers, graphic designers and other people who will only see a minor change in performance?

What do you call a group of midgets? A pack, gaggle, a pride, or maybe just a murder?
 
> In theory, a game could use CPU1 for physics and CPU2 for
>AI... therefore allowing more complex code for both.

Not just in theory; spinning off physics, geometry and AI into different threads is probably the easiest way, if it's not already done to some extent now. I'm not sure what you mean by 'more complex code', but if you mean that you'd therefore have more processing power available using multicore/multi-CPU, then yes, indeed.
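A hedged sketch of what that could look like (the update functions are placeholders I made up; in a real engine, physics and AI are rarely this independent):

<pre>
// One frame of a hypothetical game loop: physics and AI run as
// separate threads (one per core), rendering waits for both.
#include <thread>

void update_physics(double dt) { /* integrate motion, collisions... */ }
void update_ai(double dt)      { /* pathfinding, decisions... */ }
void render_frame()            { /* draw the updated world */ }

int main() {
    const double dt = 1.0 / 60.0;                 // a 60 fps frame
    for (int frame = 0; frame < 600; ++frame) {
        std::thread physics(update_physics, dt);  // core 1
        std::thread ai(update_ai, dt);            // core 2
        physics.join();
        ai.join();
        render_frame();                           // main thread
    }
}
</pre>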

> EVERY game could be played this way, even if it wasn't
>designed to.
>With a dual core and dual GPUs, you should be able to EASILY
>set up a virtual desktop or something like that to run the
>2nd game from. So ALL games could be played like that.


Yeah, if the OS allowed both virtual computers to use 3D-accelerated graphics (on 1 or 2 video cards), 2 different audio cards/outputs (that already works now for sure) and 2 different input devices (which is a must anyhow, not just for games), then yes, your plan would work. We're not there yet, though.

>Also, I didn't think Windows had a problem with multiple
>keyboards and mice... with USB ports this shouldn't be a
>problem.

Just tried it... plugged in a second USB mouse, and it works... well, not if you expected 2 pointers though, LOL... both mice fight for control of the single pointer, like I expected. But hey, things like that should be easy to change; maybe an application could even already detect the 2 different mice and redirect their inputs...
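For what it's worth, Windows XP's Raw Input API can already tell the two mice apart; here's a rough sketch (illustrative only, not production code) of how an application could distinguish them and route each one to its own player:

<pre>
// Sketch: register for raw mouse input; each WM_INPUT message carries
// the handle of the physical device it came from, so two mice can be
// distinguished even though Windows merges them into one pointer.
#include <windows.h>
#include <cstdio>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_INPUT) {
        RAWINPUT ri;
        UINT size = sizeof(ri);
        if (GetRawInputData((HRAWINPUT)lp, RID_INPUT, &ri, &size,
                            sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
            ri.header.dwType == RIM_TYPEMOUSE) {
            // Different physical mice report different hDevice handles.
            std::printf("mouse %p moved (%ld, %ld)\n",
                        (void*)ri.header.hDevice,
                        ri.data.mouse.lLastX, ri.data.mouse.lLastY);
        }
    }
    return DefWindowProcA(hwnd, msg, wp, lp);
}

int main() {
    WNDCLASSA wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = GetModuleHandleA(nullptr);
    wc.lpszClassName = "RawMouseDemo";
    RegisterClassA(&wc);
    HWND hwnd = CreateWindowA("RawMouseDemo", "", 0, 0, 0, 0, 0,
                              nullptr, nullptr, wc.hInstance, nullptr);

    RAWINPUTDEVICE rid = { 0x01, 0x02, RIDEV_INPUTSINK, hwnd }; // generic mouse
    RegisterRawInputDevices(&rid, 1, sizeof(rid));

    MSG msg;
    while (GetMessageA(&msg, nullptr, 0, 0) > 0)
        DispatchMessageA(&msg);
}
</pre>

The hidden window never shows anything; it only exists to receive the WM_INPUT messages.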

>One more Q... if dual core doesn't bring gaming up, and is
>limited in other apps... why would companies spend hundreds
>of millions on something JUST for encoders?

Yeah, why did Intel ever release hyperthreaded NetBurst cores that suck at anything except encoding 😉 Seriously though, dual core (or dual CPU) makes little sense on the average desktop today, but that doesn't mean software won't be rewritten to take advantage of it. I'm sure Intel will spend a fortune subsidizing companies to make their software SMT-friendly (like they did for MMX, SSE, SSE2, ...). In fact, Intel may already have laid the groundwork for this with Hyper-Threading, which also requires threaded software. So it's not like there won't be a benefit at all; if nothing else, at least running 2 different CPU-intensive tasks at once should benefit hugely. It's just that 2 CPUs aren't as good as a single CPU at twice the speed. I wouldn't want to give up more than ~20% clock speed for a second CPU/core.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
It seems to me, after finding this out, that 65nm is much more exciting than dual core. It seems like dual core is best for servers and encoders,
the faster and more powerful single core for other things.
I guess that might be another choice people will have to make in the future.
I wish I could get hold of the companies that design software, give them ideas for what I want to do with my dual core, and have them write code for it. I think dual cores have so many possibilities, but I have a feeling they will be so rushed to market that none of them will be utilized.

What are the odds of ANYTHING I have mentioned making it to market?

What do you call a group of midgets? A pack, gaggle, a pride, or maybe just a murder?
 
>It seems to me, after finding this out, that 65nm is much
>more exciting than dual core.

Don't get overexcited by either; it's just progress, evolution. There is no revolution (or if there is one, it's a continuous one).

> It seems like dual core is best for servers and encoders,
>the faster and more powerful single core for other things.

Yes, servers and workstations will benefit the most (and first) from multicore, but I wouldn't completely ignore it for the desktop either. There is already quite a bit of MT software out there; just don't expect dual core to double performance for everything (or even anything).

>What are the odds of ANYTHING I have mentioned making it to
>market?

Like what? 4GHz K8s? No. 4GHz "K9", K10, K11 or whatever? Definitely. Dual-core, dual-GPU machines capable of supporting two games/gamers at once? Yes, definitely; it won't take that long either. Multicore CPUs that benefit automatically even from single-threaded software? I doubt it in the near future, but I wouldn't rule it out.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 
With the problems Intel had with 90nm, I figured that the transition to 65nm would be put on the shelf for a long time and ultimately not yield great results.

But knowing that 65nm now looks much more likely, since AMD is producing quality 90nm products, it's more exciting that 65nm might extend the limits of the CPU as I know it today.

What's after 65nm? Atomic transistors? I have heard about those, but isn't that super expensive?

What do you call a group of midgets? A pack, gaggle, a pride, or maybe just a murder?
 
>With the problems Intel had with 90nm, I figured that the
>transition to 65nm would be put on the shelf for a long
>time and ultimately not yield great results.

Impossible for us to tell, really. First, I'm not sure Intel is having any trouble at all with 90nm; they are having trouble with *Prescott* (not really with Dothan, which is also 90nm). For all I know, Prescott on 130nm could have been a 200W monster.

Secondly, Intel will be moving to FD-SOI at 65nm while AMD (AFAIK) will stick with PD-SOI as now, so it's even harder to guess how good or bad the 65nm products will be.

>What's after 65nm?

45nm :)
I don't know how much further they will be able to shrink after that (I think I read something about a technology in the works that would make it possible to scale down to 10nm without radical changes; anything beyond that is science fiction), but it's certain it will get increasingly hard and expensive, and it won't last forever. At some point quantum mechanical effects just take over, and you can't design traces smaller than a couple of atoms anyway. But by that time, who knows what they will come up with? 3D layouts, quantum computers, diamond/optical CPUs...? Whatever; use your imagination - reality is likely even more absurd :) There is very little point in trying to look more than ~5, maximum 10, years ahead; even Intel and AMD don't really know what they'll do then (just as they didn't 10, 20, 30 years ago).

In this context, this <A HREF="http://www.byte.com/art/9612/sec6/art7.htm#" target="_new">old Byte article</A> might interest you. It's from 1996 and asked top executives at Intel, AMD, Cyrix, DEC, etc. to look into their crystal balls. Note how they are all pretty much at a loss as to what would happen 10 years down the line, as I'm sure they are now. Some predictions are pretty much spot-on, though; Andy Grove and Jerry Sanders in particular did pretty well, if you ask me.

= The views stated herein are my personal views, and not necessarily the views of my wife. =