Archived from groups: alt.comp.periphs.mainboard.elitegroup
Hello Flipper,
Your last post had me chuckling. Lots of neat history there.
>What's your core voltage at? Because that is the biggest factor in
>'extra heat' from overclocking. Power goes up with the square of
>voltage but linearly with speed.
Core voltage is running at 2.2 V, which is the factory specification.
Everything is hunky-dory as far as I can tell.
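Just to see what that scaling means in practice, I scribbled a quick
back-of-the-envelope sketch (Python, purely for the arithmetic), assuming
the usual rule of thumb that dynamic power goes roughly as V squared times
clock frequency. The numbers are made-up illustrations, not real K6-2
figures:

# Rough relative-power estimate, assuming P is proportional to V^2 * f.
# All figures here are illustrative placeholders, not datasheet values.
def relative_power(v_old, f_old, v_new, f_new):
    """Ratio of new dynamic power to old dynamic power."""
    return (v_new ** 2 * f_new) / (v_old ** 2 * f_old)

# Bumping a 2.2 V / 500 MHz part to 550 MHz at the same voltage:
print(relative_power(2.2, 500, 2.2, 550))   # -> about 1.10, i.e. ~10% more power
# The same overclock, but also raising the core to 2.4 V:
print(relative_power(2.2, 500, 2.4, 550))   # -> about 1.31, i.e. ~31% more power

So the speed bump alone is fairly benign; it's the voltage bump that
really piles on the heat, just like you said.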
>>All right. I notice that both my old cpu and new cpu had the code
>>26351 on the carrier. Initially the pull I bought had a bunch of
>>remnant thermal paste on it, so seeing the same code on the carrier,
>>and not being able to make out the lettering on the actual chip (due
>>to the paste residue) made for some anxious moments: "Did I just get
>>taken, and buy another K6-2 350MHz?" I quickly rubbed off the paste,
>>and lo, there was the "500" that gave me relief.
>
>Oh wow. Sorry for the heart attack and I don't know how I got the
>numbers mixed up but 26050 is the 'old' K6 core. 26351 IS the CTX
>core.
>
>I must have mispasted from the wrong column in the data sheet.
Oh, that's all right. Actually, it wasn't your numbers that freaked
me out; this was before you ever mentioned the gold numbers. I just
thought those gold numbers could be used to figure out which CPU/speed
was on the carrier. I thought that each processor would have a unique
"gold" number. I see now that the cpu itself has the unique number
telling about its recommended operating conditions and manufacture
date, in addition to the name and speed, which are clearly marked.
>
>>>A million circuits is peanuts these days. Even a K6-2 has 9.3 million
>>>transistors in it and the corresponding Celeron (PPGA) has 19 million.
>>>
>>>Athlon XP, Barton core, is 53 million and a Northwood P4 is 55
>>>million.
>>
>>53 million? Ay carumba. Amazing. I remember reading some time ago
>>that cpus were reaching the limits of physics with regards to how
>>densely the circuits were packed on the chip and how much heat and
>>cross-current they would generate (or some such thing). The article
>>said they were testing other materials for building chips. This was
>>back in the early Athlon days I think. Since then, chips have come
>>quite a long way, so I guess that article was a little off.
>
>Yes, I've seen articles talking like that too. They're always in the
>context of 'what we know' and what current techniques are, though, and
>people have a tendency to come up with new creative ideas <g>.
>
>If you go WAY back to the germanium transistor days it was known that
>silicon transistors were theoretically possible but you just couldn't
>make them because it was impossible to get silicon pure enough for the
>physics to work. Well, that is, until Texas Instruments figured out
>how to make silicon that pure. And to add even more humor to the
>story, they used a process that was known to not be good enough, and
>by several orders of magnitude. hehe Just that no one had thought to
>repeat the unsatisfactory process and, as it turned out, it improved
>exponentially on each pass.
>
>Makes one wonder who had the balls to ask "what happens if we do this
>known-not-to-work useless process twice?" And why anyone listened.
It must have been someone who actually washed their own dishes. "You
know, when I wash oatmeal off a bowl, sometimes there's some stuck on
after the first wash. Sometimes I have to wash it again."
As far as people coming up with creative ideas to further technology,
that's really true as well. It has been amazing how much faster and
better computers have become even from 10 years ago.
Of course, the Telstar's Pong game was the cat's pajamas back in the 70s.
We also had an Apple computer with these stupid "paddles": black knobs
you'd turn, with a little red button on them. That was the big advance
of the Apple II computer: from Pong to Breakout. ;-)
It's surprising how long PCs went without someone developing a decent
sound card.
>
>
>>>For those of us who remember discrete transistors it's staggering.
>>>Imagine, a transistor with a failure rate of one per million years
>>>sounds rather decent but that would mean a K6-2 failure every 5 weeks,
>>>or so!
>>
>>I never even considered that. One per million years. So, the
>>circuits on a cpu are much more resilient than that? Must be. I
>>don't hear a lot of stories about cpus wearing out under normal
>>operating conditions... only under overclocking. Of course, most
>>processors reach their limits of usefulness before they wear out.
>
>Yep. Now, when you get to a detailed analysis of where failures come
>from it turns out that 'packaging' and 'mechanical' things, like the
>little wires connecting the thing to the outside world, are a major
>source of failures and, of course, on an integrated circuit the
>connections are internal so some of that improvement comes
>'naturally'.
>
>But a lot is purity of the materials (see silicon transistors) and how
>the dope layers are deposited and so on.
>
>It's not a trivial thing, though. People laugh at how weak old
>computers were but even more of a problem was keeping them running. It
>was a red letter, banner headlines, day when 'large scale' computers
>(vacuum tube) reached the reliability point that you could repair them
>faster than they broke down. LOL That meant you could run it during
>the day, put an all night maintenance crew to repairing it, and then
>run again the following day! My goodness, what will they think of
>next?
Wow. Amazing. Think how it would be if we had to repair our
computers every night. People get peeved when they have to reboot
after installing new software.
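Speaking of failures, out of curiosity I went back and checked the
arithmetic on your one-per-million-years example. A quick sketch,
assuming failures are independent so the chip's time-between-failures is
just the per-transistor figure divided by the transistor count:

# Back-of-the-envelope check of the "K6-2 failure every ~5 weeks" figure.
# Assumes independent failures: chip MTBF = per-transistor MTBF / transistor count.
transistors = 9_300_000                  # K6-2 transistor count (from the post above)
mtbf_per_transistor_years = 1_000_000    # one failure per million years, per transistor

chip_mtbf_years = mtbf_per_transistor_years / transistors
print(chip_mtbf_years * 52)              # -> about 5.6 weeks between failures

Sure enough, it comes out to roughly five and a half weeks. Staggering
is right.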
I've never seen a computer read those paper programming cards, but
when I think of those cards, I think of a Simpsons episode where Apu
is showing the Simpsons the boxes of cards he used to write a program
that got him his Computer Science degree at Cal Tech (Calcutta Tech).
Bart grabs a random card and asks, "What does this one do?"
Crestfallen, Apu takes the entire box of cards and empties it into the
garbage, since the sequence of cards has been disturbed, ruining his
program.
>
>That is one reason why no one in the 1950/60s was really all that
>worried about megalomaniac computers taking over the world. Who the
>hell would repair them?
>
>My favorite, though, is the computer that was afraid of the dark:
>turn the lights off and it would go nuts. Apparently the light
>provided some 'extra' stimulation to either the little neon indicator
>lights or the vacuum tubes and without it the circuits would misfire.
That's a funny story too. <Chuckle>
>
>
>>>If you watch it closely, yes. But a system can appear 'locked up' and
>>>still be consuming power and a processor can consume power when it's
>>>'crazy'.
>>>
>>>That's how my experiment got away from me. I didn't notice it was
>>>locked up, as it was an extended test, and it cooked itself to death
>>>even though, for all appearances, it was 'doing nothing'.
>>>
>>>Point is, you can't count on it going back to 'cool' just because the
>>>system 'locks up'. It isn't 'fail safe'.
>>
>>Good to know. Those freeze ups are kind of scary in that light.
>
>Not really. It isn't that the lock up itself is a heat problem. It's
>mistakenly thinking one can leave it powered up like that. And it's
>not like you have to catch it in 5 seconds. Just don't leave it that
>way, which no one looking at it would do anyway.
Oh, all right. That's not so bad then. Just read a book within sight
of the monitor while it runs its tests. Or, like I was reading last
night, a computer supply catalogue. Reading that made me think
perhaps the rate of technological advance in chip speed is slowing
a little. It seems like Intel has been using the P4 platform for a
long time, and my 1-1/2-year-old Barton 2500+ is still running very
well. I suppose if I were a major gamer, I might be looking at
upgrading, but I'm still very happy with its performance.
I suppose there will come a day when computers are sufficient for most
users' needs and upgrading will slow. However, maybe this is one of
those predictions that will sound ridiculous in a few years, like
"Someday, computers will fill only a single room."
>
>> My
>>MSI board with the 2500+ has some monitoring equipment on it. I
>>eventually turned it off because it kept giving an alarm on my case
>>fan speed. That system is running completely at spec, so I wasn't
>>concerned about over-stressing it, and giving processing power
>>to a monitoring system that I didn't think was all that reliable
>>wasn't that compelling. Perhaps it was giving a warning because my
>>case fan is of quite large diameter, and likely spins slower than a
>>smaller fan would.
>
>Very possible. But that should be an adjustable parameter too.
Yes, that might be the case. I suppose at the time, I had other more
pressing needs, or just wasn't that interested in the problem. The
constant alarm was kind of unnerving, so I turned it off. Ignorance
was bliss, and I suppose it still is.
>
>
>>Actually, I'm a little embarrassed to say, I failed to set the clock
>>speed properly in my first 6 or 8 months of operation. It was way
>>underclocked, since it was assembled by the computer store, and either
>>they didn't set the multiplier, or I had to do so much flashing to get
>>the built-in network card to work that it reset the multiplier to
>>factory default.
>
>Flash probably did it. Some will reset to default on a set number of
>failed power-up POSTs too.
OK, I'll have to remember to check it after every flash. Actually,
I just confirmed it's running at the right speed, since I can't
remember if I've flashed it since I set the speed properly.
>
>>It's kind of humorous in hindsight how poorly the mainboard operated
>>fresh out of the box. They were shipping product with major operating
>>flaws and with no documentation on how to make the networking work.
>>I can't imagine figuring it out on my own, without the help of the MSI
>>forum. That is, with help from other users, not from MSI themselves.
>
>Well, the above story about early computers will put it in some
>perspective then. <g>
lol. Yeah, that's for certain. It's great that once a machine is
running, rarely do I have to open the side panel to see what's going
on in there. This is evidenced by the amount of dust that accumulates
inside.
>
>>
>>Of course, I was lucky that was a second computer, not my first ever,
>>or else I would have had no way of connecting to the internet to see
>>how to make it work. I think the problem had to do with a MAC
>>address. The user has to manually enter the MAC address into BIOS.
>>Usually, to find the MAC address, the user has to use a mirror to see
>>the little sticker on the mainboard hidden close to the edge of the
>>case.
>>
>>It's a good thing I like puzzles. A tinkerer will find joy in weird
>>situations like this where someone else will just find frustration.
>
>Yes. Well, as they say, computers are not toasters <g>. There's no other
>'appliance' that requires anywhere near the 'smarts' to operate.
I'm glad that tinkering with computers doesn't require anywhere near the
number of tools that a mechanic (they call them Automobile Technicians now
around here) needs to fix a vehicle.
gene