warning: the bottom contains two tl;dr sections. In the first I ramble about monitors/lightgun accessories, and in the second, perhaps more relevant to the article, I go on about different CPU design philosophies. (read: why things like GPUs or the Cell aren't automatically better for everything)
[citation][nom]shadow187[/nom]I mean, if we get an 8,000mhz processor, give each core 4 threads! (Though I do not know if this is possible (I don't know how hyperthreading works) ).[/citation]
Well, you seem to show a phenomenal lack of understanding of how CPUs work on a few fronts. The most obvious thing you miss is that clock speed on its own means nothing; the record for top stock clock speed is STILL held by the Pentium 4. (specifically, the mostly-similar 571, 670, and 672 models, all at 3.8 GHz) Yet when it comes to even single-core performance, a much-lower-clocked CPU from ANY of the "Core" series (Core, Core 2, Core i3, i5, or i7) will utterly butcher it. That alone proves clock rate is secondary to how much actually gets done per clock cycle.
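To put that in concrete terms, here's a trivial back-of-the-envelope calculation in C; the IPC figures are made-up illustrations of the idea, not benchmark results:
[code]/* Back-of-the-envelope: why clock speed alone means nothing.
 * The IPC numbers below are illustrative guesses, NOT measured values. */
#include <stdio.h>

int main(void)
{
    /* effective throughput (instructions/sec) = IPC x clock rate */
    double p4_clock   = 3.8e9;  /* Pentium 4 571/670/672 at stock 3.8 GHz */
    double p4_ipc     = 1.0;    /* assumed: long pipeline, frequent stalls */
    double core_clock = 2.4e9;  /* a much-lower-clocked "Core"-series part */
    double core_ipc   = 2.0;    /* assumed: wider issue, better prediction */

    printf("Pentium 4: %.1f G instructions/s\n", p4_clock * p4_ipc / 1e9);
    printf("Core:      %.1f G instructions/s\n", core_clock * core_ipc / 1e9);
    return 0;
}[/code]
The lower-clocked chip comes out ahead purely because it gets more done each cycle.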
As for Hyper-Threading, giving more than 2 virtual threads per physical core will yield ever-diminishing returns; aside from the more complicated issues of needing precise scheduling and branch prediction to make sure they aren't squandered, there's the simple matter that there's only so much instruction pipeline to go around.
[citation][nom]brett1042002[/nom]Duck Hunt[/citation]
I could've SWORN it was Tetris. I'm having difficulty getting 60fps with my current rig.
[citation][nom]shin0bi272[/nom]so anyone taking bets as to how long it will be before we see 8 core desktops? 6 core xeons came out 2 years ago this july IIRC and we just got 6 core desktops (*and they only cost your first born son).[/citation]
Well, since there really isn't much use for tons and tons of cores on a desktop, there isn't as much of a push to keep desktops "up-to-date." So you can probably extrapolate that line of thought yourself and conclude that it'll be a year or two before Intel rolls out an 8-core CPU for desktops. (similar to how AMD already has a twelve-core Opteron, but it relies on a two-die-in-one-package design which I doubt they'll EVER use for desktop CPUs)
[citation][nom]touchdowntexas13[/nom]Speaking of duck hunt, I recently tried to play duck hunt on my old nintendo only to find out that the game does not work with LCD screens...It was a bit of a disappointment, but I guess I shoulda known.[/citation]
Yeah, LCDs can't produce the sort of bright, well-timed flash the Zapper expects from the screen. Of course, you can always cheat, aim it point-blank at an incandescent light bulb, and never miss.
[citation][nom]BigBurn[/nom]At any given moment, there is only 1 pixel that is lighten on a CRT (big old) television. But each of them get lighten up many times per second so the human eye doesn't sees it. BUT the gun can, and when you shoot, it measure how long untill it detects light coming from TV (remember, only one pixel is making light on the screen) and if it takes X seconds it can determine that you aimed the Y pixel.Thats basicly how it worked.[/citation]
That's actually a tad incorrect; with CRTs, the light is emitted by phosphors, which keep glowing for quite a while (a few milliseconds) after being struck by the electron beam; in a good-quality monitor, generally right up until they're due to be hit by the next frame's scan.
As for the Zapper, it can't time things precisely enough to spot the exact position of the electron beam; there's too much latency involved, plus non-graphics programming only runs on a frame-by-frame basis. Instead, the game causes a bright flash on the screen for each object; if there are multiple valid targets, it flashes them one at a time, in sequence, and the gun checks for bright light during each flash to decide whether to register a "hit." Hence how you can fake it out by pointing the gun at a bright light source, or by pressing it up against the screen; in either case it detects bright light at the moment it checks, and so registers a hit. (in multi-target scenes, it would always register a hit on the "first" target in the sequence)
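If you want that spelled out, here's a rough sketch of the sequential-flash logic in C; every function name and the one-flash-per-frame pacing are my own hypothetical stand-ins, NOT the actual NES routine:
[code]/* Rough sketch of the Zapper's one-flash-per-target hit check.
 * All functions below are hypothetical stand-ins, not real NES code. */

extern void draw_white_box_for_target(int target); /* assumed: flashes one target */
extern void wait_for_next_frame(void);             /* assumed: waits one video frame */
extern int  zapper_sees_light(void);               /* assumed: 1 if the photodiode sees bright light */

/* Returns the index of the target that registered a "hit", or -1 for a miss. */
int check_zapper_hit(int num_targets)
{
    for (int t = 0; t < num_targets; t++) {
        draw_white_box_for_target(t);   /* flash the targets one at a time, in sequence */
        wait_for_next_frame();
        if (zapper_sees_light())
            return t;   /* bright light during this flash counts as hitting target t */
    }
    return -1;          /* no flash detected: a miss */
}

/* Pointing the gun at a lamp (or pressing it to the screen) means
 * zapper_sees_light() is true on the very first flash, so the "first"
 * target in the sequence always registers as hit. */[/code]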
And as another aside, LCDs DO scan in a similar, linear manner to CRTs, since changing every pixel simultaneously would require a stupidly wide (and hence expensive) connection. Instead, the panel updates the color/brightness of each pixel in sequence, both to stay compatible with CRT-style video signals and to keep things simpler and cheaper. The difference is that a CRT's brightness/color fades with time, while an LCD pixel holds its value until it's changed.
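A toy sketch of that "update in sequence, then hold" behavior (the buffer layout and sizes are made up; real panel controllers are far more involved):
[code]/* Toy illustration of an LCD-style refresh: pixels are written one
 * scanline at a time, top to bottom, and then simply HOLD their value
 * until the next frame overwrites them. Sizes here are made up. */
#include <string.h>

#define ROWS 1080
#define COLS 1920

static unsigned int panel[ROWS][COLS];   /* what each pixel is currently showing */

void refresh_frame(const unsigned int frame[ROWS][COLS])
{
    for (int row = 0; row < ROWS; row++)   /* same linear scan order a CRT uses */
        memcpy(panel[row], frame[row], sizeof(panel[row]));
    /* no fading afterwards: each value persists until it's rewritten */
}[/code]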
[citation][nom]nebun[/nom]why can't they make a chip with over 200 cores the way a modern GPU is made?[/citation]
As others have already mentioned SOMEWHAT, those "cores" aren't the same. In fact, they aren't "cores" at all. Rather, they're much more analogous to the SPEs found in, say, the PS3's Cell Broadband Engine.
A full "CPU core" is a fully independent physical CPU, with not just its own ALU, but instruction decoder/handler, (with support for a LARGE array of instructions; a modern x86 has HUNDREDS of 'em!) as well as other types of processing unit; IIRC, a modern x86 CPU core (starting with the Pentium III) has 2 ALU pipelines, 1 floating-point arithmetic unit, as well as a single 4-wide SIMD (aka Vector Multiple Data) unit. Plus, the CPU core has other "non-processing" circuitry such as schedulers and branch predictors, to allow it to most efficiently run the code given to it.
A CUDA core, stream processor, SPE, or whatever you wish to call it is, in fact, far less than a "core"; it's roughly equivalent to ONE of those processing units within a core. Typically it's a 4-wide SIMD unit, since those offer the most "math density," though some vary, such as ATI's stream processors, which consist of 5 floating-point units loosely lashed together. In all these cases, the sub-processor has nothing to tell it WHAT to run; it's just the math unit that does the running. It needs a master unit to receive, decode, and dispatch the program's instructions, predict branches, and manage/schedule the flow of work so every part stays constantly busy.
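For a concrete picture of what ONE such 4-wide SIMD unit does, here's a tiny C example using SSE intrinsics; the setup around it is mine, the point is the single instruction doing four additions at once:
[code]/* One 4-wide SIMD "processing unit" in action: a single SSE instruction
 * adds four floats at once. Everything AROUND the math -- deciding which
 * instruction runs, and when -- is the job of the core's front end;
 * a GPU "core" is basically just the math part. */
#include <xmmintrin.h>
#include <stdio.h>

int main(void)
{
    __m128 a   = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b   = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
    __m128 sum = _mm_add_ps(a, b);   /* four additions in ONE instruction */

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
    return 0;
}[/code]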
With the tasks these co-processors are made for (graphics, physics, decoding hi-def movies) it's fairly easy to keep them fed, since structurally those are simple programs: very, VERY long loops crunching tons of numbers with the exact same instructions. Keeping the sub-processors busy is therefore easy work that a relatively small dispatch unit can handle while directing dozens or hundreds of them. Actual program logic, though, is far more complex, so if such a design tried to run it, the vast majority of the units would lie dormant, because the controller simply couldn't feed instructions fast enough to keep them occupied: instead of each instruction covering thousands (or millions) of chunks of data, it would only cover a tiny handful. (this is also why it's effectively impossible for a PS3 game to actually use the full CPU; the Cell's full capabilities were never really meant for game logic, they were meant for things like playing Blu-ray)
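To illustrate the difference, here are two made-up C functions: the first is the kind of long, uniform loop these co-processors thrive on, while the second is "program logic," where each step depends on data computed the step before:
[code]/* Two very different workloads (purely illustrative). */

/* easy to spread across hundreds of math units: same op, huge array */
void scale_samples(float *samples, int n, float gain)
{
    for (int i = 0; i < n; i++)
        samples[i] *= gain;          /* same operation, millions of times over */
}

/* hard to spread out: each iteration branches on the previous result,
 * so wide math units would mostly sit idle waiting for decisions */
int walk_decision_tree(const int *choices, int n)
{
    int node = 0;
    while (node < n) {
        if (choices[node] > 0)       /* which node comes next depends on */
            node = 2 * node + 1;     /* data we only just looked at */
        else
            node = 2 * node + 2;
    }
    return node;
}[/code]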
Full, dedicated CPU cores like those in x86 chips take the opposite approach: a LOT (or even most) of the circuitry is dedicated to feeding the sub-processing units. That's why a full CPU appears to have far fewer "cores" than a GPU, and why desktop CPUs post far lower raw math figures. (under 100 gigaflops vs. hundreds of gigaflops or several teraflops) Of course, many future design plans look to "combine" these two architectural ideas to get the best of both worlds: a few full-featured CPU cores, plus a built-in SIMD co-processor to chew through huge stretches of math.