Talking Heads: Motherboard Manager Edition, Q4'10, Part 1


tom thumb

Distinguished
Apr 18, 2010
181
0
18,690
There are of course benefits to merging the CPU and GPU, but I think whatever such technology exists now is still miles away from being competitive.

... a part of me suspects that a CPU/GPU that is as good as an i7 + HD5870 would consume a lot more power... perhaps they'll need to up the 8-pin CPU power connector to a 12-pin or something.
 

Draven35

Distinguished
Nov 7, 2008
806
0
19,010
... a part of me suspects that a CPU/GPU that is as good as an i7 + HD5870 would consume a lot more power... perhaps they'll need to up the 8-pin CPU power connector to a 12-pin or something.

Oh, definitely - or there would be a CPU and a GPU power connector side by side...

and a heat sink that takes up half the case
 

pandemonium_ctp

Distinguished
Dec 31, 2009
105
0
18,690
Great article. I love the insight and professionalism you guys have shown here; very respectable.

While I'm satisfied that discrete GPU production isn't leaving us anytime soon (being someone who likes options and personal customization), I don't see it lasting forever. Convenience is increasingly the driving force behind business development, and to say that onboard GPUs won't eventually dominate market share is pretty naive. Everything - particularly where technology is involved - migrates toward a more integrated, localized form. On a long enough timeline, everything consolidates toward efficiency. It's reasonable and logical.

Case in point: Wal-mart versus the specialty store. Specialty stores host products that the common Joe doesn't even realize exist and that aren't in much demand. Eventually the specialty store will dwindle to an eclectic collector's phase of existence, much like the phonograph, Beta tapes, and the newspaper.

I'm certainly not saying there aren't obstacles that IGPs have to overcome. I'm merely pointing out that it's a matter of time before those problems are solved and "custom PC building" will consist of nothing more than choosing the options you want on one fully integrated chipboard the size of a pen.

[citation][nom]TA152H[/nom]I'm kind of confused why you guys are jumping on 64-bit code not being common. There's no point for most applications, unless you like taking more memory and running slower. 32-bit code is denser, and therefore improves cache hit rates, and helps other apps have higher cache hit rates. Unless you need more memory, or are adding numbers more than over 2 billion, there's absolutely no point in it. 8-bit to 16-bit was huge, since adding over 128 is pretty common. 16-bit to 32-bit was huge, because segments were a pain in the neck, and 32-bit mode essentially removed that. Plus, adding over 32K isn't that uncommon. 64-bit mode adds some registers, and things like that, but even with that, often times is slower than 32-bit coding. SSE and SSE2 would be better comparisons. Four years after they were introduced, they had pretty good support. It's hard to imagine discrete graphic cards lasting indefinitely. They will more likely go the way of the math co-processor, but not in the near future. Low latency should make a big difference, but I would guess it might not happen unless Intel introduces a uniform instruction set, or basically adds it to the processor/GPU complex, for graphics cards, which would allow for greater compiler efficiency, and stronger integration. I'm a little surprised they haven't attempted to, but that would leave NVIDIA out in the cold, and maybe there are non-technical reasons they haven't done that yet.[/citation]

You people really need to stop voting down comments that are relevant and well thought out just because they may foretell the demise of a company you blindly follow. :/
 

aldaia

Distinguished
Oct 22, 2010
535
23
18,995
In the eighties I worked on the design of a PCB for a scalable multiprocessor. Each PCB had 4 CPUs (actually a CPU was two chips, IPU + FPU, and a chip contained no more than 300,000 transistors), the corresponding level 1 caches (at that time processors had no integrated cache at all), and 16 MB of DRAM. That PCB was pushing the limits of technology. Who would have thought back then that today we could buy a single chip with 4 cores and about 16 MB of integrated cache and fit it in our home computer? Some try to imagine a CPU+GPU with today's technology, but cannot imagine the possibilities of technology one (or more) decades from now.
 

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310
It's a shame that you didn't list which companies' spokesmen claimed that the discrete video card was on its way out. It'd have been nice to know which motherboard makers to avoid.

Those who liken the GPU to the x87 FPU are missing several important points. The FPU, unlike the GPU, was never really stand-alone: it was basically another segment of the CPU, much like how processors were split across multiple ICs before Intel integrated all the components together in the 4004. The truth is, the CPU and GPU are both exercises in engineering trade-offs: they take a finite amount of resources (price, silicon area, etc.) and make choices on how best to trade off one aspect for another. Sure, if you raise your resource budget (by spending more money, getting a die shrink, etc.) you could then make a design that incorporates the best of both... but guess what? The specialized, trade-off designs could likewise be improved too!

Also, all these "people made wrong predictions in the past about computers!" arguments are actually pretty hollow; the same sort of argument tends to be popular with anyone advocating unaccepted fringe theories. "Oh, people didn't believe Galileo in the 1600s! So my fringe theory will automatically don that mantle of martyrdom without any justification!" No. Just, no. Instead, try educating yourself about computer architecture. We CAN get predictions right: I distinctly recall a man making a prediction that's held true for half a century... His name was Gordon Moore, co-founder of Intel.

Here are some REAL reasons you can't simply have just CPUs:

- First off, it's clear that the enthusiast crowd cannot be satisfied. Already, we're up to 1 CPU plus 2 or more GPUs; that's way too much silicon to put in one package, especially with the trend of chips getting progressively hotter and more power-hungry. And no, very obviously we AREN'T gonna just wait (waste) two fabrication generations to get the same tech to fit on one chip; competition wouldn't allow it. If AMD decided to halt advancing so that it could cram a quad-core Bulldozer and a pair of 6870s onto a single chip in 2013, guess what Nvidia and Intel would be doing in the meantime?

- Secondly, the CPU and GPU fit different needs, which come in different proportions. Crysis, for all its impressiveness, doesn't need a quad-core; a "fusion" chip for it would best be a 2/3-core with tons of GPU hardware. On the flip side, StarCraft II doesn't need as much graphics hardware, but could certainly use more CPU horsepower. Hardly anyone plays just one game, so a single fixed CPU-to-GPU ratio baked into one chip just won't cut it!

- Third, what is there to gain, performance-wise, by putting the GPU on the die? Nothing. With the FPU and cache, the CPU saw the benefit of reduced latency; that meant less time spent waiting to fetch that crucial operand, or waiting for an instruction to finish executing, which are the two chief bottlenecks any CPU ALWAYS sees. But with the GPU, once the CPU has finished setting up the scene and sent the data over, it's DONE. And it's a LOT of data, sent in a stream: so what if sending it over PCI-e takes a few extra clock cycles to get started? The GPU is going to be occupied for millions of 'em per frame (rough numbers below). Remember that PCI-e was a DOWNGRADE from AGP in terms of latency, but PCI-e won out because of higher bandwidth.
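
To put rough numbers on that point (a quick back-of-the-envelope sketch; the GPU clock, frame rate, and setup latency below are assumed figures, not measurements of any actual card):

[code]
# Back-of-the-envelope check on why a little bus setup latency doesn't matter.
# All figures are illustrative assumptions (roughly 2010-era hardware), not specs.

gpu_clock_hz = 700e6   # assumed GPU core clock, ~700 MHz class
frame_rate   = 60.0    # target frames per second
bus_setup_s  = 1e-6    # assumed extra setup latency per transfer (1 microsecond)

frame_time_s     = 1.0 / frame_rate
cycles_per_frame = gpu_clock_hz * frame_time_s
latency_cycles   = gpu_clock_hz * bus_setup_s

print(f"GPU cycles per frame: {cycles_per_frame:,.0f}")                  # ~11.7 million
print(f"cycles lost to setup: {latency_cycles:,.0f}")                    # ~700
print(f"overhead per frame:   {latency_cycles / cycles_per_frame:.4%}")  # ~0.006%
[/code]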

Now, here are perhaps the REAL major reasons: the memory architectures needed for a CPU and a GPU are wildly different; you can't satisfactorily meet the requirements for both. There's a lot to be said here:

- Without resorting to overclocking, 19.2 GB/sec is the highest memory bandwidth an enthusiast can get on a PC, using an Intel Core i7 CPU with triple-channel DDR3. Make it an AMD or an i5, and that drops to 12.8 GB/sec (since those are dual-channel setups). A modern high-end card from either Nvidia or ATi will readily pass 100 GB/sec. Obviously, the 2/3-channel DDR3 setup for main memory won't cut it for graphics (the quick arithmetic is sketched after this list).

- Sure, you could swap in GDDR5 for main memory... but it would bring very bad latency that drags the CPU down, killing the point. Main CPU memory typically comes with a CAS time in the high single digits of nanoseconds; much past 10 ns and the CPU will scream for mercy. Meanwhile, GDDR5 setups routinely seem to pass 20-30 ns without the GPU so much as shrugging; texture fetches and block writes are big, predictable access patterns. Hence, latency matters far less than raw bandwidth for a GPU... but it's the opposite for a CPU: 12.8 GB/sec is overkill, as much of main memory's time is spent waiting to switch to the right bank anyway.

- Yes, this latency disadvantage can be mitigated by good programming and careful handling of the CPU's caches. However, that tuning is best done on a per-model basis, which is impractical on PCs, where you have two different manufacturers with a dozen lines and hundreds of models (and if we had to factor in GPU permutations, that would rise to thousands). Hence, this is only done to a high degree on consoles, where there's only ever one CPU to tweak for (an Xbox programmer knows that all 360s use the same Xenon CPU).

- Lastly comes the matter of price. Sure, the solution for more bandwidth is more channels and faster speeds. However, DIMMs only provide 64 bits of interface per slot, while high-end cards run 256-512-bit buses, so that's 4 modules MINIMUM. Further, if you needed GDDR5 for graphics, guess what? You'd have to waste money on GDDR5 capacity the non-graphics part doesn't need; a machine that needs 4 GB for the CPU and 2 GB for graphics would have to buy 6 GB (or 8 GB for a 4-channel setup!) of GDDR5, even though only 2 GB of it needs to be that fast. Sure, ultra-enthusiasts wouldn't mind, but what about those $500-750 gaming machines? Having to buy more expensive main memory makes an affordable gaming rig impossible (anyone remember Rambus RIMMs?).

- An alternative would be to have two memory controllers, one for DDR3 and the other for GDDR5; but that adds further complexity, more pins, and a larger package to the CPU, plus added complexity and cost for the motherboard... Guess what? You've basically just made the card-less design as complicated and expensive as simply having a discrete graphics card.
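
For anyone who wants to check the arithmetic behind the figures in this list, here's a quick sketch. The speeds and timings are illustrative assumptions for roughly 2010-era parts, not the specs of any particular product.

[code]
# Rough arithmetic behind the bandwidth, latency, and DIMM-count figures above.
# Speeds and timings are illustrative assumptions, not any specific product's specs.

def ddr_bandwidth_gbs(transfers_per_sec, channels, bus_bits=64):
    """Peak theoretical bandwidth: channels x bus width x transfer rate."""
    return channels * (bus_bits / 8) * transfers_per_sec / 1e9

def cas_time_ns(cas_cycles, transfers_per_sec):
    """CAS latency in ns: CL cycles divided by the memory clock (half the data rate)."""
    return cas_cycles / (transfers_per_sec / 2) * 1e9

# DDR3-800 at stock speeds, as in the i7 / i5-and-AMD figures above
print(ddr_bandwidth_gbs(800e6, channels=3))                # 19.2 GB/s (triple channel)
print(ddr_bandwidth_gbs(800e6, channels=2))                # 12.8 GB/s (dual channel)

# A 256-bit GDDR5 card at an assumed 4.8 GT/s effective rate
print(ddr_bandwidth_gbs(4.8e9, channels=1, bus_bits=256))  # 153.6 GB/s

# DDR3 CAS latency in nanoseconds for an assumed enthusiast DDR3-1600 CL7 kit
print(cas_time_ns(7, 1600e6))                              # 8.75 ns

# Minimum DIMM count to match a wide GPU memory bus (64 bits per DIMM slot)
print(256 // 64, 512 // 64)                                # 4 and 8 modules
[/code]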

Overall, unlike the discrete sound card, the discrete graphics card has a secure place in the future. The former failed to offer more than a flat benefit in the face of exponential CPU growth. But the graphics card isn't staying flat; it's growing exponentially as well, potentially even faster than the CPU. So, to sum up what the graphics card is not like:

- The graphics card is not like the FPU or cache. CPU->GPU latency isn't an issue, so there's nothing to gain by moving it on-die.
- The graphics card is not like the sound card or network card. Graphics keep getting exponentially more demanding over time; audio and networking demands weren't really growing, and eventually became an inconsequential task for the CPU.
- The graphics card is not like the physics card. The physics card idea wasn't a good one to begin with. (hence why the PhysX card never succeeded)
- The graphics card isn't just an overglorified extra CPU. The CPU and GPU sit at opposite extremes not just of processor design, but of memory architecture as well.
- The technology will not "improve" to make things make sense. Sure, in 4 years an AMD fusion processor can compare to today's high-end discrete graphics card. But it'd be laughable compared to a discrete graphics card 4 years down the road.

Above all, perhaps, is this lesson: Moore's law alone can't let you catch up with someone if they're using Moore's law too.
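
A trivial numeric illustration of that last lesson (the starting performance figures and the doubling cadence are arbitrary assumptions):

[code]
# If the integrated part and the discrete card both ride the same exponential,
# the ratio between them never closes. Starting values are arbitrary.

integrated = 1.0   # relative performance of an integrated GPU today
discrete   = 4.0   # relative performance of a discrete card today

for year in range(0, 9, 2):
    print(f"year {year}: integrated={integrated:4.0f}  discrete={discrete:4.0f}  "
          f"gap={discrete / integrated:.1f}x")
    integrated *= 2    # both sides double every ~2 years...
    discrete   *= 2    # ...so the 4x gap never shrinks
[/code]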
 

pandemonium_ctp

Distinguished
Dec 31, 2009
105
0
18,690
[citation][nom]pandemonium_ctp[/nom]Great article. I love the insight and professionalism you guys have done here, very respectable. While I'm satisfied with discrete GPU production not leaving us anytime soon (being someone who likes options and personal customization), I don't see it lasting forever. Convenience is certainly the driving development advocate for businesses with increasing prominence, and to say that onboard GPUs won't eventually dominate shares is pretty naive. Everything - particularly when involving technology - migrates toward a localized form. On a long enough timeline, everything localizes towards efficiency. It's reasonable and logical. Case in point: Wal-mart. Stores that host products that common Joe doesn't consider exist, but aren't as in demand. Eventually it will dwindle to an eclectic collector's phase of existence much like that of the phonograph, Beta tapes, and the newspaper. I'm certainly not saying there aren't conflicts that IGPs have to overcome. I'm merely pointing out that it's a matter of time before those problems are solved and "custom PC building" will consist of nothing more than choosing options you want to have on one fully integrated chipboard the size of your pen.
You people really need to stop voting down comments that are relevant and well thought out just because they may portray the demise of a company you will blindly follow.[/citation]

Voted down and squelched by ignorance.
 

WarraWarra

Distinguished
Aug 19, 2007
252
0
18,790
We will likely be stuck with both 32-bit software and low GPGPU utilization in the near future until Nvidia and ATI get their thumbs out of their mouths and start to produce interpreter / proxy software that accepts current poorly coded / 32-bit apps and runs them in a 64-bit / GPGPU environment.

i.e.: the software says "I need 16 threads," Win32 says no way in hell, but the GPGPU proxy software says sure and passes this on to the actual hardware, be it CPU only / CPU + GPU / GPGPU with the CPU as memory controller / a cluster, etc.

This would speed up development, as few if any companies will spend money on anything other than 32-bit. 64-bit = double or triple the code = double / triple the cost. This all comes from poor planning and lazy / incompetent management at software companies with limited experience / vision to use alternatives like Linux / Windows cross-compilers, etc., to build 32-bit, 64-bit, and GPGPU-optimized code.
If you have it optimized like this, all you need is a simple app to say, okay, we have reached the limit, now scale back to 90% usage of the requested threads.
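
Purely as a sketch of that dispatch policy (hypothetical names and numbers; this is not a real CUDA or ATI Stream interface):

[code]
# Hypothetical sketch of the "proxy" idea above: an app requests N worker
# threads, the proxy grants what the hardware can sustain, and it scales back
# to ~90% of capacity once the limit is reached. Names/numbers are illustrative.

import os

def grant_workers(requested, hardware_limit=None, headroom=0.90):
    """Return how many workers this hypothetical proxy would actually hand out."""
    if hardware_limit is None:
        hardware_limit = os.cpu_count() or 1   # stand-in for CPU+GPU capacity
    ceiling = max(1, int(hardware_limit * headroom))
    return min(requested, ceiling)

print(grant_workers(16))                        # capped by this machine's cores
print(grant_workers(16, hardware_limit=240))    # plenty of room -> 16
print(grant_workers(512, hardware_limit=240))   # over the limit -> 216 (90%)
[/code]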

An extra 100 hours a year for a lazy but intelligent company can result in 5 years' worth of money saved.

Unfortunately, too many ego-tripping "we are the boss, we know everything" types ("please help me switch on my computer, or feed my computer's mouse some cheese") run most companies, and they see others with money-saving ideas like this as opposition instead of paying them to help save that idiot management's butt in the long run.

It was such a major issue a few months ago to run VMware / VirtualBox on ATI Stream / Nvidia CUDA, or at least to run 4 to 8 emulated PCs this way on 4x 4890s / the Nvidia equivalent in CrossFire / SLI, that no one bothered to develop for it, and now they are crying about slow uptake.

Even today, VirtualBox / VMware are too lazy to get with it. At least several other software companies are now CUDA / ATI Stream capable, finally.
 

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310
[citation][nom]pandemonium_ctp[/nom]Voted down and squelched by ignorance.[/citation]
I'd hazard a guess that it's likely due to your own lack of thinking through and understanding why the discrete video card has stuck around for 30+ years now, and will indefinitely. I'd recommend reading the comment above (my huge one) to get an idea why.

Also, your Wal-Mart example is HORRIBLE. They've been selling PCs for at least 10+ years... That hardly killed the discrete video card, now did it? And just because an "average Joe" (or an enthusiast's guess of what one is) doesn't know something exists doesn't mean it'll go out of style. See, for instance, all the computers in a car controlling engine functions, etc. It outlines a basic principle: what the "idiot Joe" knows doesn't determine which products survive; what matters is what will be competitive. And as gaming, both on PCs and consoles, remains a massive industry worth billions and billions a year, there will be competition to deliver the best.

So again, if AMD decides to "wait a couple generations" for the silicon technology to "catch up" so we can have a quad-core with a 6870 in 2014, it'll have to face a 12-core from Intel paired with a 2,048-CUDA-core GTX 880. The GPU business is an arms race, and integrated graphics, be it on the CPU or on the northbridge, is simply a way of throwing up a white flag.
 

pandemonium_ctp

Distinguished
Dec 31, 2009
105
0
18,690
[citation][nom]nottheking[/nom]I'd hazard a guess that it's likely due to your own lack of proper thinking-through and understanding of why the discrete video card has stuck around for 30+ years now, and will for indefinitely. I'd recommend reading a comment up (my huge comment) to get an idea why. Also, your Wal-Mart example is HORRIBLE. They've been selling PCs for at least 10+ years... Hardly killed the discrete video card, now has it? And just because an "average Joe" (or an enthusiast's guess of what one is) doesn't know something exists doesn't mean it'll go out of style. See, for instance, all the computers in a car, controlling engine functions, etc. It outlines a basic principle: what the "idiot Joe" knows doesn't matter for what products will continue. It's a matter of what will be competitive. And as gaming, both on PCs and consoles, remains a massive industry worth billions and billions a year, there will be competition to deliver the best. So again, if AMD decides to "wait a couple generations" for the silicon technology to "catch up" so we can have a quad-core with a 6870 in 2014, it'll have to face a 12-core from Intel paired with a 2,048-CUDA-core GTX 880. GPU business is an arms race, and integrated graphics, be it on the CPU or on the Northbridge, is simply a way of throwing a white flag.[/citation]

My Wal-mart example was an analogy. I was not speaking at all about the fact that Wal-mart sells computers, nor about the discrete card industry going "out of style". That was you reading it incorrectly. In my analogy, Wal-mart = the GPGPU and the specialized stores = the separate CPU/GPU architecture.

I read your book of a response, and while you make some good points, I think you (and others) fail to realize that there will come a point in the near future when hardware is so far ahead of software (at the rate it's going now) that your comparison of discrete GPUs/CPUs versus the combined GPGPU becomes invalid, because there will be no functional purpose for a discrete card in a PC, or even in anything pocket-sized.

Just as the Earth is round and flat-landers still exist [another analogy, let me break this one down for you before you get all upset and post another fervent response: the Earth is round = those thinking outside the box; flat-landers still exist = those stuck in the box], with anything technological, or any idea at even the most remedial level, the only way for it to gain traction is for someone to believe in its purpose and run with it. All the technical boundaries that GPGPU architecture may encounter would then be solved, so the whole "well, this doesn't work because of blah, blah, blah" doesn't matter.

Again, as I said in my first post:
Convenience is increasingly the driving force behind business development, and to say that onboard GPUs won't eventually dominate market share is pretty naive. Everything - particularly where technology is involved - migrates toward a more integrated, localized form. On a long enough timeline, everything consolidates toward efficiency. It's reasonable and logical.
 

Draven35

Distinguished
Nov 7, 2008
806
0
19,010
Ten years ago, the whole "hardware speeding ahead of software/we have too much CPU/GPU power" statement was made as well, and manufacturers (particularly Intel) were predicting most people would be fine with integrated graphics. At the same time, some graphics card manufacturers were predicting that the average computer user would never need hardware transform and lighting support so we didn't need to buy those fancy GeForce cards.

Wal-Mart stores with larger electronics departments actually sell graphics cards. Smaller ones used to, but by and large Wal-Mart has greatly reduced its electronics departments in every location where there is a Best Buy nearby, and ceded that market to Best Buy. The systems they sell, with the exception of the very cheapest, are capable of taking a discrete card.
 

aldaia

Distinguished
Oct 22, 2010
535
23
18,995
[citation][nom]nottheking[/nom] [/citation]
Overall, a great write-up, very insightful on the problems of integrating the GPU and CPU, considering current or short-term technology.

And that is the key: you are thinking short term. You talk about DDR3 vs. GDDR5, but that is the present, not the future. DDR4 is expected to debut in 2 or 3 years, but what will we have in 10 or 20 years? Probably something not even named *DDR*. I clearly see the demise of the discrete GPU in the distant future; I don't know if it's a question of 10, 20, 50, or 80 years, but it's going to happen.

I'm well aware of Moore's law. The fact that it has not failed yet doesn't mean it never will. I tell you it WILL fail too; otherwise there would come a (not so distant) day when a single chip has more transistors than there are elementary particles in the whole universe.
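
For what it's worth, here is the rough arithmetic behind that (assuming a few billion transistors per chip today, a doubling every two years, and the commonly cited rough estimate of about 10^80 elementary particles in the observable universe):

[code]
# When would an always-doubling transistor count exceed the number of elementary
# particles in the observable universe? All inputs are rough assumptions.

import math

transistors_now = 3e9    # a large chip circa 2010
particles       = 1e80   # commonly cited rough estimate for the observable universe
doubling_years  = 2.0    # assumed Moore's-law cadence

doublings = math.log2(particles / transistors_now)
print(f"doublings needed: {doublings:.0f}")                      # ~234
print(f"years at that pace: {doublings * doubling_years:.0f}")   # ~469, i.e. centuries, not forever
[/code]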

Above all, perhaps, is this lesson: Moore's law alone can't let you catch up with someone if they're using Moore's law too.
Unless economics allows you to progress faster than they do.

Don't forget that economics is a driving force much more powerful than Moore's law. By economics I mean economies of scale, profit margins, and sales volume. The first generation of integrated GPUs may not even threaten discrete GPUs; they may just kill the on-board GPU. But future generations may progressively dent the sub-$30 and sub-$50 markets, and even more as time goes on.
GPU chip makers don't make much profit selling high-end GPUs (low volume); most of the profit is made on low- and mid-range GPUs. Once integrated CPU+GPU chips start to dent profits in the low and mid market, it may no longer be profitable to design and build high-performance GPUs for the mass market, though they may still be produced at a much higher cost.

There will be a day when a $110 CPU+GPU chip provides performance equivalent to a $100 CPU plus a $100 GPU. What will the enthusiasts have? Well, they could buy one of those short-run $3,000 discrete GPUs offering performance that, at normal volumes, would have cost about $200.

That doesn't mean a company like Nvidia will go out of business; in fact, Nvidia has already taken the right steps toward integrating the CPU and GPU in the form of Tegra.

Above all, perhaps, is this lesson: Nothing lasts forever, not even Moore's law.
 