AMD Fusion: How It Started, Where It’s Going, And What It Means


Guest

Guest
I will look forward to the OpenCL benchmarks on the AMD and Intel CPUs that have been appearing in the last few months. I hope the OEMs, laptop and desktop, will begin to offer better driver support than they have been! It does not matter how good the GPU/CPU hardware advances are: if the driver software is not there from the beginning, and is never maintained with proper and timely updates, the most advanced hardware in the world will not make any difference!
 
[citation][nom]WakeUpCall[/nom]I will look forward to the OpenCL benchmarks on the AMD and Intel CPUs that have been appearing in the last few months. I hope the OEMs, laptop and desktop, will begin to offer better driver support than they have been! It does not matter how good the GPU/CPU hardware advances are: if the driver software is not there from the beginning, and is never maintained with proper and timely updates, the most advanced hardware in the world will not make any difference![/citation]

Although it took several months to get an excellent driver, AMD did more or less consistently release a driver every month in that time period, so it was being updated well.
 

oj88

Honorable
May 23, 2012
91
0
10,630
Want to gamble on getting a $400 AMD product for free? Buy 100 shares of AMD stock now (about $420), and buy a Trinity APU in October (or any AMD product, anytime). Both purchases help push AMD's stock price up, which could easily climb from the current $4.20 to $8. The flip side is that if everything goes wrong, your investment could lose half its value. Whether to gamble or not depends on your confidence in AMD after reading this article.
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810
[citation][nom]WakeUpCall[/nom]I will look forward to the OpenCL benchmarks on the AMD and Intel CPUs that have been appearing in the last few months. I hope the OEMs, laptop and desktop, will begin to offer better driver support than they have been! It does not matter how good the GPU/CPU hardware advances are: if the driver software is not there from the beginning, and is never maintained with proper and timely updates, the most advanced hardware in the world will not make any difference![/citation]

In some benchmarks, the AMD OpenCL driver running on an Intel CPU is much better than Intel's native OpenCL driver for its own CPUs. But AMD generally has poorer drivers than Nvidia.
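If you want to check which OpenCL runtime a benchmark is actually using, something like this quick sketch (it assumes the third-party pyopencl package and at least one OpenCL runtime installed) lists the platforms and devices on a system:

[code]
# Lists installed OpenCL platforms (AMD APP, Intel OpenCL, Nvidia CUDA, ...)
# and the devices each one exposes. Requires the third-party pyopencl package.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} (vendor: {platform.vendor})")
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)
        print(f"  Device: {device.name} [{kind}]")
[/code]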

AMD contributes much less than Intel to the open-source/Linux community. They have yet to add anything similar to Intel's SNA acceleration, and their open-source driver work is pathetic. Intel is VERY open with its FOSS drivers and software, and is constantly adding new features to Xorg and GCC.
Nvidia is the worst of the three. It has a zero-FOSS driver policy and provides no documentation on its GPUs or drivers.
 

tomfreak

Distinguished
May 18, 2011
1,334
0
19,280
[citation][nom]blazorthon[/nom]If Intel continues improving their IGP performance by so much per generation, then they might surpass AMD in the next two or three generations if AMD doesn't get similarly great improvements (which is very possible). So far, Trinity is not a similarly great improvement over Llano, although it is a good improvement, especially considering that it kept the same process node.[/citation]At this kind of rate, pretty soon the "awesome" IGP in AMD's APUs will not look so impressive to casual users anymore. I wonder why AMD took their time in this segment.
I don't think Intel is too foolish to realize this potential. With enough graphics performance, pretty soon most games will start optimizing for Intel GPUs as well, alongside AMD/Nvidia GPUs, and that is when Intel will have the option to jump into the discrete GPU market without any compatibility problems.
 

nibir2011

Distinguished
Aug 28, 2011
131
0
18,680
People are always talking about company fights, but that is absolutely not the case for this article. It is not AMD vs. Intel, or ATI vs. Nvidia, or AMD vs. all of them. AMD is trying to set up a new standard for parallel computing, and most importantly, it is open source. That is what matters. If this HSA technology grows the way Linux did, it will be no time at all before it becomes the most favored tool for programmers.

Most of the comments here do not come from a programmer's outlook, but this is about programming. Optimizing is such a wonderful process that you can make code that ran in five seconds run in one. Remember the command-line computers of the '80s and early '90s? What processors or equipment did they have? Yet ask anyone from that time and he will tell you those machines were fast at what they did. Recently, as CPU speeds kept increasing, we programmers kept getting stung by the accusation that we were becoming lazy (of course not me, LOL). Now that is going to change. Programmers like the challenges and hurdles that hardware puts up, and we have always gotten more performance out of a machine than it was designed for. Don't believe me? Look at your smartphone. Look at Android and iOS: what is the hardware, and what is the performance?

Now let's talk about the APU. It is not about performance but solely about money. For example, for $350 you can buy a Core i7 and actually keep it for three years if you care about performance. On the other hand, I will buy a $100 APU and have decent performance; after two years I will buy another one for $100 and again have decent performance, and after another two years I will buy yet another one. So that's six years of decent performance with $50 to spare, and the later APUs are very likely to outperform your $350 high-performance CPU. Doesn't that sound cool? To me it does.
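The arithmetic behind that, as a quick Python sketch (the prices are just the round numbers assumed above, not real quotes):

[code]
# Back-of-the-envelope version of the cost comparison above.
i7_price = 350            # one high-end CPU, bought once
apu_price = 100           # one budget APU
upgrade_interval = 2      # years between APU upgrades
period = 6                # years considered

apu_purchases = period // upgrade_interval    # 3 APUs over 6 years
apu_total = apu_purchases * apu_price         # $300

print(f"i7 path:  ${i7_price} once")
print(f"APU path: {apu_purchases} x ${apu_price} = ${apu_total} over {period} years")
print(f"Spared:   ${i7_price - apu_total}")
[/code]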

However, being an Intel fan, I can say Intel is not going to give up like that. But in the APU market AMD is far ahead of Intel, and Intel needs to do something revolutionary.
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980


+1'ed your post. Though I don't totally agree about the APU "solely" being about money/cost.
The concept itself has many implications like lower latencies between the CPU, GPU, and maybe other components; having to cool one APU instead of a CPU and GPU (those compact closed loop liquid coolers meant to be placed on CPU's for example would cool the whole APU); having more space and airflow in your system; and there are more I bet.

Money is one thing as well though, which is apparent, and I won't contest you there. :)
 

nibir2011

Distinguished
Aug 28, 2011
131
0
18,680
[citation][nom]army_ant7[/nom]+1'ed your post. Though I don't totally agree about the APU "solely" being about money/cost.The concept itself has many implications like lower latencies between the CPU, GPU, and maybe other components; having to cool one APU instead of a CPU and GPU (those compact closed loop liquid coolers meant to be placed on CPU's for example would cool the whole APU); having more space and airflow in your system; and there are more I bet.Money is one thing as well though, which is apparent, and I won't contest you there. :)[/citation]

People do not actually think about all of that, but I bet they will have to very soon. Global warming is a real thing, and efficiency is being pushed into each and every part of every industry. Just a few years ago people did not think about miles per gallon when buying a car, but now they must. We are no longer happy with a generator that is 70% efficient; we want one that is 100% efficient.

Although that is never going to happen, we can assume we can reach up to 95% efficiency in every aspect of our lives, and the sooner we get there the better. But people do not want to hear about that; almost all of them think "life is short, so live it up."

We need to understand these things.
 
[citation][nom]nibir2011[/nom]People do not actually think about all of that, but I bet they will have to very soon. Global warming is a real thing, and efficiency is being pushed into each and every part of every industry. Just a few years ago people did not think about miles per gallon when buying a car, but now they must. We are no longer happy with a generator that is 70% efficient; we want one that is 100% efficient. Although that is never going to happen, we can assume we can reach up to 95% efficiency in every aspect of our lives, and the sooner we get there the better. But people do not want to hear about that; almost all of them think "life is short, so live it up." We need to understand these things.[/citation]

A single, small volcanic eruption puts out more pollutants than the human race has over the last several hundred years. Global warming is happening, but it's simply part of a global climate cycle. Heck, when the dinosaurs were around, the average global temperature was far higher than it is these days, yet there (probably) weren't any humans around way back then to affect it.

I have nothing against efficiency, I'm all for it. However, don't think that we have a major impact on global warming. Now things such as acid rain, that we probably do have a major impact on. Poisoning our environment in other ways, that we have an impact on. Bringing species to other habitats where they can become invasive species, again, that we have an impact on. However, global warming simply isn't something that we can affect greatly. The Earth has its climate cycles and there probably isn't much that we can do about them. That sort of thing is a lot harder to impact than the livability of an area in other ways, such as poisons.
 
[citation][nom]nibir2011[/nom]However, being an Intel fan, I can say Intel is not going to give up like that. But in the APU market AMD is far ahead of Intel, and Intel needs to do something revolutionary.[/citation]

Why must Intel do something revolutionary? As has been said, if they can keep up their rate of improvement and improve their drivers over the same time-scale (two to three years is not a short amount of time, especially considering where Intel is right now and how quickly Intel can get things done), then they can surpass AMD simply by doing what they have been doing since they released Nehalem CPUs with IGPs. AMD is ahead of Intel, but AMD is the one that needs to do something revolutionary to stay ahead. AMD is at a point where memory bandwidth hurts their IGP performance far more than the IGP itself does. Heck, they could probably take even an A6 and beat a Radeon 6670 quite easily if not for there simply not being enough memory bandwidth.

On the other hand, Intel has a lot of L3 cache that the GPU can access, and much more efficient memory controllers that extract far more bandwidth at the same memory frequency and channel count. Intel can deal with much faster IGPs without memory bandwidth holding them back nearly as much as it holds back AMD. AMD probably knows this, yet they gave Trinity an even less efficient memory controller than Llano's: it needs a higher frequency just to beat the older controller. Only once AMD fixes its memory bandwidth bottleneck can it stay ahead of Intel's advances much more easily.
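To put rough numbers on that bandwidth gap, here is a small Python sketch using approximate spec-sheet figures (dual-channel DDR3-1866 feeding the APU's IGP, and a GDDR5 Radeon HD 6670 as the discrete comparison, are my assumptions):

[code]
# Rough peak-bandwidth comparison behind the "IGPs are starved for
# memory bandwidth" argument. Figures are approximate spec-sheet numbers.
def peak_bandwidth_gbs(transfer_rate_mtps, bus_width_bits, channels=1):
    # GB/s = transfers per second * bytes per transfer * channels
    return transfer_rate_mtps * 1e6 * (bus_width_bits / 8) * channels / 1e9

ddr3_dual = peak_bandwidth_gbs(1866, 64, channels=2)   # ~29.9 GB/s for the IGP
gddr5_6670 = peak_bandwidth_gbs(4000, 128)             # ~64 GB/s on the discrete card

print(f"Dual-channel DDR3-1866: {ddr3_dual:5.1f} GB/s")
print(f"HD 6670-class GDDR5:    {gddr5_6670:5.1f} GB/s")
[/code]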

Another thing to consider, if we delve deeper into this, is whether or not Intel will continue its rate of improvement. It relies on increasing the die area allocated to the GPU, so it might play out like CPUs did: they advanced very quickly many years ago, but their rate of improvement slowed substantially once power consumption peaked at between 100 and 150 watts. Unlike AMD with its APUs, I'm not sure Intel is willing to keep going until the chip is roughly half CPU and half GPU, at least not until the GPU matters far more to overall system performance, beyond gaming and professional software, than it does today.
 

baracubra

Distinguished
Jan 24, 2008
312
0
18,790
A truly amazing read! Even though I'm not too familiar with the coding and software behind all of this tech, I was hooked and read through everything! I never knew the underlying reasons for the big merger, and this article definitely proved hugely enlightening.

Thanks for the great journalism, William.
Hope to read more like this in the future.
 

technoholic

Distinguished
Feb 27, 2008
800
0
19,160
Very good article, thanks for this. The APU approach is (and has always been) really interesting to me. It takes a lot of effort to make an APU, and it is really not certain what will happen in the future; this is a kind of gamble for AMD. Will the APU be accepted by hardware manufacturers and used in their products? And what will happen on the software side? Questions like these come to mind. As for GPGPU, that is another issue entirely. If they can make an APU balance the workload between CPU and GPU and do GPGPU work effectively, and if software makers can write good applications for GPGPU, I think our computers will be very fast. I don't have very advanced knowledge of computers, GPUs, etc., but I think the future is about GPU computing.
 
[citation][nom]pcfan86[/nom]Intel will dominate with 14nm: http://www.pcarchitectures.com.[/citation]

It's just another process node... Besides, we have no idea as to where AMD will be at that time, so your prediction is just a blind shot into the dark.
 

pcfan86

Honorable
Aug 18, 2012
7
0
10,510
blazorthon: Intel will be able to pack 5B transistors in a single die at 14nm, the equivalent of 4-core CPU + high end GPU of today. No one else will approach 14nm for quite some time.
 
[citation][nom]pcfan86[/nom]blazorthon: Intel will be able to pack 5B transistors in a single die at 14nm, the equivalent of 4-core CPU + high end GPU of today. No one else will approach 14nm for quite some time.[/citation]

Intel won't be at 14nm until 2014 at the earliest. Do you really think that no one can get anywhere near it in that time frame?

EDIT: You also forget that unless the die shrinks, meaning fewer transistors than your 5B number, the heat generated by so many transistors could be huge. Die shrinks don't drop power consumption by as much as they increase transistor density. Being able to fit that many transistors in a single die would be almost meaningless if frequencies had to be kept so low that it wouldn't make much difference. This is part of why performance doesn't scale linearly with linear increases in transistor count.

Also, the first 14nm CPUs from Intel would almost definitely just be a die-shrink of Haswell, so it won't "dominate," especially if Intel uses crap paste between the IHS and the CPU die again so that it won't even beat Haswell.
 

pcfan86

Honorable
Aug 18, 2012
7
0
10,510
AMD is no longer in control of its process technology. Everyone else is still catching up to 22nm. Not that there aren't problems to be solved, but smaller transistors usually mean less dissipation per transistor; otherwise we wouldn't have dies with 1B+ transistors today, with much, much lower dissipation per transistor than the 100M-transistor chips from a few years ago.
 
[citation][nom]pcfan86[/nom]AMD is no longer in control of its process technology. Everyone else is still catching up to 22nm. Not that there aren't problems to be solved, but smaller transistors usually means less dissipation per transistor, otherwise we wouldn't have dies with 1B+ transistors today, with much much lower dissipation per transistor than the 100M transistor chips from a few years ago.[/citation]

AMD doesn't need to be in control of its process technology. GF might fail, but TSMC could hit 14nm by 2014 or 2015. Heck, GF has been working with Samsung lately, so they might manage 14nm by 2014 or 2015 too. Also, let's look at just Tahiti and Cayman among the GPUs: Cayman is larger than Tahiti, yet it uses less power. Smaller process nodes use less power per transistor, but not so much less that the die can stay the same size and still use the same amount of power at the same clock frequency. That is why with smaller processes the die size is also usually shrunk, especially with CPUs.

For example, a 1B-transistor chip on 22nm might take somewhat more than half as much area as the same chip would on 32nm, but it does not use half as much power at the same clock frequency. IB and SB confirm this: IB packs its transistors about twice as densely as SB, yet it still uses (even with the IGPs disabled, for accuracy) only about 30% less power in the best case. A more extreme die shrink, such as from 22nm FinFET to 14nm, would need the die size to shrink dramatically, or else the heat generated would increase dramatically.

You don't seem to understand some basic things about die shrinks. They get less effective at boosting power efficiency with every shrink. That is why each shrink is becoming more extreme, with additional features added (22nm to 14nm is a huge drop compared to the last few). For example, 32nm to 22nm is roughly a 50% drop in die area for the same chip, but Intel also needed the FinFET advantages just to make the shrink produce a significant difference in power consumption. That is probably why the next die shrink, to 14nm, is so aggressive: Intel doesn't have another such trick to amplify the effect of the shrink, so they're making the shrink itself more extreme than usual.

Smaller process nodes mean less power consumption, but they simply don't drop power consumption as much as they increase transistor density. This makes heat density worse, not better. What this means is that a die that is 50% smaller (such as Ivy) but consumes only 30% less power must generate more heat per mm² of die area. A shrink to something like 14nm would amplify this effect dramatically, because it is a severe shrink and the die is so small.
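In numbers, using the rough Ivy-vs-Sandy figures from this post (a small Python sketch with normalized values, not measured data):

[code]
# Heat-density illustration using the approximate figures quoted above:
# the die shrinks to ~50% of the area but power only drops ~30%.
sb_area, sb_power = 1.0, 1.0     # Sandy Bridge, normalized
ib_area, ib_power = 0.5, 0.7     # Ivy Bridge, relative to SB

sb_density = sb_power / sb_area
ib_density = ib_power / ib_area

print(f"Relative heat density, SB: {sb_density:.2f}")
print(f"Relative heat density, IB: {ib_density:.2f} (~{ib_density / sb_density:.1f}x higher)")
[/code]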

Also, process nodes are not the transistor size; they are the distance between the transistors. Transistors in chips vary in size (even on the same process technology) and their size depends on the type of transistor that they are.

One of Intel's engineers might be able to describe this about their CPUs better than I can, but I'm sure this is accurate, given my own experience with the subject and the overwhelming evidence supporting it.
 

pcfan86

Honorable
Aug 18, 2012
7
0
10,510
E7-8870: 513 mm2 (vs. 216 mm2 for i7-2700K), 32nm, 2.6B transistors. Yes, it runs at 2.4 GHz @ 130W, but it's half-way to 5B transistors. Contrast this with a P4 HT 530: 112 mm2, 90nm, 125M transistors, 85W. Heat density is an issue, but a die that's 22% the size of the larger die while consuming 65% of the power means heat density isn't as big an issue as you claim. And of course going from 22nm to 14nm won't mean an automatic 2.5x increase in the number of transistors. A lot of factors are involved and a larger die size may be required. But even with the added cost of a larger die size, a CPU+GPU combination that provides the same performance as a $300 CPU + $300 video card leaves a lot of room for Intel to provide an all-in-one solution.
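Working out the watts per square millimetre from the figures quoted above (a quick Python sketch; these are TDPs, not measured power draw):

[code]
# Power-per-area comparison using the TDP and die-size figures in this post.
chips = {
    "Xeon E7-8870 (32nm)":  (130, 513),   # (TDP in W, die area in mm^2)
    "Pentium 4 530 (90nm)": (85, 112),
}

for name, (tdp_w, area_mm2) in chips.items():
    print(f"{name}: {tdp_w / area_mm2:.2f} W/mm^2")
[/code]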

Intel is usually a few years ahead in process technology, and they will very likely get to 14nm before anyone else. I doubt that Intel would spend billions on a new process technology just to shrink the die, and not increase the transistor count.
 
[citation][nom]pcfan86[/nom]E7-8870: 513 mm2 (vs. 216 mm2 for i7-2700K), 32nm, 2.6B transistors. Yes, it runs at 2.4 GHz @ 130W, but it's half-way to 5B transistors. Contrast this with a P4 HT 530: 112 mm2, 90nm, 125M transistors, 85W. Heat density is an issue, but a die that's 22% the size of the larger die while consuming 65% of the power means heat density isn't as big an issue as you claim. And of course going from 22nm to 14nm won't mean an automatic 2.5x increase in the number of transistors. A lot of factors are involved and a larger die size may be required. But even with the added cost of a larger die size, a CPU+GPU combination that provides the same performance as a $300 CPU + $300 video card leaves a lot of room for Intel to provide an all-in-one solution.Intel is usually a few years ahead in process technology, and they will very likely get to 14nm before anyone else. I doubt that Intel would spend billions on a new process technology just to shrink the die, and not increase the transistor count.[/citation]

Xeons like that also get excellent binning, and the lower frequency, along with that binning, is the main reason for the lower power consumption. Power consumption drops much faster than linearly when frequency and voltage are reduced together (dynamic power scales roughly with voltage squared times frequency). That's why you don't see frequencies skyrocketing like they did about a decade ago. Also, heat density is only just beginning to become a serious problem; at 14nm it could get severe. Again, these issues all get worse with each die shrink.
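As a rough illustration of that scaling (a Python sketch of the standard dynamic-power approximation P ~ C·V²·f; the 20% reductions are arbitrary example values):

[code]
# Why lower clocks and voltages cut power so effectively: dynamic power
# scales roughly with V^2 * f. The 20% reductions here are just examples.
def relative_dynamic_power(v_scale, f_scale):
    return (v_scale ** 2) * f_scale

p = relative_dynamic_power(v_scale=0.8, f_scale=0.8)
print(f"Relative dynamic power: {p:.2f} (~{(1 - p) * 100:.0f}% lower)")
[/code]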

I didn't say that Intel won't use more transistors, only that they are unlikely to use 5B transistors. They might use that many in their top server CPUs, but that'd be it.

Intel isn't interested in joining the high-end gaming graphics market, and since their drivers are still absolute crap (or worse in some cases), they aren't ready even if they had the hardware for it (which they don't). They couldn't fit a $300 CPU plus a $300 GPU on the same die, for many reasons.
One, it would suck over two hundred watts of power. That would be simply unavoidable.
Two, it would have a HUGE memory bandwidth limitation unless a lot of extra die space went to a complex set of additional memory controllers for the GPU, cutting into the die space available for the processing hardware.
Three, Intel can't afford to leave AMD with almost no market anywhere. If AMD tanks, then Intel will take a beating from anti-trust lawsuits.
Four, even if it could fit the massive memory bus, where would all of those memory chips go? There isn't enough space around the CPU socket for that many well-spaced memory chips.
I could go on and on. Being ahead in process technology, although true, doesn't let Intel avoid these limitations and more; I simply stopped counting them off because I think these are reasons enough.
 

pcfan86

Honorable
Aug 18, 2012
7
0
10,510
Xeons are usually derived from the latest mainstream architecture. Today's mainstream chip outperforms a Xeon of a few years ago. Next generation Xeon will be more powerful than the current generation mainstream chip.

AMD Radeon HD 7970: 365 mm2, 4.31B transistors, 28nm, 250W. But for some reason, Intel won't go above 100W or a couple of billion transistors. Sounds a bit sketchy.

Intel already makes an LGA 2011 socket with support for eight memory slots. Memory channels might be an issue, but that's not a difficult problem to solve. Too expensive? So was every other technology before it went mainstream.

Intel could argue that the real competition is ARM + PowerVR, but no, they wouldn't do that. Intel is a nice company that wants everyone to get along...
 
[citation][nom]pcfan86[/nom]Xeons are usually derived from the latest mainstream architecture. Today's mainstream chip outperforms a Xeon of a few years ago. Next generation Xeon will be more powerful than the current generation mainstream chip.AMD Radeon HD 7970: 365 mm2, 4.31B transistors, 28nm, 250W. But for some reason, Intel won't go above 100W or a couple of billion transistors. Sounds a bit sketchy.Intel already makes an LGA2011 socket with support for 8 memory sockets. Memory channels might be an issue, but that not a difficult problem to solve. Too expensive? So was every other technology before it went mainstream.Intel could argue that the real competition is ARM + PowerVR, but no, they wouldn't do that. Intel is a nice company that wants everyone to get along...[/citation]

You don't understand. The memory needed for the IGP is GDDR5, and it needs many more channels than the mere four channels of DDR3 that SB-E provides. SB-E provides several times too little bandwidth even with all four channels populated with high-end DDR3 modules. Furthermore, I said there need to be spots for chips, not sockets for modules: GDDR5 doesn't come in modules because graphics cards need far too many pins for that, and it would increase the latency of the connection.

The next-generation Xeon is an x86 CPU, NOT a GPU. It is extremely different. It can't be said to be more powerful without saying more powerful at what; in total throughput, the 7970 has FAR more GFLOPS to go around in highly parallel applications than any CPU made in the next several years will be able to match.
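For a sense of the scale of that gap, here is a Python sketch of theoretical peak single-precision throughput (not benchmark results; the per-clock figures and the quad-core AVX CPU comparison are my own rough assumptions):

[code]
# Theoretical peak single-precision throughput, not benchmark results.
def peak_gflops(units, flops_per_unit_per_clock, clock_ghz):
    return units * flops_per_unit_per_clock * clock_ghz

# Radeon HD 7970: 2048 stream processors, 1 FMA (2 FLOPs) each per clock, 0.925 GHz
hd7970 = peak_gflops(2048, 2, 0.925)    # ~3789 GFLOPS

# Quad-core AVX CPU: roughly 16 SP FLOPs per core per clock at 3.5 GHz
cpu = peak_gflops(4, 16, 3.5)           # ~224 GFLOPS

print(f"HD 7970 peak SP: {hd7970:6.0f} GFLOPS")
print(f"Quad-core CPU:   {cpu:6.0f} GFLOPS (~{hd7970 / cpu:.0f}x gap)")
[/code]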

It took SB more than a year after launching in the consumer space before it hit the Xeons just a few months ago. We've had SB since what, early 2011, maybe late 2010? We might have had it in desktops and laptops (even in a few tablets, nettops, and maybe even high-end netbooks) for over a year and a half before it hit the Xeon market. We've had IB for a few months or so now and it hasn't hit the Xeons yet. Xeons obviously do not necessarily get new architectures and die shrinks sooner than consumer processors do.

Expense isn't even the problem with the IGP's memory; it's whether or not that many chips can even fit in the area around the CPU, and I don't think they can. Furthermore, you're confusing TDP with actual power consumption, and you fail to realize that CPU coolers simply can't handle much more than 100-150W very easily. They aren't as big as graphics coolers, which have much larger heatsinks and fans. There are few coolers that can handle a load of much more than 150-200W; if you want that much power consumption, you overclock, plain and simple. Intel also wants to be a green company, hence they lower the power consumption of their CPUs with almost every generation while increasing performance.

No offense intended, but you have shown that you don't know what you're talking about. I'm willing to continue explaining if you want me to do so, but pretty much everything that you have claimed is highly unreasonable or outright wrong and shows ignorance of the topic. You also don't seem to understand some of what I've already said to you.
 

pcfan86

Honorable
Aug 18, 2012
7
0
10,510
You're absolutely right. There are insurmountable limits that will:

1. Prevent a 5B transistor mainstream chip from ever being developed. Some law of physics or something. No, wait, it's legal limitations. Or Intel just not wanting to. But somehow, it just won't happen.

2. Prevent a high-end GPU + CPU combination because of memory bandwidth limits. This will never be overcome. Ever. No memory technology will ever allow it, not even DDR4 or Micron's Hybrid Memory Cube.

3. Prevent a CPU+GPU from going above 150W. Not green, even though today's CPU and graphics cards together far exceed 200W. Graphics cards with a GPU TDP of 250W must be able to dissipate 250W without the junction temperature going above critical, but that doesn't matter either, because CPU coolers will never be able to handle more than 150W. Ever.

Now I understand. Apply today's limitations to tomorrow's technology and all will be good.
 
LOL they act like they re-invented the PC, when Intel has had integrated graphics for generations now. Meanwhile after all that the APU is junk. The graphics are good but processing power is slow and shitty.
 
[citation][nom]pcfan86[/nom]You're absolutely right. There are insurmountable limits that will:1. Prevent a 5B transistor mainstream chip from ever being developed. Some law of physics or something. No, wait, it's legal limitations. Or Intel just not wanting to. But somehow, it just won't happen.2. Prevent a high-end GPU + CPU combination because of memory bandwidth limits. This will never be overcome. Ever. No memory technology will ever allow it, not even DDR4 or Micron's Hybrid Memory Cube.3. Prevent a CPU+GPU from going above 150W. Not green, even though today's CPU and graphics cards together far exceed 200W. Graphics cards with a GPU TDP of 250W must be able to dissipate 250W without the junction temperature going above critical, but that doesn't matter either, because CPU coolers will never be able to handle more than 150W. Ever.Now I understand. Apply today's limitations to tomorrow's technology and all will be good.[/citation]

Again, I don't think you understand what I'm telling you. A 5B-transistor CPU+GPU chip CAN be built, but it would NOT be practical or reasonable. Even at 14nm it would be large and would generate a lot of heat that is difficult to cool. Just look at modern air coolers for CPUs today: some of them are huge. We might need even larger ones, or some more revolutionary change, to make this sort of chip practical to cool. Graphics cards have more cooling area to work with and are also generally louder than CPU coolers handling the same amount of heat. It's a different form factor, and that means different amounts of area and different designs.

DDR4 would not be enough within practical limits. Look at it this way: it would take at least six, maybe eight very high-frequency DDR4 modules to have enough bandwidth. Even then, the latency would be somewhat high, and DDR4, although not as bad as DDR3, is still a system memory designed for CPUs. It is not optimized for graphics, and performance could suffer somewhat, although I'm not sure to what extent it would be brought down (probably not hugely, but considerably).

DDR4 does not compare to GDDR5 in performance as a graphics memory. It simply doesn't. It is a huge improvement over DDR3, but it's not nearly as good as GDDR5.

A custom water loop might be able to keep such a machine cool, but even that would not be simple. The memory would probably need cooling too, and there's still the problem of placing the GDDR5 chips: some would need to be on the underside of the board to fit, but there's little room for cooling them there, so that might be out of the question.

The CPU/GPU chip would be incredibly complex. This stuff that you ask for is simply not practical.

Furthermore, you continue to mistake TDP for power consumption. They are not the same. A graphics card will rarely hit its TDP, especially with AMD's cards and Nvidia's Kepler-based cards. A cooler hardly ever has to dissipate the full TDP of a card, and if it did for an extended period, it might even overheat, depending on the card. Furthermore, cards such as the 7970 and other large cards have a lot of space for their heatsinks and fans: they can be over 10 or 11 inches long and several inches wide. They are also often designed with more noise per watt in mind, and with concepts such as vapor chambers and larger IHSs for better efficiency. Cooling a graphics card is not the same as cooling a CPU.

Furthermore, yes, Intel reduces power consumption of their consumer CPUs almost every generation. They do this for considerably good reasons. Graphics cards have also been mostly reducing in power consumed at a given performance bracket ever since they peaked a few years back, although not necessarily consistently. Electronics companies are trying to reduce the amount of power that we consume. Is that really a bad thing? Heck, even SSDs, hard drives, motherboards, and also even memory are coming down and down in power consumption these days. It isn't just processors.

I'm not applying a point of view that will be outdated by then; I'm telling it to you straight. Maybe GDDR5 will be replaced by a DDR4 derivative and such a system would be able to function with fewer chips, but the point would still stand: it is simply not practical, and it isn't how Intel does business anyway. Intel isn't a huge graphics player. They have some extremely low-end IGP offerings and, if you go back far enough, a discrete card or two. Intel has since given up on the high-end graphics market. Maybe they will enter it some day, but they would need to fix many things before then. I've probably said this before, but I'll say it anyway: Intel's drivers suck. They really suck. They're worse than AMD/ATi's were several years ago. They are only just beginning to get drivers written properly, and even worse, Intel manages them poorly in other ways by letting OEMs customize them with crap versions that never get updated.

Sorry, but Intel isn't going to make a CPU+high end GPU setup on a single chip. It simply wouldn't make sense.
 