AMD CPU speculation... and expert conjecture


noob2222

Your argument sounds like it's OK for TSMC and GloFo to move to a smaller node, but if Intel does it, it's somehow wrong. Plus, what Intel has already done by moving to 22nm FinFET alone, the consortium of IBM, GloFo and Samsung has not yet reached.
Intel already has working 14nm chips, which will come in both low-frequency versions for mobile and high-frequency versions for DT. The high-frequency chips are much harder to research and fabricate.
The consortium is working only on mobile chips, which are typically low frequency.
Funny, but I haven't seen anything confirming the existence of a DT Broadwell CPU. Mobile and soldered-down packaging, sure. The big question is going to be: how fast can it get before melting?

Die shrinks are starting to fail at providing performance. The main focus of shrinks now seems to be geared solely toward mobile platforms. Shrinks are not reducing power at high performance. Haswell was designed to reduce power, but somehow wound up at an 84W TDP versus Ivy's 77W at the same full speed of 3.5GHz.

Where did all this effort to reduce power go? Mobile and ultraportable only, apparently. Increasing power draw and then going for a die shrink isn't going to work. Die shrinks are not reducing thermal density, they are INCREASING it.
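A back-of-envelope sketch of that thermal density point (the core power and die areas below are assumed round numbers for illustration, not measured values):

[code]
#include <cstdio>

int main() {
    // Assumed figures: ~60 W for the CPU-core portion of the die, with each
    // full node shrink cutting that core area to roughly 55% of the old size.
    const double core_power_w  = 60.0;
    const double area_32nm_mm2 = 100.0;
    const double area_22nm_mm2 = area_32nm_mm2 * 0.55;
    const double area_14nm_mm2 = area_22nm_mm2 * 0.55;

    // If total power stays flat while area shrinks, W/mm2 climbs every node.
    printf("32nm: %.2f W/mm2\n", core_power_w / area_32nm_mm2);  // 0.60
    printf("22nm: %.2f W/mm2\n", core_power_w / area_22nm_mm2);  // 1.09
    printf("14nm: %.2f W/mm2\n", core_power_w / area_14nm_mm2);  // 1.98
    return 0;
}
[/code]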
 


I wasn't referring to x86 performance, I was referring to power. At 3.3GHz, the clock Thuban and Zambezi shipped at, Vishera consumes considerably less power at both idle and load, so by refining its arch AMD managed much higher clocks, stable and within the same thermal envelope as its prior parts achieved.

The SR process, or 28nm, should allow extra resources to occupy the die space, which should bring performance gains. Vishera, on the median of synthetics, was 12% faster than Zambezi, and considering the changes made to SR (a new front end, cache and IMC tweaks, and the 30% branch prediction improvement), we could very well see AMD's IPC head upwards.

If you take the history of AMD and Intel, apart from one brief period where Intel failed to achieve the high clocks needed to compensate for deep pipelines, Intel has always had the resources to stay ahead of AMD in x86 and in fabrication, and those are what Intel is riding on, the latter more so. Intel's fabs are the true worth of the company; they are the reason for its wealth and the reason it can push FinFET before anyone else. The more pertinent issue is how a company with 5x AMD's net capital in R&D cannot make a CPU rich with features and distinctly ahead of its competition, notably in gaming performance, since I would assume 80% of the DT market is enthusiast gamers or overclockers. Yet on the median of benches, while the 3570K represents the best value for money across the general spectrum, including other games it is not much more than 5-10% faster at a stretch, and the i7s and Extremes add almost no extra benefit unless it's FPU-heavy titles like Civ 5 and SC2. For me the issues are a) Intel is not giving you enough, and b) Intel's performance advantage is more synthetic, and in general usage it is not that far ahead despite AMD's problems.

My position on Intel is this: if you are just going to push synthetics all day long like it matters in life, then they are fast, but in day-to-day operations my Intels don't seem any faster than my AMDs, nor does my electricity bill amount to any more or less than with my AMDs. Bear in mind I am comparing apples to apples; the 3970X and 990X I have are extremely expensive parts which I use in community grid rigs, although Xeons do the job much faster. The whole crux is that AMD is not the fastest, and apart from maybe two years AMD has never been the fastest, but as for performance: an FX-4300 with CrossFired 7870 LEs was doing a pretty stellar job running Crysis 3, and the 750K FM2 Athlon with a 7970 achieved 72 FPS in a 50-minute BF3 64-man map recording, which is 2-3 FPS less than an 8350/3570/3770/3820 and 7 less than a 3970X. Not bad for these "pathetic chips".

Much of Intel's monopoly on laptops and ultrabooks is down to AMD's non-participation in that market or underwhelming offerings. Fast forward to the Richland, Kabini and Temash parts and you have copious x86 performance with an iGPU years ahead of its competition, plus some interesting dual graphics support, all at a much lower cost. For notebooks these will be more accessible to the end user who cannot afford an i7 with an Nvidia or AMD mobility part, all the while delivering top graphical performance and some potent streaming performance.



 


Last I heard they had suspended work on Broadwell; high clocks met with failures, and the tests at 1.8GHz clocks saw IPC fall off a cliff.

The point I was trying to make, which seems to be misunderstood, is that on DT Intel should keep fabbing at 22nm for a while yet, with a different branch for mobility fabrication; AMD is doing this with ARM and GlobalFoundries. While they will not rush down the die shrinks, they will maximize performance at the die size they are capable of producing.

The other issue is that die shrinks don't equate to colossal performance gains. At 22nm Intel is at best 10-15% faster than AMD in synthetics, but in the real world that number is closer to 10%. Intel's iGPU is around 2-3x slower currently, and this gap is going to widen considerably with Richland and Kaveri, all the while AMD's graphics core is shrinking in size while giving more performance. AMD won't be in a rush to ramp down the CPU die size with any haste.

The last issue with die shrinks: Intel's IPC has been stagnant since SB, while conversely AMD's is gaining rapidly on a bigger die. If this die shrink allows you to throw in more of these shiny FinFETs, then where are they? Or perhaps Intel knows something they are not telling you, or maybe you are correct that x86 has hit the wall and parallelism and heterogeneous computing are the prestige sea liner upon which to voyage until it hits the iceberg.

 

mayankleoboy1


Die shrinks don't provide performance increases.

The main focus of shrinks now seems to be geared solely toward mobile platforms.

1. By mobile, do you mean ARM? If yes, then think about x86 in the '90s: every generation you got 50% improvements. ARM is in the same phase.
2. If you mean smartphone x86, then the current smartphone x86 arch is very old. A smaller node will allow Intel to create a new arch in a smaller power envelope. Eventually, Intel's plan is to have one single architecture from the high-end E-series chips down to smartphone SoCs. Possibly this will happen at 14nm.
3. If by mobile you mean notebook x86, then this has been Intel's focus since the Merom days, because the dominance of mobile computing is inevitable.


Shrinks are not reducing power at high performance.
Because Intel is adding more and more transistors for the iGPU.

Increasing power draw then going for a die shrink isn't going to work. Die shrinks are not reducing thermal density, they are INCREASING it

I am completely ignorant about this. So until there is news about Broadwell processors melting, I am going to think you are a fearmonger and a troll. In case you are proven right, you are welcome to make snide remarks at me.
 

mayankleoboy1



Sony has made a quad-core ARM A9 chip that runs at 2.3GHz. At that frequency, it will draw a tenth of the power of a PD quad core. But would you use that?


The SR process, or 28nm, should allow extra resources to occupy the die space, which should bring performance gains. Vishera, on the median of synthetics, was 12% faster than Zambezi, and considering the changes made to SR (a new front end, cache and IMC tweaks, and the 30% branch prediction improvement), we could very well see AMD's IPC head upwards.

I wouldn't recommend an 8-core PD desktop CPU to anyone. But give me all that improvement you said above, and my money goes to AMD.

Fabrication, those are what Intel is riding on, the latter more so. Intel's fabs are the true worth of the company; they are the reason for its wealth and the reason it can push FinFET before anyone else.
Intel's fab advantage also allows them to fab chips with much less wastage. So much so that the bastards have to artificially maim their i5s and i3s to create market segments. GloFo is always struggling to better its technique. Llano was pretty much a failure because there just weren't enough chips for people to buy.

The more pertinent issue is how a company with 5x AMD's net capital in R&D cannot make a CPU rich with features and distinctly ahead of its competition, notably in gaming performance, since I would assume 80% of the DT market is enthusiast gamers or overclockers. Yet on the median of benches, while the 3570K represents the best value for money across the general spectrum, including other games it is not much more than 5-10% faster at a stretch, and the i7s and Extremes add almost no extra benefit unless it's FPU-heavy titles like Civ 5 and SC2,
At least for games, I would say a good game engine is one that uses the GPU for parallel work as much as possible. I would hate to be CPU-bottlenecked in games. In one year GPU perf is going to increase 50%, but I would be lucky to get a 10% CPU increase. So a game that is bottlenecked by the CPU stays bottlenecked.

There is an interesting test: you can use the Windows Advanced Rasterization Platform (WARP). It executes all Direct3D rendering on the CPU instead of the GPU. Can you do a simple check and see which processor does well?
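For anyone who wants to try that, a minimal sketch of forcing WARP in a Direct3D 11 test app is to request the WARP driver type at device creation (error handling omitted):

[code]
#include <windows.h>
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

int main() {
    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    level;

    // D3D_DRIVER_TYPE_WARP selects the software rasterizer, so the whole
    // Direct3D pipeline runs on the CPU instead of the GPU.
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &device, &level, &context);
    return SUCCEEDED(hr) ? 0 : 1;
}
[/code]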

for me the issues are a) Intel is not giving you enough

If you mean for the money, yes, I agree with you. Lack of competition screws up the market.

Intel's performance advantage is more synthetic, and in general usage it is not that far ahead despite AMD's problems.

"general usage" is different for different people. And playing Crysis3 is not "general usage". General usage is browsing, office apps, music, movies, simple games. I am surprised you feel that much difference here. I myself cant find any "subjective" difference.
Except in using single threaded browsers like Firefox, where raw CPU power is noticable. But then again, you can say FF is a "legacy" application.





Much of Intel's monopoly on laptops and ultrabooks is down to AMD's non-participation in that market or underwhelming offerings. Fast forward to the Richland, Kabini and Temash parts and you have copious x86 performance with an iGPU years ahead of its competition, plus some interesting dual graphics support, all at a much lower cost. For notebooks these will be more accessible to the end user who cannot afford an i7 with an Nvidia or AMD mobility part, all the while delivering top graphical performance and some potent streaming performance.

The GPU is Intel's Achilles heel. If they can't deliver excellent perf in a reasonable TDP, they are out of the smartphone/tablet market (which they mostly are). And it has been their focus for some time now; hence most of the chip's TDP is spent on the iGPU. And I think the Haswell monster is still going to be slower than a Trinity, but the power usage will be lower, so OEMs are more likely to use a Haswell.
AMD has a chance there. But they can't do wonders with a 32/28nm node, because adding a GPU means lots of transistors, and without a node advantage you can't do that.

[Image: logistic (S-shaped) curve]


I suspect die shrinks are like this curve. As you reach smaller and smaller nodes, the TDP headroom starts getting flattened successively.
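For reference, the curve in that image is the standard logistic function,

f(x) = L / (1 + e^(-k(x - x0)))

where early steps along x buy large gains but the output saturates toward the ceiling L; in this analogy x is successive node shrinks and L is the point where shrinking stops buying any more headroom.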
 


Working over the math, I should be OK. The 100W TDP on the 5800K is a bit misleading; the 5700 is only 65W and 200~400MHz slower. The 5800K doesn't actually use 100W, it's rated that high for turbo boosting and overclocking. The 100W 5800K is just a 65W A10-5700 with an unlocked multiplier that's been certified to go as high as 100W. No optical drive, no dGPU, and only the board + memory + SSD. The board itself uses very little power.

http://forums.vr-zone.com/hardware-depot/2719755-itx-ing-fm2-asrock-fm2a85x-itx.html

They got 51W at idle and 142W under max load, and that was with overclocking, an additional 2TB HDD, a DVD-RW, etc.

AMD A10-5800K (4.0GHz overclocked, CPU-NB overclocked @ 2000MHz) cooled with Zalman CNPS8700 NT
ASRock FM2A85X-ITX (BIOS P1.20) (thanks to AsView!!!)
2 x 4GB Samsung Green DDR3 1600 CL11 (overclocked @ 2133MHz, 9-11-10-38-58 2T)
OCZ Vector 256GB SSD (thanks to BanLeong!!!)
Seagate Barracuda 2TB HDD (ST2000DM001-1CH164)
HP DVD Writer 1260T (SATA)
Seasonic M12II 520W
AeroCool QS 200 Lite (Artic Cooling F12 @ intake & exhaust)
Cooler Master AeroGate 1 fan controller (4 channels)

The PSU they used was major overkill.

If I keep it at stock clock and disable turbo I should be well under the power envelope. Going with a picoPSU is important because anything larger requires a PSU fan or a larger case, neither of which will look good in my living room.
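As a rough sanity check on that plan (every per-component wattage below is my own ballpark assumption, not a measurement):

[code]
#include <cstdio>

int main() {
    // Ballpark stock-clock, turbo-off figures for the build described above.
    const double apu_w   = 65.0;  // A10-5700-class APU at its rated TDP
    const double board_w = 10.0;  // mini-ITX board
    const double ram_w   = 4.0;   // 2 x 4GB DDR3
    const double ssd_w   = 3.0;   // SSD under load
    const double fans_w  = 3.0;

    const double load_w = apu_w + board_w + ram_w + ssd_w + fans_w;
    printf("Estimated worst-case load: ~%.0f W\n", load_w);               // ~85 W

    // A 150-160 W picoPSU would then sit at roughly half load in the worst
    // case, and far lower at idle.
    printf("Load on a 160 W picoPSU: ~%.0f%%\n", load_w / 160.0 * 100.0); // ~53%
    return 0;
}
[/code]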
 
Sony has made a quad-core ARM A9 chip that runs at 2.3GHz. At that frequency, it will draw a tenth of the power of a PD quad core. But would you use that?

If it was sufficient for my usage then perhaps, but I am not sure what the point is here.

I wouldn't recommend an 8-core PD desktop CPU to anyone. But give me all that improvement you said above, and my money goes to AMD.

While the FX-8350 and 8320 are marked improvements over their predecessors, I am not in favor of them myself. They are flexible as proficient workbench chips and community grid parts, but they are excessive and underutilized for a day-to-day user or gamer. The FX-6300 and FX-4300 are ample gaming parts which deliver fantastic performance depending on the hardware paired with them. The 8350 is still a good processor, though.

As above, while I find a lot of practicality in an FX 8-core part as a file server or professional workbench, a lot comes down to power usage; the FX-8350 uses the same power as a hex-core Intel Extreme. While its performance is not in question, like the FX-8000 parts the Intel Extreme line to me has very little practicality for most users, though it is effective as a workbench.

AMD's outline for SR focuses on cache and memory; one touted figure was branch misses reduced by 30 or 35%, which is a very big number. There were leaks of 25% general computing improvements across the board, from top to bottom. With that said, if it were correct, AMD would be right up with Intel's mainstream line in terms of IPC. I am sceptical of the number, but even if it turns out to be around 20%, that is a major leap in IPC terms for AMD in what will be a new arch.

Intel's fab advantage also allows them to fab chips with much less wastage. So much so that the bastards have to artificially maim their i5s and i3s to create market segments. GloFo is always struggling to better its technique. Llano was pretty much a failure because there just weren't enough chips for people to buy.

Llano was late, and that was a major reason; it was late like Zambezi, but all of that remains among the last vestiges of Hector Ruiz and his gang of misfits.

Zambezi to Vishera, Llano to Trinity, Bobcat to Kabini: using the same process, AMD achieved a) lower power usage and b) more performance.

At least for games, I would say a good game engine is one that uses the GPU for parallel work as much as possible. I would hate to be CPU-bottlenecked in games. In one year GPU perf is going to increase 50%, but I would be lucky to get a 10% CPU increase. So a game that is bottlenecked by the CPU stays bottlenecked.

There is an interesting test: you can use the Windows Advanced Rasterization Platform (WARP). It executes all Direct3D rendering on the CPU instead of the GPU. Can you do a simple check and see which processor does well?

Skyrim, Civ 5, SC2: three games that have been a struggle for AMD, yet in all of them PD-based arches closed the gap to Intel. All of those games seem to suggest that even Intel gets bottlenecked, so if AMD came up with a god chip, then yes, it would be bottlenecked as well.

That Rasterization Platform is interesting.

"general usage" is different for different people. And playing Crysis3 is not "general usage". General usage is browsing, office apps, music, movies, simple games. I am surprised you feel that much difference here. I myself cant find any "subjective" difference.
Except in using single threaded browsers like Firefox, where raw CPU power is noticable. But then again, you can say FF is a "legacy" application.

General usage to me is defined by the end user. A gamer's general usage is gaming; a benchmarker's is synthetic usage; for office use it is browsing, office-based applications or accounting software, etc.; for a professional it would be image and content creation, rendering, 3D modelling, CAD, Photoshop and so on.

The GPU is Intel's Achilles heel. If they can't deliver excellent perf in a reasonable TDP, they are out of the smartphone/tablet market (which they mostly are). And it has been their focus for some time now; hence most of the chip's TDP is spent on the iGPU. And I think the Haswell monster is still going to be slower than a Trinity, but the power usage will be lower, so OEMs are more likely to use a Haswell.
AMD has a chance there. But they can't do wonders with a 32/28nm node, because adding a GPU means lots of transistors, and without a node advantage you can't do that.

Richland is more efficient than Trinity on the same node, just a more mature process, yet delivers more performance from an iGPU with the same specs as Trinity's Devastator.

We have seen that TSTC has now been adopted by AMD and RCM is yet to be unveiled; I think AMD are wily enough to have figured out back doors to control thermals while maximising performance. AMD are not circumventing the laws of physics here, they are just manipulating them in some way nobody quite understands. One factor is that AMD's iGPU is far more sophisticated than Intel's, who need to drop the die size to free up node space to throw in more power-consuming and, more importantly, space-consuming parts. Yet from what I see of Haswell on DT, the top-end GT part will roughly be around the A8-3850 and 3870K level, which was about 33% slower on average than the HD 7660D, which in turn will be 20% slower than the HD 8670D, and then there is Kaveri speculation of around 55% on top of that (nobody knows what GDDR5 will do, so it could be more). Broadwell had better have an HD 7970, or this baby, as they say, is back, back, back, GONE!

Here is the other converse: AMD will improve its x86 until it starts to pull right up to Intel's exhaust pipe. All releases are seeing big x86 gains, all the while GPU technology is ramping down the node sizes and delivering more graphics potency at lower power thresholds. If AMD continues its progression with graphics along with the incremental IPC gains, that should bear fantastic fruit. For Intel to achieve an AMD-level iGPU they need the more expensive route of microarchitectural engineering. First, Intel have a somewhat strange idea of what a GPU really is; what is seen on Haswell looks like reinventing the wheel, which increases its power and delivers what is largely unsatisfactory performance for its cost. The other element is that needing to drop the die size is expensive, and theoretically Intel will reach the physical limit of how much die space is needed for a working processor; if its iGPU is still powder-puff at that point, what then? When you are at your limits, you may have the brain fart that maybe perfecting the process was the better idea. For AMD the position is different: first, AMD have shown they can squeeze performance out, and second, and most pertinent, AMD's integrated graphics follows its Radeon core technology, that is, smaller dies, less power, more performance.

If you take Haswell's GT compared to Trinity, I bet Intel's iGPU component is much bigger than Trinity's and delivers paltry performance in comparison, certainly not Skyrim fully maxed out. Going on what it costs, it is not a very pretty picture. There are rumours of Intel buying Nvidia in the next 3 years; TBH Nvidia would want nothing to do with Intel knowing how they got stiffed the last time.
 


Corsair CX430M or a SeaSonic 350W. I think you are flirting a bit with a 160W PSU, which will be spinning up and running near its full load.

 
Eventually they will hit the clock wall, which they are close to already.
Now it will come down to tweaks for power and adjustments to the whole APU for balance, as we are seeing with second-gen Richland, which to me is somewhat of a pipe cleaner: effective, but just hinting at what's to come.
 

mayankleoboy1



That's OK. I am not sure what your original point was either :p

General usage to me is defined by the end user. A gamer's general usage is gaming; a benchmarker's is synthetic usage; for office use it is browsing, office-based applications or accounting software, etc.; for a professional it would be image and content creation, rendering, 3D modelling, CAD, Photoshop and so on.

I would say general usage is what 60% of the people do 60% of the time. The general trend there would be what I said earlier. Here both Intel and AMD are equal, so the difference comes down to power consumption.



Richland is ......... back, back, back, GONE!

Excellent, if implemented right. We need more competition in the market.


Here is the other ......... TBH Nvidia would want nothing to do with Intel knowing how they got stiffed the last time.

So I guess the basic question is whether Intel can improve its iGPU performance faster than AMD can, or whether AMD can match the x86 IPC of Intel.

My take: AMD will need SR and Excavator to come quite close to Broadwell in x86 perf. Intel won't match AMD's Excavator iGPU perf with Broadwell; at a guess, they will get to 60% of it, but they will use less than 60% of the power to get there.

x86 perf win: AMD within 10% of Intel. I would call it a tie.
x86 power/perf win: Intel

iGPU win: AMD, comprehensively
iGPU perf/power win: Intel
 
I think AMD will always trail in x86, which is Intel's bread and butter, but if they stay close like in the old times then it will be more than good enough. For the iGPU, the promise of a mainstream iGPU is no need for discrete cards, which consume power and add to the proliferation of space and heat. Ultimately that's less power used, and you can also start making boards with no need for PCIe support; you can even end up with just a micro SoC board, so it does have promise.

The real question here is: what if an FX/A__ comes out with a monumental iGPU and is cheaper than the competition, who are moderately ahead in x86 but only make up half your iGPU performance and use 25-30% more power? Power doesn't mean anything if the performance isn't there, so you can spend a lot more, have low-level graphics and save power..... that makes no sense.
 


No room for those; this is ITX, not ATX. The most you can hope for is a Flex ATX design with a 200~250W PSU. 350~400W is beyond overkill. People have become entirely too comfortable throwing in a bigger PSU than necessary, and this is one of those cases where a bigger number is not better.

A note on PSUs, because I see this mistake a lot. PSUs are designed to operate continuously at under 70~80% of full load; excessive spare PSU capacity is not a good thing. Running a PSU over 80% for long periods of time results in excessive stress, as does running it under 20~25% load. Ideally you want to land at 60~70% of the PSU's capacity when you design something; OEMs and system integrators have been using this rule for a very long time. Using a 350W unit for 40~120W worth of components is wasteful and could end up shortening the life of the PSU.
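To put numbers on that rule (the 40~120W figures are the ones quoted above; the 65% target is just the midpoint of the 60~70% range):

[code]
#include <cstdio>

int main() {
    const double max_load_w  = 120.0;  // worst-case component draw quoted above
    const double idle_load_w = 40.0;
    const double target      = 0.65;   // aim to land at ~60-70% of PSU capacity

    printf("Suggested PSU size: ~%.0f W\n", max_load_w / target);             // ~185 W
    printf("350 W PSU at idle: ~%.0f%% load\n", idle_load_w / 350.0 * 100.0); // ~11%, below the 20-25% floor
    return 0;
}
[/code]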

Now onto the topic at hand: when we're talking Richland, are we talking a DT or mobile release soon? If it's mobile followed by a four-month wait, then I might have to just build this anyway.
 

iGPU win: AMD, comprehensively
iGPU perf/power win: Intel

Wow ... just wow.

GPUs are AMD's by a very large margin, mostly due to them keeping the ATI team together; Intel lags behind them by years. Intel has the money to drop into hardware R&D, and seeing as GPUs are just giant SIMD vector processors, Intel easily has the ability to build them. What Intel doesn't have is the decade-plus of driver development and software API implementation work. Both Nvidia and ATI have mastered the art of developing performance drivers and then optimizing them for common products and games. Intel doesn't have anything that approaches that level of mastery; they're still waffling around with glorified frame buffers. I'm fairly sure everyone here can cite problems they've had with bad graphics drivers and how they ruined the experience of a game or product.
 

mayankleoboy1



The real major winner of the race to smaller nodes is the GPU, which is basically a brute-force method of computation. You don't need too high a clock frequency if you have enough cores to compensate, and adding more cores is one thing die shrinks are extremely good at.
So GPUs are going to continuously scale with nodes. x86 CPU arch, not so much.
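A toy comparison of why that works (the scaling factors below are illustrative assumptions, not any particular product):

[code]
#include <cstdio>

int main() {
    // Crude GPU throughput model: shader count x clock.
    const double cores = 1536.0, clock_ghz = 1.0;
    const double baseline = cores * clock_ghz;

    // Spend a node shrink on doubling the cores at a slightly lower clock...
    const double more_cores = (cores * 2.0) * (clock_ghz * 0.9);
    // ...versus a CPU-style bet on clock alone.
    const double more_clock = cores * (clock_ghz * 1.1);

    printf("More-cores route: %.2fx throughput\n", more_cores / baseline); // 1.80x
    printf("Clock-only route: %.2fx throughput\n", more_clock / baseline); // 1.10x
    return 0;
}
[/code]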


The real question here is: what if an FX/A__ comes out with a monumental iGPU and is cheaper than the competition, who are moderately ahead in x86 but only make up half your iGPU performance and use 25-30% more power? Power doesn't mean anything if the performance isn't there, so you can spend a lot more, have low-level graphics and save power..... that makes no sense.

Not to you or me, but to the majority of notebook buyers it makes a lot of sense. Both Broadwell and Excavator will have enough grunt for heavy HTML5 and Flash games, and will probably be enough for semi-casual games (COD?) at medium-ish settings at 1080p. At that point, the user will look at battery life.
 

mayankleoboy1



Your point being?
 

noob2222



OK ... so we should be able to get a 5+GHz CPU from 90 or 120nm, right?
The main focus of shrinks now seems to be geared solely toward mobile platforms.

1. By mobile, do you mean ARM? If yes, then think about x86 in the '90s: every generation you got 50% improvements. ARM is in the same phase.
2. If you mean smartphone x86, then the current smartphone x86 arch is very old. A smaller node will allow Intel to create a new arch in a smaller power envelope. Eventually, Intel's plan is to have one single architecture from the high-end E-series chips down to smartphone SoCs. Possibly this will happen at 14nm.
3. If by mobile you mean notebook x86, then this has been Intel's focus since the Merom days, because the dominance of mobile computing is inevitable.
I mean just what Intel has stated it is for: laptop, small footprint (all-in-one, mini-ITX), ultrabook, and tablet.
Shrinks are not reducing power at high performance.
Because Intel is adding more and more transistors for the iGPU.

Increasing power draw then going for a die shrink isn't going to work. Die shrinks are not reducing thermal density, they are INCREASING it

I am completely ignorant about this. So until there is news about Broadwell processors melting, I am going to think you are a fearmonger and a troll. In case you are proven right, you are welcome to make snide remarks at me.

First off, so you can make snide remarks at any time, but I have to wait till I'm proven wrong? I'll get to that in a minute.

Second, I am talking about CPU power draw, not CPU + iGPU.

[Image: average power consumption chart]

[Image: power consumption chart, integrated graphics]

especially overclocked

[Image: overclocked power consumption chart]

http://www.legitreviews.com/article/1924/9/

Power per mm2 increased substantially, since the overall power of the CPU barely went down.

Here is where the melting comment comes from; here is a clue to what I actually meant.

So again, good job on the snide remarks since you thought I actually meant "melting".

[Image: Ivy Bridge temperature chart]


Carry those same numbers to 14nm and you're looking at 110-130C.
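For what it's worth, the crude extrapolation behind a number like that treats die-to-heatsink thermal resistance as inversely proportional to hot-spot area (every constant here is an assumption for illustration, not a measurement):

[code]
#include <cstdio>

int main() {
    const double ambient_c = 25.0;
    const double power_w   = 77.0;             // roughly an Ivy-class CPU under load
    const double rth_22nm  = 0.6;              // assumed C/W for the 22nm hot spot
    const double rth_14nm  = rth_22nm / 0.55;  // ~45% smaller hot spot -> higher C/W

    printf("22nm estimate: ~%.0f C\n", ambient_c + power_w * rth_22nm);  // ~71 C
    printf("14nm estimate: ~%.0f C\n", ambient_c + power_w * rth_14nm);  // ~109 C
    return 0;
}
[/code]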
 


I haven't found hard dates, but if Trinity is any indication of Richland's schedule, then desktop could come 6 months after the Richland notebooks.

Trinity mobile was March/April and DT was October IIRC. Maybe this time will be the same.

Your call I guess, since it's not going to be THAT much of a difference.

Cheers!
 
The GDDR5 rumour for Kaveri seems strange to me but might explain the delay. They would need two memory controllers on the chip, with only a small fraction of these chips able to use the GDDR5, since I would assume only the top-end chips designed for all-in-one systems such as ultrabooks would be able to use GDDR5. There is also the interface with the motherboard; I don't know how efficient it would be to engineer such a thing. I don't know if using the existing DDR3 buses would work for GDDR5, and not only that, a redesign of the boards would be needed and likely a new socket. This doesn't seem very likely.
 

tonync_01



Well, the 2nd-gen Piledriver FX CPU is supposed to be out in May, so I would guess the Richland APU will be released before that.
 


1) Agree with that.

2) The issue here is that even what we have now, AMD's Llano, let alone Trinity, is far ahead of Intel where iGPU performance is concerned, and the scale is not linear; Haswell will be further off Kaveri than the HD 4000 is from Trinity. So while Haswell and Broadwell are more than enough for the casual user who travels and needs a notebook, when you move into pure performance AMD's Richland and Kaveri iGPUs will smoke everything from the competition without flinching.

We have made a list of games which can run at high settings at HD resolutions on Trinity's Devastator, and the list is very impressive. AMD are on the cusp of mainstream performance; if we go from Trinity to Richland and then to Kaveri with the projected numbers, they will have an iGPU that sits between an HD 7770 and HD 7850, uses less power, and will be nothing short of a feat of micro-engineering genius.

Going back to die shrinks benefiting the iGPU: while Intel is improving, it is still very underwhelming, so the question is why. As answered above, drivers, partners and GPU experience are all vitally important.

esrever

The GDDR5 rumour for Kaveri seems strange to me but might explain the delay. They would need two memory controllers on the chip, with only a small fraction of these chips able to use the GDDR5, since I would assume only the top-end chips designed for all-in-one systems such as ultrabooks would be able to use GDDR5. There is also the interface with the motherboard; I don't know how efficient it would be to engineer such a thing. I don't know if using the existing DDR3 buses would work for GDDR5, and not only that, a redesign of the boards would be needed and likely a new socket. This doesn't seem very likely.

I have heard hybrid memory modules will be experimented with, along with what amounts to a hybrid controller: DDR3 and GDDR5 on opposite sides of the DIMMs, with the CPU operating exclusively off the DDR3 while the iGPU is linked to the GDDR5. The controller would also manage an allocated HSA space through an ARM part.

 

mayankleoboy1



I believe that AMD can do all of this. Just not on the 28nm node. Maybe it will need the 20nm node for this. And by then, AMD could be in worse financial trouble than it is in now.

GPU and CPU design are sort of at opposite ends of logic design. Even AMD did not have much experience, so it bought ATI. And look at AMD now!
Intel could do with a few GPU-company acquisitions.... but who is big enough now? Only Nvidia, and they sure as hell aren't interested in being acquired. In the hypothetical case this happens, I think AMD will be in big trouble.

Going back to die shrinks benefiting the iGPU: while Intel is improving, it is still very underwhelming, so the question is why. As answered above, drivers, partners and GPU experience are all vitally important.

Maybe the question is how: how is Intel going to make Broadwell a real competitor to whatever AMD is making? According to Anand's Haswell article, Broadwell is the generation where we will see really revolutionary changes in the iGPU.

The general opinion on the big-ass Haswell iGPU is that it will only be on the mobile procs, and will be quite expensive. Probably so expensive that some OEMs would rather not add that Intel SKU, and instead use a cheaper SKU and add a mobile dGPU, which will suck power but will still be cheaper.
On desktops, who cares? As long as QuickSync 3.0 is twice as fast as QS 2.0, I am happy.
 


1) Ditto. Excavator will need to be on a 20/22nm die for the node to facilitate a high-end GPU component. I did state earlier that AMD are experimenting with a number of methods of improving the iGPU and CPU without a) ballooning the die or b) exceeding the current socket space occupied. One recent leak had an SoC/CPU with 1GB of GDDR5 fused onto it, and the surface area was actually less than the current socket space, although I don't fully understand this; it will only be mutually beneficial if you were to incorporate additional DIMMs for tri- or quad-channel memory support.

2) Yes, in the strange world where Nvidia is bought out it would likely be the end for everyone, though I also say this will never happen because a) Nvidia is profitable and b) they have mobility chips and other small-device parts which will float them beyond just GPUs. Intel needs Nvidia more than Nvidia needs Intel, and like any PLC there is no way they will want to see the Nvidia name disappear, which is what would happen in any merger with Intel.

3) I have seen a lot of articles on Haswell and Broadwell; I think they have fed the myopia people have about Intel, using "beast", "monster", "giant" and "colossus" as superlatives, but I think much of it is sensationalism. The issue is that Intel's iGPU is becoming huge while the CPU is becoming tiny, whereas for AMD the converse holds: the Radeon core is becoming smaller, less power-hungry and capable of insane computational and rendering performance. GPUs are getting about 35-50% faster per generation than the previous generation's parts. All that said, I do like some of Intel's ideas; the cache pool for the iGPU, which is big enough and very fast, will be quite interesting to watch in practice.

As for AMD, not that long ago they referred to their iGPU as an infant; they have, however, shown a lot of endeavour and resolve in the last 18 months in picking the company out of a right mess after the previous management left it teetering. Hiring and rehiring a number of high-level engineers and innovators was also a good move, done under the guise of restructuring when they removed a lot of low-level engineers.

AMD at the back end of last year traded at around 1.9 points, went as high as 3 points this year, but has levelled out around the 2.7-point mark amid the global slowdown. This is a much better position, and it is projected to head north through the year, which would be very nice.

 

mayankleoboy1

^
As for AMD, not that long ago they referred to their iGPU as an infant; they have, however, shown a lot of endeavour and resolve in the last 18 months in picking the company out of a right mess after the previous management left it teetering.

I don't think AMD is in the money yet. It still needs to sell what it has produced.
 