AMD To Develop Semi-Custom Graphics Chip For Intel


UM... no.

Patience is rare among modern investors. They tend to do things like change around senior management, demand restructuring (layoffs), sell off bits of the company (or the whole thing), etc. Nearly all of that is very disruptive to engineering and product development.

Also, AMD has a huge amount of debt. Their profits need to be seen in this light.
 

I'm not really sure where you're going with this, but it's not clear whether these GPUs will use shared memory (IMO, probably) or not. XBox One X has an RX 580-class GPU and it does share memory with the CPU.

BTW, the original XBox One (and S) used a separate pool of graphics memory (the ESRAM), even though it was an APU.

So the trend is towards unification in low/mid-range systems. It makes economic sense not to have two different memory pools, if you can avoid it.
 


Something tells me that these new CPUs are going to end up in very high-end, ultra-thin, high-resolution business laptops and very low-end gaming laptops: the tier right below the laptops with a replaceable video card. The only place I have ever seen High Bandwidth Memory (HBM2) used is on graphics cards. The nature of modern games requires video cards to have access to 2 to 4 GB of high-bandwidth memory, if not more. If these new CPUs are going to handle 4K video and modern gaming, they are also going to need memory that is separate and distinct from the memory the CPU has access to.

I think AMD is in a great position, making video cards for Intel (Apple desktops?), XBox and Playstation. This bodes well for the wide adoption of FreeSync in TVs and monitors. AMD's HBM2 will be used in Intel, AMD and Nvidia graphics systems.
 


I don't think so. Vega is AMD's high-end solution, and AMD designed Vega so the chip can clock much higher than previous-generation GCN. AMD spent an extra 3.9 billion transistors on Vega to push the clock higher:

Talking to AMD’s engineers, what especially surprised me is where the bulk of those transistors went; the single largest consumer of the additional 3.9B transistors was spent on designing the chip to clock much higher than Fiji. Vega 10 can reach 1.7GHz, whereas Fiji couldn’t do much more than 1.05GHz. Additional transistors are needed to add pipeline stages at various points or build in latency hiding mechanisms, as electrons can only move so far on a single (ever shortening) clock cycle; this is something we’ve seen in NVIDIA’s Pascal, not to mention countless CPU designs. Still, what it means is that those 3.9B transistors are serving a very important performance purpose: allowing AMD to clock the card high enough to see significant performance gains over Fiji.

source
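
Just to put that clock bump in perspective, here's a quick back-of-the-envelope calculation (illustrative only; the clock figures come from the quote above, everything else is plain arithmetic):

```python
# Rough timing math behind the Fiji -> Vega clock bump (illustrative only).
# Clock figures are from the AnandTech quote above; real timing closure is far more involved.

fiji_clock_hz = 1.05e9   # Fiji peak clock (~1.05 GHz)
vega_clock_hz = 1.70e9   # Vega 10 peak clock (~1.7 GHz)

fiji_period_ns = 1e9 / fiji_clock_hz   # time available per cycle, in nanoseconds
vega_period_ns = 1e9 / vega_clock_hz

print(f"Fiji cycle time:  {fiji_period_ns:.3f} ns")
print(f"Vega cycle time:  {vega_period_ns:.3f} ns")
print(f"Clock gain:       {vega_clock_hz / fiji_clock_hz:.0%} of Fiji's frequency")
print(f"Time per cycle:   {vega_period_ns / fiji_period_ns:.0%} of Fiji's budget")
# Signals get roughly 38% less time to propagate each cycle, which is why the design
# needs extra pipeline stages and latency-hiding hardware (i.e. more transistors).
```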
 


For the most part that's only valid for consoles. Game development on PC still needs direct involvement from the IHV to help game developers optimize and fix issues in the game (be it a driver or game issue). On PC, AMD having more hardware presence does not mean more games will be optimized for their hardware.
 


And the funny thing is, Nintendo used ATI/AMD GPUs for three generations straight before they went with Nvidia for the Switch.
 


Raja was supposed to make that happen, and that buyer was supposed to be Intel. Now the deal has been reduced from Intel buying RTG to AMD letting RTG build a custom GPU for Intel's needs. Anyway, I'm interested in its performance and power consumption. This might push Nvidia to release Volta much earlier to keep the efficiency crown for themselves.
 
AMD may be doing well right now, but they need to use the resources they have to improve the products they are selling in the markets they already have. They can't afford to go after the high-end APU market just yet, and even if they could, their brand image is too poor right now to sell those. Although, if they did sell high-end laptop APUs, it could improve that image.

With this Intel deal they get the best of everything. They don't have to invest loads of resources into the product; it's a simple customization of what they already have. But they do get to compete in a market where Nvidia won't even get the chance to go. This is good for the brand, good for generating revenue, and good for increasing the presence of AMD graphics, which will bring small benefits from game developers and adaptive-sync monitor makers. They might not want to do this forever, but in the short term it's a big win for AMD.

I'm not sure how this helps Intel. Maybe it'll help keep Apple processors out of Apple laptops? I'm sure Intel benefits in some way but I haven't figured it out.
 


It's the exact same thing as in consoles. Intel powers just about everything in the laptop world, and if Radeon graphics are included with every new Intel processor, then the incentive to optimize for that hardware becomes much more important. You're missing the big picture; it is a big deal.
 


No, I'm not missing anything. Developers, for their part, want their games to work well on all kinds of hardware when it comes to PC. Even when AMD did not have the majority of the market share, developers still wanted to optimize their games for AMD hardware. Heck, even though most people assume the Intel iGPU is useless for gaming, game developers still care about it! So the incentive to optimize is always there, regardless of market share. When games are developed for PC, the developer is usually not the issue (in terms of optimizing the game for various hardware, unless the game is sponsored by a certain IHV). In fact, the issue is more with the IHVs themselves, because unlike on consoles, games on PC still rely a lot on drivers. Game developers will try to fix issues with their game, but there is nothing they can do if the issue comes from the driver itself (a bug). So if a particular IHV does not really care about that, then more hardware share really means nothing.
 


I was mainly referencing Nintendo sticking with ARM chips. If they had gone the route XBone and PS went, that would have been huge for AMD and Nintendo.

Nvidia tried to play it off like they didn't care about the console chip market, but I think AMD just won 2 of 3 on the merits and price of their offerings.

I think a major driving force for console makers to move to x86 was developers wanting to be able to port their games to PC more easily and cleanly. Nintendo has stuck with ARM, and I think they are going to continue being limited on 3rd-party games, like they were with the Wii.

If AMD put a custom Ryzen and Vega chip in the upcoming XBoneX and next PS, that would be killer.

Essentially, with the consoles and this Intel partnership, their market penetration should explode.
 


After the failure of the Wii U, I'm not sure Nintendo wants to stick with the idea of making a pure home console. As I said, Nintendo used ATI/AMD GPUs inside their main consoles for three generations straight. They went with Nvidia's Tegra solution not because they are sticking with ARM, but simply because AMD does not have the tech to realize the goals they had for the Switch. If AMD had had a GPU architecture as efficient as Nvidia's Maxwell in 2015, it is very likely we would have seen Nintendo use an AMD semi-custom solution. On the CPU front it is not a problem, because AMD also has a license to build ARM-based processors. But in the end, the type of CPU might not matter much to developers. Yes, developers want easier porting between PC and console by going x86, but the majority of game developers have been developing games on various types of CPUs (IBM PowerPC, IBM Cell, x86, ARM, MIPS). ARM is much more common than it was before. Of the many developers making games for the Switch, so far none of them have said porting is difficult because of ARM. If there is any issue with the Switch's CPU, it is about its raw performance, not x86 vs ARM.
 


I think the Zen APUs are going to compete in the thin-and-light tablet/laptop market, being only 15 watts, and possibly even in sub-$500 laptops, depending on the price of the chips.

This Intel CPU / AMD GPU hybrid will most likely compete in laptops that currently use Nvidia's MX150 graphics chip on the motherboard, in the $500-$900 range. The Nvidia chip uses 40 watts on top of the 15-watt Intel chip, I believe, so it can't go into a thin-and-light notebook, and the combo is simply too expensive for the sub-$500 market. I'm pretty sure the Vega chip with the HBM memory will dominate the MX150.
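
Rough power math, using the figures above (the 40 W MX150 number is just my recollection, and the single-package budget is an assumption, not a confirmed spec):

```python
# Back-of-the-envelope power budget comparison. All figures here are the claims
# and assumptions from this thread, not confirmed specs.

intel_u_series_tdp_w = 15   # typical 15 W U-series laptop CPU
mx150_tdp_w = 40            # MX150 figure claimed above (actual configs may be lower)

two_chip_budget_w = intel_u_series_tdp_w + mx150_tdp_w

# Assumed budget for a single CPU+GPU+HBM2 package; a 35-45 W range is suggested
# elsewhere in this thread, so take the top of that as a placeholder.
single_package_budget_w = 45

print(f"CPU + discrete MX150: ~{two_chip_budget_w} W across two separate chips")
print(f"Single EMIB package:  ~{single_package_budget_w} W (assumed), shared between CPU and GPU")
```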

So essentially AMD is shooting Nvidia in the foot and stepping on their sub-$900 laptop market at the very least. If the performance of the Vega GPU is even better than I think and competes with the GTX 1060, then it'll step on their toes in the sub-$1500 laptop market too. But at the very least, AMD will take a cut of the thin-and-light laptops, the sub-$500 laptops, and the sub-$900 laptops that would have used an MX150. And because AMD is selling Intel the GPU chips, not sharing the actual internal architecture of the GPU, AMD can only win in this situation for the time being, at least until Nvidia can make a deal with Intel to sell them chips that can interconnect like AMD's. But it'll be a few years before that happens.
 


It's not so much an x86 vs ARM thing as it is an ease-of-porting and money-saving thing. Like any other business, developers want to sell to as many people as possible at minimal cost. Porting games from x86 to ARM is just an expense that, when developing for XBone and PS, they don't have to pay. The Switch is an interesting idea, but both the design and the added layer of effort to port games over are strikes against mass developer support.
 


The next Nintendo should use the 15W Raven Ridge APU; problem solved.

 
It's not stated here, but other sites have quoted them saying this would not be in EVERY Intel laptop, but would be in the $1200-1400 range of products. The cheap, budget $400 laptops would still use just the Intel IGP like always.

From the way I am reading this, Intel basically has a new way to link up add-on chips/memory and has asked AMD to provide the add-on GPU for their higher-end stuff. This is not an Intel "APU" with Radeon graphics. They will still have their IGP on the CPU as always, and will switch between the two depending on the laptop's current needs.

I think this is a great deal for AMD, getting them into higher-end products they may not otherwise get into. The lower end will still be covered by the new APUs AMD has coming out, so they are really not competing with themselves at all with this deal. I've not seen anything to suggest the Zen APUs will be in $1200-1400 systems.
 

Depending on how enthusiastic Intel is about EMIB, I wonder if Intel might actually get rid of on-die graphics? I mean, the iGPU takes up something like 1/3 of the die space on their consumer quad-core dies. If they're going to start shipping some of them with alternate on-package GPUs anyway (wasting that die space), would it be plausible for them to eventually do away with the iGPU and just package the CPU with an on-package, discrete Radeon/Intel (or maybe even Nvidia) GPU as needed?

No idea if this would make sense; it's possible that the extra packaging cost/difficulty would outweigh the die-area savings, or that there are other factors I'm not aware of.
 
Not gonna happen, because Intel needs its GPU in the $200-1000 price categories.
This is a specialized part for ultra-thin, expensive laptops and hybrids: Surface Pro, MacBook Pro and Air, and devices like that.
 

Yes, but I was saying they could still pair their CPUs with their own GPUs, just as two separate chips in an EMIB package (presumably without on-package HBM), for low-cost machines. And then for higher-end machines, they could package them with higher-end graphics (i.e. Radeon) and HBM. I was just thinking that the on-die iGPU would be a real waste in cases where they're already planning on packaging the die with another GPU.
 

Probably not. Those are limited to 15W max, if they have a fan. The fanless tablet-convertibles, much less.

These things will probably be targeted in the 35W - 45W range, meaning thicker, heavier laptops with substantial cooling and bigger batteries.


Again, tell that to Microsoft. XBox One X has unified memory, and is sharing probably less total bandwidth than this HBM2 solution will have available, yet its GPU is probably a fair bit more powerful.

CPUs just don't need that much bandwidth, so why not have them share?
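
For a rough sense of the numbers (just a sketch; the stack count and per-pin rate for this part are assumptions on my end, not confirmed specs):

```python
# Quick bandwidth comparison (illustrative only): Xbox One X's unified GDDR5 vs. HBM2.
# The Xbox One X figures are public specs; the HBM2 stack count and per-pin rate for
# this Intel/AMD part aren't confirmed, so both are assumptions here.

def peak_bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    """Peak bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

xbox_one_x = peak_bandwidth_gb_s(384, 6.8)        # 384-bit GDDR5 @ 6.8 Gbps, shared by CPU and GPU
hbm2_one_stack = peak_bandwidth_gb_s(1024, 2.0)   # one 1024-bit HBM2 stack at the 2.0 Gbps spec ceiling
hbm2_two_stacks = 2 * hbm2_one_stack              # if the part used two stacks

print(f"Xbox One X unified GDDR5: {xbox_one_x:.0f} GB/s")
print(f"HBM2, one stack:          {hbm2_one_stack:.0f} GB/s")
print(f"HBM2, two stacks:         {hbm2_two_stacks:.0f} GB/s")
```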
 

Because Switch is basically a tablet, and AMD doesn't have a good APU for that. Certainly not one they've invested in like Nvidia has with Tegra.
 

I'm skeptical these will really fill such a big market segment.

And we still don't know Intel's long-term intentions, here - this could be just a one-off product, and not a new product line, for them.
 


It might require a bit more effort, but I don't think it raises costs significantly. If it did, we would not be seeing so many indie games ported to the Switch, because resources are always the issue for indie developers. In fact, porting to the Switch might not be as hard as you think; the only limiting factor will be its raw performance. Take Snake Pass, for example: the Switch version only started development in December 2016, yet it was still released on the same day as the other platforms in March 2017.
 


If for nothing more than spreading out the chips for better heat dissipation, I could see dumping the iGPU, but I'm not sure the overall heat/power usage would even drop for a simple iGPU if it were separated. Currently the chip is created with the iGPU; removing it would then require a separate process to create standalone GPU chips to install separately, and I'm not sure that would be cost effective. AMD already has the production facilities, and all Intel has to do is attach their chips to the board, which is much more cost effective. The same would go if they went with an Nvidia chip: let them make it, and then you just have to install it.
 