AMD CPU speculation... and expert conjecture


juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


To avoid further misunderstanding, let me emphasize that I wrote "discrete graphics cards".

The question is not "if". The question is "when", because the laws of physics and of economics are very clear about discrete graphics cards being killed.

The laws of physics say that an exascale APU is about 10x more power-efficient than a discrete GPU delivering comparable FLOPS. This means you would need a roughly 10x more power-hungry dGPU to compete with the APU, i.e. a ~3000W dGPU to offer the same performance as the 300W APU. This is the reason why Intel, Nvidia, and AMD will use APUs for exascale supercomputers rather than discrete cards.
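
To make the arithmetic explicit, here is a minimal sketch of that power argument, using my own illustrative numbers and assuming the claimed ~10x efficiency gap holds:

```python
# Back-of-the-envelope version of the power argument above.
# Assumption (mine, for illustration): the APU node and the dGPU node deliver
# the same FLOPS, but the APU does it at ~10x better efficiency (FLOPS/W).

APU_POWER_W = 300          # assumed power budget of the exascale APU node
EFFICIENCY_ADVANTAGE = 10  # claimed APU advantage in FLOPS per watt

dgpu_power_w = APU_POWER_W * EFFICIENCY_ADVANTAGE
print(f"dGPU power for equal performance: ~{dgpu_power_w} W")  # ~3000 W

# At machine scale the efficiency gap becomes the whole power bill:
for gflops_per_watt in (50, 5):
    megawatts = 1e18 / (gflops_per_watt * 1e9) / 1e6
    print(f"{gflops_per_watt} GFLOPS/W -> {megawatts:.0f} MW for 1 exaFLOPS")
```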

I already gave you a link where AMD's chief engineer explains that they plan to use 10 TFLOPS APUs to build exascale supercomputers. What part of "don't use discrete cards" do you still not understand? Or do you really believe that all the engineers at Intel, Nvidia, and AMD are idiots and you know better than them how to build an exascale supercomputer?

Economics also explains why discrete cards will be killed:

So, where does this leave discrete graphics cards? Well, the low end market is certainly seeing reduced sales, as there really isn't enough of a performance difference nowadays to always warrant an upgrade from an IGP. As integrated graphics improve further, one can see how this will hurt sales of higher end graphics cards too. The problem is that the bulk of the profit comes not from the top-end powerhouse graphics cards, but from the low to mid-end cards which allow these companies to remain in business, so cannibalizing sales of these products to integrated graphics could make high-end graphics cards a much more niche product and crucially, much more expensive to boot.

http://www.techpowerup.com/154374/are-improving-integrated-graphics-slowly-killing-off-discrete-graphics-cards.html

This is the same economic reason why Intel killed the big RISC guys in HPC.

All this stuff about graphics cards has been known/predicted/suspected since long before AMD bought ATI:

What if GPUs and CPUs Become One

If GPUs do eventually become one with CPUs as some are predicting, then the ATI acquisition would be a great source of IP for AMD. For Intel, getting access to IP from companies like ATI isn’t too difficult, because Intel has a fairly extensive IP portfolio that other companies need access to in order to survive (e.g. Intel Bus license). The two companies would simply strike out a cross licensing agreement, and suddenly Intel gets what it wants while the partner gets to help Intel sell more CPUs.

AMD doesn’t quite have the strength of Intel in that department, but by acquiring ATI it would be fairly well prepared for merging CPUs and GPUs.

http://www.anandtech.com/show/2055/2

Preparing for the Inevitable Confrontation with Intel

From ATI's standpoint, it's only a matter of time before the GPU becomes general purpose enough that it could be designed and manufactured by a CPU maker. Taking the concern one step further, ATI's worried that in the coming years Intel will introduce its standalone GPU and really turn up the heat on the remaining independent GPU makers. By partnering with AMD, ATI believes that it would be better prepared for what it believes is the inevitable confrontation with Intel. From ATI's perspective, Intel is too strong in CPU design, manufacturing and marketing to compete against when the inevitable move into the GPU space occurs.

http://www.anandtech.com/show/2055/3

Our Thoughts: The GPU Side

The AMD/ATI acquisition doesn’t make a whole lot of sense on the discrete graphics side if you view the evolution of PC graphics as something that will continue to keep the CPU and the GPU separate. If you look at things from another angle, one that isn’t too far fetched we might add, the acquisition is extremely important.

Some game developers have been predicting for quite some time that CPUs and GPUs were on this crash course and would eventually be merged into a single device. The idea is that GPUs strive, with each generation, to become more general purpose and more programmable; in essence, with each GPU generation ATI and NVIDIA take one more step to being CPU manufacturers. Obviously the GPU is still geared towards running 3D games rather than Microsoft Word, but the idea is that at some point, the GPU will become general purpose enough that it may start encroaching into the territory of the CPU makers or better yet, it may become general purpose enough that AMD and Intel want to make their own.

It’s tough to say if and when this convergence between the CPU and GPU would happen, but if it did and you were in ATI’s position, you’d probably want to be allied with a CPU maker in order to have some hope of staying alive. The 3D revolution killed off basically all giants in the graphics industry and spawned new ones, two of which we’re talking about today. What ATI is hoping to gain from this acquisition is protection from being killed off if the CPU and GPU do go through a merger of sorts.

ATI and NVIDIA both seem to believe that within the next 2 - 3 years, Intel will release its own GPU and in a greater sense than their current mediocre integrated graphics. Since Intel technically has the largest share of the graphics market thanks to their integrated graphics, it wouldn’t be too difficult for them to take a large chunk of the rest of the market -- assuming Intel can produce a good GPU. Furthermore, if GPUs do become general purpose enough that Intel will actually be able to leverage much of its expertise in designing general purpose processors, then the possibility of Intel producing a good GPU isn’t too far fetched.

If you talk to Intel, it's business as usual. GPU design isn’t really a top priority and on the surface everything appears to be the same. However, a lot can happen in two years -- two years ago NetBurst was still the design of the future from Intel. Only time will tell if the doomsday scenario that the GPU makers are talking about will come true.

http://www.anandtech.com/show/2055/8

Chief engineers from AMD and Nvidia know that discrete cards will be killed. Why else do you believe that AMD is slowly transforming itself into an all-APU company? Why else do you believe that Nvidia now designs APUs as well?

Don't say "no one", because this is wrong. Hundreds of scientists like myself would be very happy with a PC based around a 20 TFLOPS APU with compute performance superior to 12 GTX Titan Blacks working in parallel.

I am also sure that most gamers would be very happy with graphics performance superior to seven discrete R9 290X cards in CrossFire.



But discrete cards will not be killed by the Intel 'APUs' that you can purchase today. You are missing the trend.

Check this slide given by AMD during the Kaveri presentation:

[Slide: AMD Richland vs. Haswell GPU comparison (AMD-Richland-vs-Haswell-GPU.jpg)]


Each new Intel generation has a bigger GPU than the previous one. The Haswell CPU was a minor update over Ivy Bridge (except for AVX2), but Haswell's Iris Pro introduced a huge jump in GPU performance over the Ivy Bridge GPU.

The Broadwell CPU will again be a minor update over Haswell. The main emphasis will again be on the GPU side: the Broadwell GPU will introduce a huge gain in performance.

According to AMD, the Kaveri A10 has 40% better graphics than the Haswell i5-K. And I already showed before how the Haswell i5-R (with HD 5200 graphics) is very close to the Kaveri A10.

According to Intel, Broadwell Iris Pro will be about 40% faster than Haswell Iris Pro. This means the top Intel GPU will be faster than the top Kaveri GPU. But the interesting news is that Broadwell-K will include Iris Pro. Add the ~40% from going from the Haswell i5-K's HD graphics to Haswell Iris Pro, then another ~40% from updating to Broadwell, and you get that a Broadwell i5-K chip will have roughly 80% better graphics than a Haswell i5-K chip or, what is the same, about 40% better graphics than the top Kaveri A10.

http://www.techspot.com/news/54763-report-details-intel-broadwell-k-cpus-iris-pro-graphics-included.html
http://news.softpedia.com/news/2014-Bound-Intel-Broadwell-EK-CPUs-Get-80-Graphics-Boost-from-Iris-Pro-401991.shtml
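
As a sanity check on the "~80%" figure, note that two successive ~40% steps compound multiplicatively; the sketch below is my own arithmetic, not a figure from Intel or AMD:

```python
# Two successive ~40% GPU gains, compounded (illustrative, not vendor data).
haswell_step   = 1.40   # assumed HD graphics -> Haswell Iris Pro (i5-K class)
broadwell_step = 1.40   # assumed Haswell Iris Pro -> Broadwell Iris Pro

compounded = haswell_step * broadwell_step
print(f"Compounded gain: {compounded:.2f}x (~{(compounded - 1) * 100:.0f}%)")  # 1.96x, ~96%

# The ~80% in the linked reports looks like the simpler additive estimate
# (40% + 40%); either way, a Broadwell i5-K GPU would land well ahead of a
# Haswell i5-K GPU if both 40% steps actually materialize.
```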

After Broadwell comes Skylake, which is rumored to be another big update on the GPU side, with a new graphics technology that resurrects the old Larrabee plans.

Check the above slide again; I believe that by 2018 the GPU will account for about 80% of the total APU die area.

Of course this trend towards more GPU is also observed in AMD and Nvidia designs.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Who is who?

It was said to you that Jaguar is being replaced by ARM in servers. AMD has said that they are replacing Jaguar with ARM because the latter is faster and more efficient. Anand has an article about AMD Seattle with benchmarks given by AMD; the article was cited before. AMD admits that the ARM core is >40% faster (IPC) and consumes less power.

The rumor of AMD releasing an ARM tablet is reported here (the anonymous source is from AMD)

http://liliputing.com/2013/09/amds-first-arm-based-chip-tablets-coming-2014.html

This article is also interesting

http://www.brightsideofnews.com/news/2013/12/9/amd-working-on-arm-socs-for-consumer-products.aspx

The author suggests that the new ARM APU SoC will replace the Mullins APU SoC that you mention.

Now we already know who says what...
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
Don't even try to address me by attempting to tell the readers here what I said. You can't remember what I said, so stop lying.

Who was it that quoted this article multiple times saying that x86 tablet cores were not going to happen and that I was an idiot for disagreeing with you?

http://techreport.com/news/25461/report-amd-to-introduce-arm-based-tablet-chip-this-year

I never mentioned your name but you insist on making this personal. Stop trying to attack me personally.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


Juan, don't forget Beema and Mullins are supposed to be a greater than 2x increase in performance per watt.

This is a typical ARM/Nvidia marketing tactic: compare the new-generation ARM product to the older competition and then talk about how you have these huge performance gains over it. Then the competition releases its new stuff, it's better, and no one uses what that company was boasting about (see Tegra).

http://phx.corporate-ir.net/External.File?item=UGFyZW50SUQ9MjAxMjA5fENoaWxkSUQ9LTF8VHlwZT0z&t=1


Curiously enough, Puma isn't showing up in servers, but by the looks of it, Puma would be extremely competitive with A57 cores.
 
When Intel entered the HPC market, the existing players made similar claims. "Intel will never catch us," they said... Today 90% of HPC is based on x86 designs from Intel, and the big players of that era (e.g. MIPS, Alpha...) have all been killed. Only IBM remains in the business with some success.

MIPS only outsells every other ISA combined by a healthy margin. They have a stranglehold on embedded low power devices these days. ARM is trying to get into that market as we speak.
 
When Intel entered the HPC market, the existing players made similar claims. "Intel will never catch us," they said... Today 90% of HPC is based on x86 designs from Intel, and the big players of that era (e.g. MIPS, Alpha...) have all been killed. Only IBM remains in the business with some success.

Whoah missed this...

Dude, Intel is definitely not "HPC". They are mad popular in commodity computing: vast hordes of cheap, disposable blade / micro servers deployed to data centers. They are popular exactly because they are so cheap; when one breaks you can easily replace the unit and in-depth service work isn't needed. HPC is ruled by IBM POWER and, to a lesser extent, Oracle SPARC, though ARM is making progress in very large distributed-fabric systems (think 1000 tiny nodes).

Enterprise =/= HPC. You were probably thinking that Intel is so large in enterprise data centers, and you'd be correct, but only because they are stupidly cheap and disposable compared to the competition. We use large micro-server-based ESXi clusters to run most of our enterprise servers, with some expensive Oracle SPARCs that run mission-critical applications on OWLS / RDBMS along with some very special LOB applications. Two different products for two different problems; you don't run AD / Exchange / SharePoint on Oracle SPARC.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I am not going to bite... and will only discuss tech.

AMD is replacing Jaguar with ARM in both its server and embedded roadmaps because ARM is faster and more efficient.

The techreport link mentions an old rumor about AMD preparing an ARM SoC for tablets. The link explains why an ARM SoC would be much more competitive than one based on Jaguar. The reasons it gives are the same reasons why AMD is replacing Jaguar in servers: unsurprisingly, an ARM tablet would be faster and offer better battery life than a Jaguar tablet.

AMD has released Beema and Mullins now because it doesn't have the ARM SoC ready (I suspect the difficulties are in integrating HSA/GCN). But I have provided you two links

http://liliputing.com/2013/09/amds-first-arm-based-chip-tablets-coming-2014.html

http://www.brightsideofnews.com/news/2013/12/9/amd-working-on-arm-socs-for-consumer-products.aspx

that confirm that AMD is preparing the ARM SoC for tablets.

Thus instead of being

jaguar tablet --> ARM tablet

it is finally something like

jaguar tablet --> puma tablet --> ARM tablet

due to delays. This is not very different from Kaveri which was supposed to be

Trinity --> Kaveri

but was finally

Trinity --> Richland --> Kaveri

due to delays...
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


According to AMD's data, Beema and Mullins will offer about a 2-2.4x increase in performance per watt at the APU level, i.e. AMD is also counting improvements made on the GPU side. If you count only the CPU, then the efficiency of the Puma cores is much less than 2x that of the Jaguar cores.

According to the same data, Seattle offers a ~2x increase in performance per watt, and recall there is no GPU on Seattle. Moreover, AMD has only given data for the first version of Seattle (8-core); the second version (16-core) will increase performance per watt compared to the first. AMD is not using Puma in servers because it is not a competitive core against the A57, period.

I already gave a link suggesting that AMD is preparing ARM to replace Beema/Mullins in tablets.

Last time I checked, AMD was neither Nvidia nor ARM, so their tactics are irrelevant. AMD is following the standard procedure of comparing the new product (Opteron A-series) to the product it replaces (Opteron X-series).
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I was discussing the HPC market. Below I provide a TOP500 graph showing how MIPS was killed there.

Regarding ARM:

It is the most widely used 32-bit instruction set architecture in terms of quantity produced.

http://en.wikipedia.org/wiki/ARM_architecture

One usually associates ARM with phones or tablets (ARM powers 90% of smartphones), but you can find ARM in lots of other products: 95% of ordinary phones, 90% of portable media players, 80% of digital cameras, 45% of digital TVs & set-top boxes, 70% of printers, 90% of HDDs/SSDs...



No. I was mentioning official HPC numbers.

In this graph you can see the evolution of the HPC market (TOP500 list)

[Chart: TOP500 systems by processor family over time (Top500-processors_620x384.png)]


On the left you can see the old big guys: MIPS, Alpha...

On the right you can see how all of them were killed by Intel. Only IBM remains, as I said above. If you want to know the exact figures:

Intel continues to provide the processors for the largest share (82.4 percent) of TOP500 systems.

http://www.top500.org/lists/2013/11/highlights/
 
No. I was mentioning official HPC numbers.

In this graph you can see the evolution of the HPC market (TOP500 list)

That's not how HPC numbers are calculated.

A total of 412 systems (82 percent) are now using Intel processors, slightly up from 80 percent six months ago.

By that definition our system, which is definitely not Intel, would be counted in that group since some of our control systems exist on an x86 platform.

That list is the top 500 installed HPC systems; there are many more than 500 of those in existence. The bulk of sales are actually in the sub-$1M USD range: smaller HPC platforms sold to organizations that need to do modeling or some other routine but I/O-intensive work. IBM absolutely crushes everyone else right now, followed by Oracle, though Intel is starting to make headway. You can't run AIX on x86, and DB2 is almost always paired with z/OS or another vendor-specific OS. Similar situation with Oracle RDBMS and Solaris, though Oracle went for the value market by selling them on x86 as a cheaper, less-reliable solution than SPARC.

Anyhow, I see where you're confusing things. You're thinking HPC is only supercomputers, and to someone reading the news it would seem that way. Supercomputers are the Formula 1 of the HPC world: big, fast, flashy and way outside the budgets of most businesses. Most HPC systems are just mid-class setups in the $100~500K range, typically a handful of boxes configured with specialty software. That's above enterprise but not a supercomputer. Intel's big contribution to the HPC world is the Phi coprocessor cards, which let you fit more power into a smaller enclosure. That got around x86's single biggest hurdle, which was the expensive glue circuitry needed for larger-scale deployments. Intel has actually been making headway into low-end HPC with really cheap deployments. Lots of places are doing cost analyses to determine if it's cheaper to have their LOB software ported to x86 rather than keeping it on POWER / SPARC. So things could very well be different in five to ten years.

And if you want to know the #1 users of HPC, it's financial analysts running big models. It's too much work for enterprise systems but not nearly enough for supercomputers.
 
AMD is replacing jaguar by ARM both in servers and embedded roadmaps because ARM is faster and has better efficiency.

I'm going to say it now: ARM doesn't even approach x86 in terms of performance. Power, sure, but not performance. ARM looks good because Intel simply can't scale its chips down to meet low-power demands (Atom).

Compare a top-of-the-line ARM chip against a mid-tier i5 or top-end i3: Intel wins with ease. So unless ARM scales up into the 3+ GHz range AND gains some IPC to boot, it's not beating x86 in performance.

I think AMD is speaking more from the perspective of its own busted core design rather than x86 performance in general.
 
We are pretty lax with discussion rules but it's getting out of hand again. If people wish to discuss ARM or other processor designs, then make another thread in the CPU forum. This thread is about speculation on future AMD designs, specifically Kaveri, but open to general speculation about hUMA / HSA / Mantle due to how tied in with AMD products those are. I'm guilty of letting myself get carried away down rabbit holes, so that will be stopping right now.
 
AMD Releases Beta OpenCL 1.2 Driver For Developers, Announces HSA Runtime For Linux
http://www.tomshardware.com/news/amd-opencl-driver-hsa,26194.html
if you pay a bit of attention - the headline is a two-parter. the driver is for windows, the announcement is for linux.

i must have missed it.. but weren't all these supposed to be driverless? or at least windows will have a generic driver or something (like when you uninstall catalyst a generic driver takes over)? should we be waiting for 2-3 gens of windows till hsa gets its support built in?
 


Of course you need a driver. The driver is essentially the communication layer between the OS and the hardware in question. You might have an overly generic one packaged with the OS, but it's there.

Taking the case of GPUs: as of Vista, MSFT demanded all GPUs support 640x480, 256-color mode via the generic GDI driver. Even then, you need some form of driver to communicate between the GPU and the OS.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810



I think computing will become more modular, as the NUC form factor can take over anything without an embedded screen. Expansion will just go external, like optical ePCIe or Light Peak.

A single NUC will get you a certain level of compute. Stack 2 NUCs together to scale performance.

Razer Project Christine style. http://reviews.cnet.com/desktops/razer-project-christine/4505-3118_7-35834097.html
 

whyso

Distinguished
Jan 15, 2012
688
0
19,060


If you closely check those slides, it's a performance increase at that TDP range. Watts are nowhere mentioned.

And performance/TDP is pretty useless.
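
To illustrate the point with made-up numbers: performance/TDP and measured performance per watt can tell very different stories, because TDP is a design ceiling rather than actual draw. A minimal sketch (all figures hypothetical, not from AMD's slides):

```python
# Hypothetical chips: same TDP class, different real power draw.
chips = {
    # name: (relative performance, TDP in watts, assumed measured draw in watts)
    "old APU": (1.0, 15, 14),
    "new APU": (1.8, 15, 10),
}

for name, (perf, tdp_w, measured_w) in chips.items():
    print(f"{name}: perf/TDP = {perf / tdp_w:.3f}, perf/W (measured) = {perf / measured_w:.3f}")

# perf/TDP says the new chip is 1.8x better; measured perf/W says ~2.5x.
# Same slide, very different "efficiency" claim depending on the denominator.
```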
 

jdwii

Splendid

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


They need at least a small buffer (32 MB) like the Xbox has. Otherwise they're going to fall behind Intel's APUs, which would really suck. The graphics edge they have is really the only thing keeping them afloat. If they lose that, the uphill battle they have now will become a cliff.
 

juggernautxtr

Honorable
Dec 21, 2013
101
0
10,680
The initial BD screwed AMD pretty good; really the only thing that kept them afloat was ATI/graphics.
I very much doubt that Intel graphics will ever touch AMD.
Iris Pro will be a joke in comparison. A 40% increase only puts it level with AMD's current top-of-the-line APU, and AMD already has ~10% more GPU on chip. I expect probably another ~10% GPU/PPU next round, with improvements on both sides as they have been doing.

The APU will eventually win out, even in the enthusiast market. I still don't think the dGPU will go away entirely, but any dedicated part will be such low production that it will be severely overpriced for even the most enthusiastic buyer. The APU is cheaper to produce, and we are heading for SoC in the extreme.

AMD said it when they started with the APU: "The future is Fusion". We are seeing less and less of dedicated parts. Like I have said, I think the dedicated high-end GPU will only be around for gaming, because that is only going to get that much more demanding.
As for the APU, as they add more GPU to the chips, the faster and more powerful they will become. The wattage compared to dedicated parts will be lower, and more useful in servers of any kind.
The next APUs from AMD will most likely be equivalent to a 270-280X dedicated component, and from the article jdwii posted I would guess that AMD is on track to have Excavator ready to compete with or beat its rivals.
 

mlscrow

Distinguished
Oct 15, 2010
71
0
18,640


Would you please post a link to that article? I've been out of the loop for months and I'm super curious to know everything I can about excavator. Preemptive thanks!
 

juggernautxtr

Honorable
Dec 21, 2013
101
0
10,680


http://www.extremetech.com/computing/177099-secrets-of-steamroller-digging-deep-into-amds-next-gen-core

It gives a lot of clues about AMD's work on the architecture; they are obviously putting some effort into getting it to work. I figured it would take four generations from the BD release before they got it to where it should be.
Edison worked on the light bulb over 1000 times before he got it right.
My favorite quote: "you learn more from failures than successes".
 

thanks for the link, it was a very nice read. agner's association with the article should give it more credibility. if these guys are right, then amd shouldn't design a high perf cpu based on current sr-b cores.

ddr4 shouldn't be added to amd's mainstream dt offerings because it will raise platform cost. ddr4 will be more suitable in servers and other specialized purposes.

process node won't matter much if the design doesn't improve. since brw will be mostly for laptops and lower, intel might take even longer to launch dt brw cpus. it's too bad that amd isn't able to take advantage of intel's longer refresh cycles.

at present, amd seems busy rolling out am1 socket motherboards. i saw models from msi and asrock today.

@juggernautxtr: there's a much lower limit to how many times you can 'fail' (more like underwhelm/underperform) in tech world. there's a reason why arm is looming all over amd.
 

whyso

Distinguished
Jan 15, 2012
688
0
19,060


AMD GCN is a pretty good architecture. However, they need to fix the bandwidth problem or they are going nowhere. Intel has a better memory controller but their architecture isn't as good (Iris Pro 5200 has pretty poor scaling from the HD 4600). However, I'd disagree that Intel will not improve their Gen 8 graphics. Cherry Trail will have 12 EUs (up from 4 in Bay Trail), so clearly there will be an increase at the same power level. Add 14 nm (Intel says 30% before optimization) and it will be a substantial jump.
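
A rough upper bound on that Cherry Trail estimate (my own scaling guess, not Intel guidance):

```python
# EU count sets an upper bound on the GPU gain; Intel's quoted ~30% process
# improvement stacks on top, before bandwidth and power limits cut into it.
bay_trail_eus    = 4
cherry_trail_eus = 12
process_gain     = 1.30   # Intel's "30% before optimization" figure for 14 nm

eu_scaling  = cherry_trail_eus / bay_trail_eus
upper_bound = eu_scaling * process_gain
print(f"EU scaling: {eu_scaling:.1f}x, with process gain: ~{upper_bound:.1f}x upper bound")
# Real-world gains will land well below this, since memory bandwidth and the
# shared power budget don't scale with EU count.
```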

A 280X-class APU would be absolutely worthless. First, because AMD must buy chips from GloFo, they wouldn't be able to use TSMC's superior process if they switched completely to APUs. Secondly, AMD is moving to lower-power chips, and a 280X-class GPU plus a CPU capable of driving it (which Kaveri isn't quite up to) would require a 200-250 W TDP and more expensive cooling. Large chips are expensive and more prone to defects as well.

As far as power goes, a 280X-class APU would require 384-bit GDDR5 to stretch its legs, which would massively drive up motherboard costs and increase idle power. You would also need 6 DIMMs.
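
For a rough sense of the bandwidth gap behind that point, compare a retail 280X-class card's GDDR5 with a dual-channel DDR3 setup (standard retail figures, nothing AMD has published for an APU):

```python
def bandwidth_gb_s(bus_bits: int, transfer_rate_gt_s: float) -> float:
    """Peak theoretical bandwidth = bus width in bytes * transfer rate."""
    return (bus_bits / 8) * transfer_rate_gt_s

r9_280x   = bandwidth_gb_s(384, 6.0)    # 384-bit GDDR5 @ 6 GT/s  -> 288 GB/s
dual_ddr3 = bandwidth_gb_s(128, 2.133)  # dual-channel DDR3-2133  -> ~34 GB/s

print(f"R9 280X-class GDDR5:    {r9_280x:.0f} GB/s")
print(f"Dual-channel DDR3-2133: {dual_ddr3:.1f} GB/s")
print(f"Ratio: ~{r9_280x / dual_ddr3:.0f}x")  # roughly 8x more bandwidth on the card
```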

Servers don't give a crap about the APU/GPU for the most part; it's all CPU, RAM, and I/O for them.

As for dGPUs, there is little reason not to produce one. AMD is already spending all the R&D money to design the GPU architecture for its APUs; it's just the tape-out and production costs, which can likely be recouped.
 
It gives a lot of clues about AMD's work on the architecture; they are obviously putting some effort into getting it to work. I figured it would take four generations from the BD release before they got it to where it should be.
Edison worked on the light bulb over 1000 times before he got it right.
My favorite quote: "you learn more from failures than successes".

About a half dozen others had a working lightbulb before Edison. He just had the sense to patent it after the fact.
 