AMD CPU speculation... and expert conjecture



Actually, I was wanting to run it on a Sun Blade 2000 I have here. It's 2 x UltraSPARC IIIi CPUs, I think 1.4GHz each, and 8GB of DDR2 memory. The GPU is an XVR-1200, which is two WildCat IV GPUs on the same card with about 512MB of graphics memory. The OS is Solaris 10 and there is Java and JOGL on the system. That's why I was curious, as technically it ~should~ run as long as there are no x86- or OS-specific calls present.

If I can get it to work on my home box I might be able to snag some time on one of our test systems at work. I'm curious what numbers a SPARC T4-2 would put out: 16 cores, 128 threads, and four channels of DDR3-1333 FB-RDRAM fitted with 256GB of memory.
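A quick way to sanity-check that before a full run is to see what the JVM on the Blade reports and whether the JOGL natives even load. This is only a minimal sketch: the "jogl" name passed to loadLibrary is a placeholder, not necessarily what the JOGL build on that box actually ships.

// Portability sanity check for a pure Java + JOGL benchmark on Solaris/SPARC.
public class PlatformCheck {
    public static void main(String[] args) {
        System.out.println("os.name      = " + System.getProperty("os.name"));      // e.g. "SunOS"
        System.out.println("os.arch      = " + System.getProperty("os.arch"));      // e.g. "sparcv9"
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("cores visible to the JVM = "
                + Runtime.getRuntime().availableProcessors());
        // If the benchmark bundles native libraries, this is where a SPARC port usually
        // breaks: an UnsatisfiedLinkError on load means the natives are x86-only.
        try {
            System.loadLibrary("jogl"); // illustrative name only; real JOGL ships several natives
            System.out.println("JOGL native library loaded");
        } catch (UnsatisfiedLinkError e) {
            System.out.println("Native library missing for this platform: " + e.getMessage());
        }
    }
}

If os.arch comes back sparcv9 and nothing but the JVM's own natives are needed, the x86/OS-specific-call worry mostly goes away.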
 


Yes, you cherry-picked it and used it to spam your nonsense that Kaveri will be 37% slower than Bulldozer. Funny that now you change your nonsense to "it depends on settings". But again you continue posting nonsense and FUD.

I gave you one benchmark showing how a Richland APU offers the same performance as an FX-4350 at high resolutions (2560x1600) and max quality settings, but you ignored it and continue with your nonsense that an APU is only valid for playing "at 1200x800 and low to medium quality settings".

The same site that you use has more benchmarks showing how an APU is as good as an FX-4350 at high resolutions (2560x1600) and max quality settings:

http://www.techspot.com/review/681-amd-a10-6800k-a4-4000/page7.html

You continue ignoring that Kaveri has a bigger and faster L1 cache and a faster (20% more) L2 cache than any FX chip. The lack of L3 will be mitigated by the improvements in L1/L2/IMC compared to the Trinity/Richland APUs. It is also funny that you mention games, when next-gen games will be designed without L3 cache in mind (how much L3 cache does Jaguar have?).
 
The future for AMD (also for Nvidia, Intel...) is APU/SoC, low-frequency designs, and heterogeneity, from phones to supercomputers: phones, tablets, laptops, consoles, desktops, servers, supercomputers.

Not only will plain CPUs be abandoned, but APU + dGPU is only a transient configuration. In the long run the dGPU will be exhibited in museums.

Nvidia, with 80% of the HPC market thanks to high-end dGPUs like the K20X, is developing a new design that doesn't use any dGPU. So if dGPUs are no longer needed for $100 million supercomputers, who in the hell still believes that a cheap desktop dGPU (a GTX 780 is cheap compared to an $8000 K20X) will be a requirement for future desktop gaming?

Of course this is not something happening tomorrow, nor even next year.
 
Oohh wow, you found a benchmark that's 100% GPU bound, as it's 53 fps from the 4350 all the way up to 54 on the i5-3570K. Congratulations, you think like marketing does.

I would like to know what they changed in order to get those results, though.

[benchmark chart: CPU_01.png]


The i5 went from 65 fps to 54.
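To spell out the frame-time math behind "GPU bound": a frame takes roughly max(CPU time, GPU time), so once the GPU time dominates, every CPU posts about the same fps. Here's a rough sketch using only the 53/54/65 fps figures quoted above; the max() model is a simplification.

// Frame-time arithmetic for the numbers quoted in this thread.
public class FrameTime {
    static double fpsToMs(double fps) { return 1000.0 / fps; }

    public static void main(String[] args) {
        double i5LowRes  = fpsToMs(65); // ~15.4 ms: where the CPU/engine ceiling sits at lower settings
        double i5HighRes = fpsToMs(54); // ~18.5 ms at 2560x1600, max quality
        double fxHighRes = fpsToMs(53); // ~18.9 ms at 2560x1600, max quality
        System.out.printf("i5 low-res frame time:  %.1f ms%n", i5LowRes);
        System.out.printf("i5 high-res frame time: %.1f ms%n", i5HighRes);
        System.out.printf("FX high-res frame time: %.1f ms%n", fxHighRes);
        // ~0.4 ms separates the FX-4350 and the i5 at max settings, versus ~3 ms of
        // headroom the i5 shows at lower settings: the GPU is the limiter in that test.
    }
}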

The only nonsense around here is your blog site, adding 30% to every benchmark around and claiming it's "RL results". How often does any CPU, Intel or AMD, improve by a set amount across the board?

I can't wait for some actual numbers to settle who is the liar here. I wonder if there is a reason Intel doesn't have a single CPU without L3 cache other than Atom.

The bigger L1+L2 cache is where Kaveri gets its 20% IPC improvement from, so who is ignoring it? It's not going to magically turn itself into an L3 equivalent.

As for your argument about next-gen games, why are the APUs quad-core only and the consoles eight-core? That kind of negates your argument that all future games are based on consoles.
 
The future for AMD (also for Nvidia, Intel...) is APU/SoC, low-frequency designs, and heterogeneity, from phones to supercomputers.

Not only will plain CPUs be abandoned, but APU + dGPU is only a transient configuration. In the long run the dGPU will be exhibited in museums.

Nvidia, with 80% of the HPC market thanks to high-end dGPUs ($8000), is developing a new design that doesn't use any dGPU. So if dGPUs are no longer needed for $100 million supercomputers, who in the hell believes that a cheap dGPU (cheap compared to a K20X or a FirePro) will be a requirement for desktop gaming?

Of course this is not something happening tomorrow, nor even next year.

My professional, qualified expert opinion is not just no, but hell no. Either stop taking what you're taking, or pass it around for us to share.

Now here's why.

What you're talking about is a concept known as miniaturization: the process by which electronics get smaller, more compact, and more integrated into our lives. Common things you do today will be possible to do on a smaller node tomorrow.

There are two barriers to that which create a point beyond which devices can no longer get smaller. The first barrier is human interaction: no matter how small the chip, we as humans still need to push stuff and read stuff. The smallest form factor is thus determined by the intention of the device; the more complex the interaction needed, the larger the form factor. That is why Office Automation will never be done on smartphones or tablets: their form factor is too small to comfortably work with.

You may not know this, but over here in Korea, long before Apple made the iPhone, they were experimenting with ultra phones that strapped to your wrist. These things were maybe two and a half inches long, one inch wide, and maybe half an inch thick. Really tiny, with batteries that lasted days. They were popular for about six months, but generally people hated using them as they were simply too small to interact with. The Koreans had found the limit to the mobile communications form factor, and it's much larger than what Star Trek has you thinking. That interaction is also why the form factor of smartphones has been growing, not shrinking, due to the increased requirement for complex interaction.

The second barrier is a combination of physics hitting economics. As you miniaturize, you find new and creative uses for more of what you had. Halving the size of your CPU does nothing if you end up needing two to four times as much computing power. Intel making a Pentium II from the 90's on a modern process node would produce a CPU of incredibly small size and power usage. Why then aren't we all using these incredibly small and miniaturized Pentium IIs? Because we created the C2D, then the quad-core, hexa-core and now octo-core CPU.

Memory experienced this also. 128MB was an insane amount of memory in the 90's for a PC to have; only servers needed 128MB of memory. As prices of memory fell as a result of miniaturization and increasing density, we found new and creative uses for even more memory: first 256MB, then 512MB, then 1GB, and now 8GB or more. The exact same thing happened to HDDs and PSUs. Seriously, a 400W PSU was considered server class at one time.

We aren't running around with miniature Pentium IIs and 256MB of miniature memory, writing office reports on a screen two inches wide.

So more miniaturization doesn't mean smaller systems; it means more powerful systems of the same size.

Now before you snap back a quick response, take a moment and think about who's making this statement. This isn't coming from some IT geek talking at the water cooler, but from someone who gets paid very good money to design and engineer IT solutions for customers of a multibillion dollar corporation.
 


Physics is working against AMD. TDP was always going to be a limiting factor when you are relying on ramping up clocks to gain performance. And now that GloFo is basically putting a 65W TDP limit on its 20nm node, AMD has a MAJOR design headache to contend with. I'd argue, much like Intel having to abandon the Pentium 4 and go back to the Pentium 3, that AMD needs to abandon the BD arch, go back to the Phenom II, and scale it upwards. Getting Phenom II to 65W will be a hell of a lot simpler than trying to do it with BD.

Either that, or AMD pulls out of the high end CPU segment, which seems to be the way they are currently leaning.
 
^^ everyone chant JA GU AR!! JA GU AR!!
with 3 alus and 256bit fpu, JA GU AR!!
8 cores @3.0+ ghz under 80w on 20nm tsmc bulk node (for ref. - 28nm, 4c kabini has 20-25w tdp for total soc), JA GU AR!!
.....
or puma or lynx or cougar :3

being fab-limited might force amd to adopt a high-ipc, relatively lower clocked, dense cpu design. i think bd is the opposite. then again, i thought the xbone(R) soc had 32MB edram for months....
 


If you are talking about Project Denver, my take was that the goal is more about eliminating the third-party CPU than the dGPU. In theory the PCIe controller could be used to run dedicated Teslas rather than linking a cluster of Nvidia (ARM/Maxwell) APUs together.
 


Valve, EA, MS, etc. won't let the dGPU die. We still aren't even remotely close to having the sort of power in a rig to do real-time raytracing in games. You might be thinking dGPU power has reached a dead end and cards are too strong for most games, but that's not the case.

PC gaming is a huge market and it's still growing.

And let's not forget about HSA. A GPU of Kaveri's caliber is only good enough to come close to a higher-end traditional CPU.

The best example of why "HSA will be APU-only in the future" is absolutely idiotic comes from AMD itself. Remember APU13, when we saw the JPEG decoding benchmark and it was twice as fast with HSA?

Guess what else would have been twice as fast as a 2m/4c APU, and it wouldn't have required HSA? A 4m/8c SR dCPU....

And guess what: that little AMD APU, with a special performance case AMD felt they had to show off, still isn't going to be competitive with a dCPU. Heck, it might not even be overly competitive with an FX-8350. Certainly not an overclocked one. And that's one specific task where the software was tailored to the hardware.
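To put rough numbers on that comparison: the 1.0 baseline throughput below is purely hypothetical, and only the ~2x factors come from the HSA demo and the module counts being discussed.

// Back-of-the-envelope throughput comparison: 2x from HSA vs 2x from doubling modules.
public class HsaVsMoreCores {
    public static void main(String[] args) {
        double apuBaseline = 1.0;             // hypothetical 2m/4c APU, no HSA
        double apuWithHsa  = apuBaseline * 2; // the ~2x JPEG-decode demo figure
        double dcpu4m8c    = apuBaseline * 2; // 4m/8c chip, assuming the workload scales with cores
        System.out.println("2m/4c APU + HSA:    " + apuWithHsa);
        System.out.println("4m/8c dCPU, no HSA: " + dcpu4m8c);
        // Same ballpark result, but one path needs HSA-tailored software and the
        // other just runs existing multithreaded code, which is the point above.
    }
}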

And you're not even taking into account that Intel is planning on giving their entire line-up a core count bump in a year or two.

If AMD goes APU/HSA only, it's not going to look attractive at all compared to a 6-core Intel at $300.

And as I mentioned earlier, the problem that HSA is only being used to catch up to Intel and dCPUs instead of blowing them out of the water means that HSA is DOA in the professional market. Meaning that if you want to make software for an HSA system, you're only going to be able to target low-end and mobile devices. That's not what software developers who want to solve problems like rendering, compiling, transcoding, and heavy calculations in HPC want.

What you are suggesting is basically as if Nvidia stopped high-end Quadro and Tesla sales, converted them to ARM APUs, and then went "hey look, this APU is close to how a full-size Quadro or Tesla would run if you give it specialized software! But you can forget all your existing CUDA software and all the programs you used to run! They'll be a lot slower now!"

It just flat out doesn't make a whole lot of sense to me at all.

I'm still leaning towards a lack of SOI, personally.

http://www.advancedsubstratenews.com/2013/07/globalfoundries-on-cost-vs-performance-for-fd-soi-bulk-and-finfet/

Take a look: 28nm SOI was a dud that didn't do anything, and 20nm FD-SOI is going to offer 25% better performance. However, notice how FD-SOI is going to end up cheaper than bulk.

And don't forget, there are rumors swirling that Intel is going to switch to SOI because bulk doesn't work too well below 14nm. I would find it kind of strange for GloFo to go bulk-only.
 


I don't see supercomputing moving away from top-end dedicated components. Even if the dGPU becomes obsolete elsewhere, there will always be the super-niche market of users who need that power for specialized use. But I speculate that by then, dGPUs will be way out of the price range for any non-commercial/enterprise users.

I love me a good graphics card but I'm aware that in the future we will be telling our grandchildren stories of the days when people would pay hundreds of dollars for what will then be dinosaur equipment.
 

first off - the a10 7850k has an integrated gpu. dgpu means discrete gpu (off-die or off-package), usually used to refer to discrete gfx cards like the radeon hd 7770, 7850, geforce gtx 660 or mobile dgpus like the 680M, 640M etc. discrete gpus usually have their own vram unlike igpus which share system ram.

simple answer to your question: none.
 
dGPUs aren't going anywhere. Look at the focus for new graphics options: post-processing shader effects, new AA modes. Nothing really NEW as far as features go. Why? Because the stuff that we don't do (anything dealing with reflection/refraction of light, realistic fluid effects, etc.) is REALLY expensive to do. We're nearing the end of what we can do graphically.

Ray casting is the next obvious step, but we're still a generation or two away from having the graphical power to do it. Probably at the end of the current console generation (the new one) we'll see the move. We'll need dGPUs for that. But short term, I won't be shocked if APUs close the gap on dGPUs.
 


Here's a good white paper on the challenges at 20nm.

http://www.cadence.com/rl/Resources/white_papers/custom_20nm_wp.pdf

They're estimating a 20nm node to cost $7B-$10B. Who's got that kind of spare cash besides Apple?

GF has some deep pockets with their oil-rich backers, but they're not crazy. I've heard IBM is even considering dumping their fabs because the costs are just getting too high. The only good news is that at the 14nm node the double patterning can be reused, so the step won't be as expensive as the 28nm-to-20nm shift.
 


Honestly, I don't care if you mentioned the term "Roadmap" or not. Nor do I care about your marvelous predictions from earlier, which I've never commented upon.

For your convenience, I quote my own post below. If you do have any links that point directly to AMD telling about their future desktop products, PLEASE share. I prefer information on such matters to be referenced directly from AMD's official channels. If it truly is an official AMD roadmap, why wouldn't they show it on their own site?

Fun fact about the AMD desktop roadmap: I can find the server roadmap for 2014 on http://www.amd.com/us/press-releases/Pages/amd-unveils-... (yeah... that IS AMD's OFFICIAL homepage). But the desktop roadmap for 2014... Nope... Not there... And mark you the detail level about when and what on the server roadmap... The "current" desktop roadmap doesn't have that kind of detail... I'm just sayin'...

EDIT:
It is in fact very questionable whether the "published" desktop roadmap has anything to do with AMD at all. Lots of their slides are available through slideshare.net, and no AMD Desktop Roadmap 2014 is to be found here either:

http://www.slideshare.net/AMD/presentations

Nor under documents:

http://www.slideshare.net/AMD/documents

So... I think I'll just wait a little longer and keep running my Phenom II 1100T before I jump to conclusions on whether or not AMD will release a Steamroller-based FX CPU and buy an FX-8350... The next couple of months or so could be interesting...
 


No! A CPU/APU cannot have a dGPU - a dGPU is a separate chip on the other side of a PCIe bus.
 


Depends on the type of supercomputer. The IBM Blue Gene is a supercomputer of networked APUs (in AMD terms). The VMX unit in those processors is geared towards graphics. The ratio of traditional INT/FP/GPU units is certainly changing though.

Supercomputers have varying roles. The Blue Gene was geared towards artificial intelligence. It wouldn't be as good for simulating black holes or big bang theory stuff.
 


How is it DOA if Intel's Broadwell has been delayed to Q3 2014?
 


Do you have a Kaveri APU? If not, you have no idea what the future performance will be besides what AMD marketing slides tell you.
 


Again, since you don't seem to understand what I'm saying: you cannot use turbo as a measure, since it's not always consistent. Also, I did mention the wrong frequency, off by 100MHz; that was not done on purpose. It's still clocked 10% lower compared to the A10-6800K, and yes, that will lower performance, which is why I'm sticking to the 15% number on average.
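For what it's worth, a simple perf = IPC x clock sketch shows how those two deltas combine. The multiplicative model is an assumption, and the 20% IPC and ~10% clock figures are just the numbers already floating around this thread, not settled facts.

// How an IPC gain and a clock deficit combine under a simple multiplicative model.
public class ClockVsIpc {
    public static void main(String[] args) {
        double ipcGain    = 1.20; // the 20% IPC figure mentioned in this thread
        double clockRatio = 0.90; // ~10% lower clock vs the A10-6800K
        double net        = ipcGain * clockRatio;
        System.out.printf("Net per-thread change: %+.0f%%%n", (net - 1.0) * 100);
        // Under this model, roughly +8%: an IPC gain on paper shrinks quickly once the
        // clock drops, which is why any single average figure is only a rough guide.
    }
}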
 

the a10 7850k does not have a radeon hd 7850 gpu inside it. the a10's model number stems from the sequential apu model numbering of the past - trinity a10 had the 5800k, richland the 6800k, kaveri the 7850k. those model numbers did not, in any way, mean that the 5800k had a radeon 5800 (discrete-class) gpu inside it; same for richland and kaveri. 7850 is just a model number, specs and real world performance are much more important.

steamroller is not d.o.a., and you don't seem to have anything to back your statement up. d.o.a. means dead on arrival, like microsoft kin (in my opinion).

amd does have plans for releasing more powerful products in the future. how powerful, is dependent on various factors.
 
I feel that this thread is barely worth commenting on... It used to be exciting and factual. Now it's just drama and insults. And worst of all, it's against other fans. We're arguing over speculation here... not even actual benchmarks... I have no idea what is going on with you guys, but I'm not taking part in it. At least Hafijur picked a team and sided with it. lol.
 