AMD CPU speculation... and expert conjecture

Page 582
Status
Not open for further replies.

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Fresh? New? "AMD’s FM3+ Socket in 2016"? "New Post Bulldozer/SMT Architecture in 2016"?

All that has been known for months. The only new part of their fresh article is their "No DDR4 Support in Desktop Processors till 2016." But I recall reading something similar before:

I guess that DDR4 support is really aimed at the server version of the SoC
(codename Toronto) and that desktop will use DDR3, because DDR4 2400 MHz will
only bring improvements in power consumption (the bandwidth is the same).

Ahhh, I recall where I read that: in my own draft about Excavator :pt1cable:



12 CPU cores? Yeah, sure, in fantasy land. But those are the same guys who reported the rumors of the nonsensical Baeca chip: another imagined 12-core @ 6 GHz chip made on superSOI plus plus triple plus

http://wccftech.com/rumor-amd-phenom-iv-x12-170-baeca-25nm-cpu-leaked-features-12-cores-6-ghz-core-clock-am4-socket-compatbility/

This is what someone told me once:

Be careful about WCCFtech.... they do not verify anything they publish and will create their own details out of thin air because they think it would be the best strategy.
 


Hahaha, no it's not new, exciting or fresh, it's VERY old technology that dates back to the 80's and early 90's.

"Cloud computing" has been the absolute biggest bullsh!t game I've ever seen. They have convinced upper management types across the world that concepts and idea's implemented two decades ago was "new" and needed "moar money".

Back in the 80's you used to have just a terminal, and you sent your workload to the "mainframe", which existed "somewhere", to process your request. Your data was stored "somewhere else" and was managed by this mythical operator you rarely saw. The cost of computing was so high that having a computer at every desk was unfeasible, so instead we used dumb terminals with powerful central mainframes doing all the work. The mainframes did your mail, stored your files, ran your applications and processed your company's reports. Nothing was done locally.

Then ICs got cheap enough that small personal computing became economically viable, and we started shifting more and more work to the local sites until eventually everything ran locally. The costs of managing those local systems were eventually realized, and they became quite large. Now the pendulum has swung in the other direction: it's cheaper to centrally manage all your IT needs from a remote location, and thus "the cloud" was "invented". An old idea with a fresh coat of paint and a new sticker.

Did anyone here ever use Hotmail or Yahoo webmail? That was a "communications solution" loaded on a "cloud", aka clustered application servers running in a datacenter accessed via common HTTP(S). Anyone ever store a file on something like an "X drive" or metashare? That's a "cloud based data management solution". "Cloud" solutions are just clustered application servers running in a datacenter; the only difference from their 80's and 90's incarnations is that middle management is purposely kept ignorant of the technical details. They have no idea where their data is, the condition it's in, who has access, or the SOPs used to handle and maintain it, and thus they can claim innocence should something very bad happen.
 


The integration is new, and the way we interact with data is still developing. The idea of client-server isn't new, and I know that. I also know that what is called cloud today can be said to be just what we had in the past. I am just saying some of the ways it is applied are quite new. Stuff like what is being done in an MMO just wasn't possible even 10 years ago. I see further integration to come.

People seem to hate the word cloud just because it's a buzzword, but it is now used to describe centralized computing, and I don't see a reason not to use the word.
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360


1. So many people complain that with APUs, they would "have to upgrade my GPU just to upgrade my CPU!!" ...And this is a bad thing? What a bunch of negative nancies. How cool is it that you can upgrade both your CPU and GPU all in one go? See the glass as half full, not half empty.

2. Integration is the future and that's been the course of things since the IBM 5150. The way I see it from here is:

CPU ---> APU <---GPU (Today's integration)
Memory ---> ? <--- Storage (Near future's integration)

And while we're throwing around wild speculation, I would also expect to see chassis and power supplies become unified in the future given the industry's new drive for ultra low power and high efficiency.
 

wh3resmycar

Distinguished


your sentiment only works if we're all filthy rich... the cost of a substandard APU (a10-7850k) already makes no sense. imagine the cost if you put a flagship CPU & GPU uarch into one chip; then every year you'd need to dump your entire system to move to a new one. each CPU I've owned lasted me at least 3 years, while I progressed to a different GPU every year.

this thread is back again to where someone misinterprets "marketing slides" as "cold hard facts".

i don't see consumer grade APUs devouring unreal 4 @ 1080p @ respectable framerates for the next 2 years or so.
 


The integration isn't new; we were doing the exact same thing a decade ago. The only difference is that all the nuts and bolts are now hidden from you and run by cleverly coded scripts and APIs. We had MMOs back then, but they were called MUDs and relied on text graphics. I ran a BBS as a hobby that had an interactive MUD running with multiple players. Even now hobbyists still tinker with them, using telnet to emulate the serial port style access methods.
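For anyone who never saw one, the telnet-era setup described above can be sketched in a few lines. This is a hypothetical minimal multi-player text server (the room text, port, and commands are invented for illustration), not any particular MUD or BBS codebase:

```python
import socket
import threading

def handle_player(conn: socket.socket) -> None:
    """Run a trivial command loop for one telnet-style connection."""
    with conn:
        conn.sendall(b"Welcome to the dungeon. Commands: look, quit\r\n")
        while True:
            data = conn.recv(256)
            if not data:          # client disconnected
                break
            cmd = data.strip().lower()
            if cmd == b"look":
                conn.sendall(b"You are in a dark room lit by a single torch.\r\n")
            elif cmd == b"quit":
                conn.sendall(b"Goodbye.\r\n")
                break
            else:
                conn.sendall(b"Unknown command.\r\n")

def serve(host: str = "127.0.0.1", port: int = 4000) -> None:
    """Accept players and give each one a thread -- the 'multiplayer' part."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle_player, args=(conn,), daemon=True).start()
```

Point any telnet client at the port and you have the same interaction model a late-80's door game offered, which is exactly the point: the access method changed, the concept didn't.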

Seriously, we are doing the exact same things now that we did then. Newer software and more automated platforms, but all the concepts are the exact same. You need a service done and instead of doing it on a local server with locally managed credentials you run it on a remote server with remotely managed credentials. Instead of a heavy client (software on your local workstation) you use a thin client accessed via web interface (HTTPS). The code is still being executed, the data is still being processed, stored and managed. There is nothing "new" about any of it.

I don't like it when people think it's something that's "new and exciting" or some special piece of technology. I once had someone try to convince me that his "cloud servers" were not "application servers" because they ran "Widgets/Plugins" accessible over the web vs "web applications" accessible over the web. I opened up his software and pointed out that his software applications were just J2EE apps (war/ear/jar) that were deployed into his servers.
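The war/ear/jar point is easy to verify yourself: those deployables are ordinary zip archives distinguished mainly by marker entries in their layout. A rough sketch (the heuristics here are simplified for illustration; real application servers inspect more than this):

```python
import zipfile

def classify_j2ee_archive(path: str) -> str:
    """Guess whether a deployable is a war, ear, or plain jar.

    All three are plain zip files; only the internal layout differs:
    wars carry WEB-INF/, ears carry META-INF/application.xml,
    and jars normally carry only META-INF/MANIFEST.MF.
    """
    with zipfile.ZipFile(path) as zf:
        names = set(zf.namelist())
    if any(n.startswith("WEB-INF/") for n in names):
        return "war (web application)"
    if "META-INF/application.xml" in names:
        return "ear (enterprise application)"
    if "META-INF/MANIFEST.MF" in names:
        return "jar (library or application)"
    return "unknown zip archive"
```

Running something like this over a vendor's "cloud widgets" makes the argument concrete: if the answer comes back "war", you are looking at a web application server, whatever the marketing calls it.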
 


Sure it was; remember Ultima Online back in the late 90's? The tech is ancient.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


1. Yeah. The same people complaining about integrated graphics are using an integrated FPU, an integrated IMC, and integrated multi-cores without any complaint...

Linus said it succinctly: discrete graphics are a historical artifact.

2. Memory on package, already appearing next year, is the first step towards memory on die.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Your post couldn't be more divorced from reality!

1. I am discussing future APUs, more concretely APUs of the year 2020, with a ~25x better efficiency than current APUs.

2. I never said or insinuated that Intel and AMD are just going to abandon the gaming PC. You wrote that, not I.

3. Things happened in exactly the inverse order to how you tell them. I first studied the topic, did the math, and posted my own conclusions about exascale here many months ago. Then recently AMD, Nvidia, IBM, and Intel gave talks at a professional forum and confirmed what I said: they will not use dGPUs for their respective exascale projects.

4. The claim that ARM will win is based on an analysis of the ISA and of the licensing ecosystem. An award-winning research work presented during the SC13 (Supercomputing 13) conference by experts in HPC confirmed my thoughts about ARM replacing x86. Later, an AMD server expert confirmed my thoughts about ARM replacing x86 in servers. Again you are telling things in the wrong order.

<< FUTURE >>

I can predict here that ACMP will be the dominant architectural paradigm from phones to supercomputers in the year 2020, with a non-monolithic, modular, scalable design. I can also predict that SSDs will be integrated on the SoC by then. I could even predict the size of those integrated SSDs and the technology that will be used.

I can also predict which architecture will replace ACMP after the year 2020. I dubbed it RCMP, and I expect it to be ready before the year 2025. I am now trying to estimate the tile topologies and the width of the architecture. Initially I considered a 16x2w configuration, but now I am considering a 12x2w configuration due to overhead limits from fusing the lanes inside each tile.

If tomorrow, or next month, or next year you see some slide reproduced by WCCFTECH saying more or less what I am saying now, please don't pretend that I "see marketing slides" or "take things out of context" and say "HOORAY!", because that will be simply wrong.
 
i was reading this
http://www.techspot.com/article/851-virtual-desktop-gpu-acceleration/
when i read palladin's posts on "cloud". :)

edit2:
According to market rumours, the yield of chips made using TSMC’s 20nm process technology is pretty low, which is a problem for fabless designers of semiconductors as nowadays TSMC charges them for the whole wafer, not individual chips (i.e., they pay for both functional and faulty chips).
http://www.xbitlabs.com/news/graphics/display/20140722203827_AMD_Vows_to_Introduce_20nm_Products_Next_Year.html
reminds me of glofo.


i know carrizo will be on bga packaging, but i don't think it'll be exclusively available for mobile pcs. i think that carrizo (65-35w parts) will show up in aio desktops and nettops/small desktops like the sapphire edge, zotac zbox, gigabyte brix and maybe ecs' liva pcs. it'd be even better if the higher tdp socs were socketed.
the unknown here is glofo's performance. i hope they don't screw up like they did with kaveri.
 


I wonder if Carrizo being bga only stems from a misunderstanding regarding memory. AMD have repeatedly used dual memory controllers, so it's quite plausible that Carrizo will include a combined DDR3/DDR4 controller. The result is that they can offer a DDR3 version to fit the existing FM2+ socket for desktop (where a dGPU is common), and a DDR4 bga package for mobile (where the increased power efficiency and better memory bandwidth compared to low power DDR3 SO-DIMMs will make a big improvement).
 

carrizo won't be bga only. the ulv parts will be bga. i don't know if desktop apus with exc cores will be called carrizo after reading 8350rock's claim. i was hoping some of those ulv parts would be for lga sockets too (by my newbie knowledge, assuming moar pin outs), thus interchangeable and maybe upgradable. current dt apus use pga packaging afaik.
ddr4 isn't gonna improve memory bw per se. it's just that memory vendors are making higher speed ddr4 modules available at launch, since lower speed versions don't offer a big performance improvement over ddr3 beyond lower power use. the rumored 2M/4c part with 2MB L2 cache in the latest leak was shown using ddr3 so-dimms in the diagram.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


There are no yield problems at Glofo. Moreover, Carrizo will be made on 28nm at Glofo.

What Anton Shilov doesn't mention in his ruminations is that TSMC's problems are related to a very aggressive roadmap. Intel is still at 22nm (which is ~26nm by TSMC standards), Glofo is still at 28nm, whereas TSMC has 20nm ready. TSMC is already selling 20nm chips to Apple.

Evidently costs are still too high for large chips like those used in GPUs, but this will change next year when production matures.

As a related note, a friend of mine mentioned days ago that AMD will do K12 on TSMC 16nm. He gives a high probability to this rumor.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Toronto --> DDR4 enabled
Carrizo --> DDR3 enabled

There is no bandwidth improvement: both Toronto and Carrizo support up to 2400MHz DDR.
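The arithmetic behind "the bandwidth is the same" is simple: peak DRAM bandwidth depends only on the transfer rate and bus width, not on the DDR generation. A quick sketch, assuming the usual dual-channel, 64-bit-per-channel configuration:

```python
def peak_bandwidth_gbs(mt_per_sec: float, bus_width_bits: int = 64,
                       channels: int = 2) -> float:
    """Peak theoretical bandwidth in GB/s.

    mt_per_sec: data rate in megatransfers per second (e.g. 2400 for DDR-2400).
    Each transfer moves bus_width_bits / 8 bytes per channel.
    """
    return mt_per_sec * 1e6 * (bus_width_bits // 8) * channels / 1e9

ddr3_2400 = peak_bandwidth_gbs(2400)  # dual-channel DDR3-2400: 38.4 GB/s
ddr4_2400 = peak_bandwidth_gbs(2400)  # dual-channel DDR4-2400: 38.4 GB/s
# Same data rate, same bus width, same peak bandwidth -- DDR4's gains at
# equal speed are in power (lower voltage), not throughput.
```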
 
http://www.amd.com/en-us/products/processors/notebook-tablet/apus#

new beema apu specs from amd
A8-6410, Radeon™ R5 Graphics, 4 cores, 2.4 GHz (turbo!)/2.0 GHz, igpu - 800 MHz, L2 - 2 MB, DDR3L-1866, tdp - 15 W

this is the lowest tdp kaveri i've seen so far:
A6-7000, Radeon™ R4 Graphics, 5 compute cores (2 CPU + 3 GPU), turbo -3.0 GHz/ base -2.2 GHz, igpu- 533 MHz, L2- 1 MB, DDR3-1600, tdp- 17 W

this is where ulv carrizo seems to be aiming at.

edit: the "12 core" teaser might point to an arm a57 based hsa-compliant, consumer soc with 8 cpu cores and 4 gcn cores, aimed at laptops. or a 4+8 configuration. iirc amd said hierofalcon socs will come out in 2H. hee hee :pt1cable:
edit3: however, i'd personally like to see a puma based soc with 8 cores and gcn 1.1 igpu. hopefully.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Unlike others, I made a set of precise predictions about Kaveri using concrete benchmarks, and my predictions were confirmed to within single-digit percent error (on average):

[images: x264-kaveri-comp.png, Himeno-Benchmark-kaveri-comp.png]


Unsurprisingly, Kaveri did perform as I expected:

[images: x264-kaveri-final.png, Himeno-Benchmark-kaveri-final.png, John-The-Ripper-kaveri-final.png]


You can find the rest of the benchmarks on the page you know. Moreover, I predicted a PassMark score of about 6000 points, and Kaveri was close to that result:

http://www.passmark.com/baselines/V8/display.php?id=19024964763
http://www.passmark.com/baselines/V8/display.php?id=19096151266
http://www.passmark.com/baselines/V8/display.php?id=19198305737
http://www.passmark.com/baselines/V8/display.php?id=26592908459
http://www.passmark.com/baselines/V8/display.php?id=26593413065

Take the average value and compare it to my prediction. The error is again within the single-digit percent.
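Anyone who wants to redo the check can average the baseline scores from the links above and compute the percent error against the ~6000-point prediction. The scores below are hypothetical placeholders, not the real values behind those links:

```python
def percent_error(predicted: float, observed_scores: list[float]) -> float:
    """Percent error of a prediction against the mean of observed scores."""
    mean = sum(observed_scores) / len(observed_scores)
    return abs(predicted - mean) / mean * 100

# Placeholder values for illustration only -- substitute the five PassMark
# baseline scores from the links above before drawing any conclusion.
scores = [5700.0, 5850.0, 6100.0, 5950.0, 6200.0]
err = percent_error(6000.0, scores)  # single-digit percent for these placeholders
```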
 

colinp

Honorable
Jun 27, 2012
217
0
10,680
Give it up, Juan. If the 7850K had been competitive with the 2500K it would have been universally praised. I would have bought one on day one. It is, in actual fact, a complete pile of meh, while the 2500K is still a great chip.
 

jdwii

Splendid
In the benchmarks he gives, one is 12.5% behind a 2500K, another is 45% behind, and in John the Ripper it's 25% faster. So yes, on average he's about 10.7% off his expectations, but that's based on those 3 benchmarks alone out of everything else.
Now if we look here, the difference is more like 35-45% slower (BIG).

So no, Juan, you were off by 30-40% on average based on the majority of the CPU benchmarks. Not only that, but an A10-6800K is faster most of the time in CPU benchmarks, as we can tell here.

Then we can start looking at other review sites, such as Guru3d and then Bit-tech.
This processor is 30-40% slower compared to an i5; sometimes it's only 12% slower, other times 20%. The majority is what matters, not a few benchmarks, and I do not care about PassMark; it's as worthless as any other synthetic benchmark in my opinion.
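The "on average" arithmetic being done with those three numbers works out like this (sign convention: negative means the 7850K trails the 2500K):

```python
# Relative 7850K performance vs the 2500K in the three benchmarks cited:
# x264 (-12.5%), Himeno (-45%), John the Ripper (+25%).
deltas = [-12.5, -45.0, 25.0]
average_delta = sum(deltas) / len(deltas)  # about -10.8%, near the cited 10.7%
```

Which is exactly the complaint: a mean over three points hides a 70-point spread between the best and worst case.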
 

jdwii

Splendid


By using only a few benchmarks, he is committing a fallacy called hasty generalization, more specifically the argument from small numbers (the statistics of small numbers).
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Some controversy over this part, because it says Beema cores but then also R5 graphics, which means 4 CUs, double that of the other Beema devices.

Also, that AMD page says the A6 parts have 5 cores. It should be 6. I guess that's what happens after you fire all the marketing staff. ;)

"Quad/5 compute cores*** (3 CPU + 2 GPU)"

 