AMD CPU speculation... and expert conjecture

Status
Not open for further replies.

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360


Which is a shame, really, because by not adhering to an open standard naming convention they will only end up confusing consumers. Just like how they don't want in on HSA, but would rather push their own proprietary InstantAccess. My respect to all those out there who work as salespersons in IT.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


I've posted that long ago, more than once, elsewhere, even before any specs of Haswell or Richland were known. And I'll do it again for Broadwell against Excavator.

How can I honestly make such predictions when there isn't a single real clue about any spec?

It's easy: "PERFORMANCE IS IN THE SOFTWARE", not the hardware, and tests are run one at a time on a clean system where nothing else is running, WHICH IS NOT REPRESENTATIVE OF HOW PEOPLE USE THEIR SYSTEMS AT ALL.

Try an ARM system and it doesn't feel like less than half the performance of an APU, for example. The software is quite different, much less bloated, and it's as snappy as on AMD or Intel. (Understand now why performance is in the software?)

Intel tweaks their uarches for this; most if not all benchmarks are done or compiled with Intel's tools and compilers, so there is some truth in that "bold" line of yours: it shows that Steamroller might need more than 30% to catch the i7 on those benchmark suites. FULL STOP.

Current software, on systems full of things running in the background, is quite different from benchmarks; for one, none of it uses ICC. It may be that Piledriver already wins clearly on some real software, given the superior multitasking ability AMD processors have always presented, and given that the Intel uarch has shared issue ports for FP and integer execution, which looks like a terrible bottleneck when several processes or jobs run at the same time. That won't show in benchmarks of how users actually use their systems, because tests are done one piece at a time, and one piece alone... which is worse than apples and oranges; it's quite unrepresentative of anything. It is only an indication of the peak performance possible for one particular piece of software, which happens to be more attuned to one particular uarch than another.

By itself that is little more than a scientific curiosity, but in context it represents nothing absolute whatsoever, nor a clear indication of performance under real usage conditions. This is clearly seen when measuring different tests derived from the same application (like some media-creation suites): even here results fluctuate a lot (what wins one particular test may lose another test of that same app). On top of that, a small change in a single BIOS setting can have a disproportionate effect on results, while, conversely, many different BIOS settings can have almost no effect; and the mobo, memory and storage subsystems also have a non-negligible influence. Meaning: performance results are statistically prone to "ERROR MARGINS", particularly if you want to *isolate* one element of the ensemble for a test, when it is not really isolable, because nothing functions standing alone.

So people are not measuring CPUs or GPUs in absolute terms at all; they are only getting an approximation of performance potential, with wider or narrower "ERROR MARGINS" depending on the measuring instruments, and certainly always different between test conditions and real usage conditions (and this goes for any vendor).

See it like motor racing: the results of the qualifying sessions and of the race are always quite different... but in "benchmarketing" (it is benchmarketing, without putting blame on anyone) with current tools and methods it is even more distorted than that, and always will be; there is no solution possible inside the current paradigm. And this without any suggestion of foul play whatsoever: it is the whole paradigm itself, scientifically, that is wrong above all, not any particular player. But whoever rides this wave best tends to always win... the dubious award for convincing the ignorant of something that may be quite different from the truth. Sorry for that last rudeness, but it's a fact of life that happens to work, is very lucrative, and is not only found in IT, far from it.
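The "ERROR MARGIN" point can be made concrete: with run-to-run variance, two chips' score ranges can overlap so much that the "winner" sits inside the noise. A quick sketch of that idea (all numbers invented for illustration, not real benchmark results):

```python
# Illustrative only: how run-to-run variance turns a single benchmark score
# into a range. The numbers below are made up, not real benchmark results.
import statistics

def summarize(runs):
    """Return (mean, error margin) for repeated benchmark runs."""
    mean = statistics.mean(runs)
    # Use the sample standard deviation as a crude +/- error margin.
    margin = statistics.stdev(runs)
    return mean, margin

# Hypothetical scores for the same chip across five runs with different
# background load, BIOS settings, memory timings, etc.
chip_a = [100, 97, 104, 99, 101]
chip_b = [103, 98, 105, 100, 96]

mean_a, err_a = summarize(chip_a)
mean_b, err_b = summarize(chip_b)

# If the score ranges overlap, the "winner" is inside the error margin.
overlap = (mean_a - err_a) <= (mean_b + err_b) and (mean_b - err_b) <= (mean_a + err_a)
print(f"A: {mean_a:.1f} +/- {err_a:.1f}")
print(f"B: {mean_b:.1f} +/- {err_b:.1f}")
print("difference within error margin:", overlap)
```

With these made-up runs, chip B's mean is 0.2 points higher but the spreads overlap almost entirely, so declaring a winner from one run would be exactly the fallacy the post describes.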

 

8350rocks

Distinguished
Well, the first 64-bit technology Intel used was a system that flopped; they eventually just adopted AMD's x86-64 instruction set extensions (AMD was first, btw). I cannot even recall what it was they called it...

I have a feeling this will go the same way... the players involved in HSA are too big and too many for Intel to overcome all of them. Much like Nvidia's CUDA battling OpenCL, I think time will be on the foundation's side. There are many more adopters of OpenCL than of CUDA (the scientific community is about the only large group of CUDA adopters outside of individual companies).
 


By that competitor to AMD x86-64, I think you mean EM64T, which is very similar to AMD64 if I recall.....

 

8350rocks

Distinguished


That's it! EM64T... I kept thinking it was EM-something but that didn't make any sense... it was very similar in a lot of ways. The instruction sets AMD brought out were all adopted, though... I am not sure how much of Intel's own 64-bit work actually made it in... certainly some of it, but how much, I am unsure.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


Neither are the APUs so far. The REAL first APU is Kaveri, with hUMA and the work queues that could be wonderful for OpenCL-style jobs. Again it won't show in benchmarks, because the OpenCL code floating around seems to be a ghost of the real potential, not even 10% of it, and that's even comparing Intel to Intel.

I think I understand why AMD put so much force behind HSA. If people expect Kaveri to beat Intel with current benchmark software, they may be setting themselves up for a severe disappointment. Nothing will beat Intel in this distorted "freak game" if Intel doesn't allow it, since Intel holds all the cards for the show.

HSA is mostly SOFTWARE. What it intends, above all, is to build a SOFTWARE development ecosystem with a top-notch compiler toolchain, for a particular SOFTWARE ENVIRONMENT based on a very flexible runtime paradigm. This environment can be agnostic to both CPU and GPU ISA implementations, which is why the ARM armada is represented there in force.

The future will be HSA games and HSA applications that can run on x86 or ARM unchanged, or mostly unchanged... and I think the temptation is already big for AMD to change things, stop this freak show, and not allow anyone to put ICC-compiled software on their machines and draw conclusions from it... but that will only happen once this HSA SOFTWARE catches fire; then AMD might very well switch to ARM.

The HSA implementations of the iGPUs will be quite different between the HSA participants. HSA doesn't need hUMA, though it helps a lot, and it certainly doesn't need GPGPUs with context/interrupt handling and preemption support as if they were exactly CPUs. AMD will go this route; other ARM implementers most probably will not, at least not any time soon.

So HSA in truth is MOSTLY SOFTWARE: though the IOMMU and hardware queues for hUMA-style configs are already in the standard, any GPU ISA, any ISA extension like AVX, or context preemption is not. This HSA software can be quite a bit faster than a CPU alone for many of the intended targets (all multimedia, for sure).
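The idea of software queues carrying ISA-agnostic work packets can be sketched roughly like this. This is a toy illustration of my own, not real HSA runtime code; the agent names and the "scale" kernel are invented. The point is only that the packet names a kernel, not an ISA, and each agent maps that name to its own native implementation:

```python
# A toy sketch (my own illustration, not real HSA code) of the idea behind
# user-mode work queues: software enqueues ISA-agnostic packets, and
# whichever agent (CPU or GPU) services the queue runs its own native code.
from collections import deque

class Agent:
    """An abstract compute agent. Kernels are looked up by name; each agent
    maps that name to its own implementation (standing in for native code)."""
    def __init__(self, name, kernels):
        self.name = name
        self.kernels = kernels  # kernel name -> callable

    def service(self, packet):
        kernel = self.kernels[packet["kernel"]]
        return kernel(packet["args"])

# One shared work queue; packets name the kernel, not the ISA.
queue = deque()
queue.append({"kernel": "scale", "args": [1, 2, 3]})
queue.append({"kernel": "scale", "args": [4, 5, 6]})

# Two agents with their own "native" implementations of the same kernel.
cpu = Agent("cpu", {"scale": lambda xs: [2 * x for x in xs]})
gpu = Agent("gpu", {"scale": lambda xs: [2 * x for x in xs]})

results = []
agents = [cpu, gpu]
while queue:
    packet = queue.popleft()
    agent = agents[len(results) % 2]  # trivial round-robin dispatch
    results.append((agent.name, agent.service(packet)))

print(results)  # [('cpu', [2, 4, 6]), ('gpu', [8, 10, 12])]
```

In the real standard the packets, queues and memory model are specified while the agents' ISAs are not, which is the "agnostic" property the post is describing.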

Then an HSA game will surely run on Intel, since the runtime is SOFTWARE, but with the proper minimal hardware support it will run miles ahead on AMD, no matter if Intel GPUs have more shaders and win on other games. There will be a "detachment"; only don't expect benchmarks of this, Intel wouldn't allow it.

I can imagine many inside AMD sneeze at the prospect of ditching x86; after all, they hold the IP for x86-64, which is the clear standard most used right now. But they may end up forced to do it before the company goes bust... the "freak show" is killing them slowly but inexorably. If they are waiting until the last moment for *general users to wise up*, like they seemingly expected heavily multithreaded client/desktop software with their Orochi exercise, that may be the last mistake they make.

And in the end they could go the Transmeta route, whose IP they already bought back in the Ruiz era, and build a "hybrid" ARM/x86-64... but even this could be a mistake if propped up as the frontline product...

The solution is HSA software, and other SOFTWARE made with HSA tools, with no possibility of Intel touching it or influencing it to their advantage (I think the ARM armada feels likewise). The CPU is on a dead scaling path; this could be far better: no distorted comparatives... wrong, very distorted comparatives, because HSA on Intel will always be quite a bit slower, and if on ARM then no chance of it running at all unless Intel adopts ARM too (they won't). But by then the freak show will have ended for sure, with no possibility of contagion to ARM.

Sounds like Lord of the Rings, lol... So it is before the walls of Minas HSA that the great battle of our time in IT (Middle-earth, lol) will be fought (edit): everything Intel (which seems to be what they have always wanted) or nothing Intel... No other Lord of the Rings connotations are really intended, but if you want to go there, choose your own pick; don't blame me.

 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


No, they are exactly the same: EM64T is Intel's designation for the AMD x86-64 extension, which AMD first trademarked as AMD64.

AMD and Intel, though they seem like partners with quite extensive cross-licensing, are in reality the worst enemies possible. And this without wanting to pick sides, though with much, much more blame on Intel than on AMD.

Only it seems the *culture* inside AMD sometimes forgot this, or fell subservient, which has been worse than any less successful product. But now that they can see the cliff edge approaching, maybe they won't forget.

No big deal IMO, least of all as end-user prejudice, but Intel will never adopt an AMD name for a tech developed by AMD, or by anyone else for that matter, not even at gunpoint, and that is quite illustrative of their posture.

UPDATE:


Intel's own 64-bit architecture is the Itanium uarch, the Itanium Architecture or IA-64... popularized around the net by the warm epithet of ITANIC...

A clear Intel failure, among many other projects... one I'm sure they don't want anyone to remember, lol.
 

Gah, that is what I was thinking, but Itanium was never a competitor to AMD64 products if I recall. However, EM64T has a few differences from AMD64.
Here- http://en.wikipedia.org/wiki/EM64T#Differences_between_AMD64_and_Intel_64

I see Itanium as an architecture looked down upon far too much....

Update 1: This quoting system is an utter failure! STOP GIVING MY POSTS TO OTHER USERS!

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
Rumour confirmed: AMD will be releasing the FX-Centurion Chips FX-8770 and FX-9000 together with mobos for them

[attached image: MPnvMkM.jpg]
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780
A good example, very illustrative of representativity, of why I say performance is in the software, and of the fallacy of "absolute" extrapolations.

OpenCL tests

AT
http://images.anandtech.com/graphs/graph6993/55300.png

TechReport
http://techreport.com/r.x/core-i7-4770k/lux-icd.png

Hexus
http://img.hexus.net/v2/cpu/intel/Haswell/4770K/HVK/graph-08.jpg

And this without even questioning the origin of the binaries, or whether the same blob was used for all tests, something an md5 checksum could verify, since it's obvious that LuxMark 2 is what gives Intel an extraordinary result, while the Hexus test seems to use an older version. But then what are we measuring, the software or the hardware? Shouldn't the software be frozen in time for it to have any meaning as a hardware measure (one OpenCL test, only upgraded when pertinent, after a long time)? And what is really representative of the software out there?
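The md5 check mentioned above is trivial to do in practice. A minimal sketch; the file path in the commented usage is hypothetical, not an actual LuxMark install path:

```python
# A simple way a reviewer could verify that every site tested the same
# binary blob: compare checksums. The path in the usage note is hypothetical.
import hashlib

def md5_of(path):
    """Return the md5 hex digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: if the digests differ across review sites, they
# benchmarked different builds and the results are not comparable.
# print(md5_of("luxmark-2.0/bin/luxmark"))
```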

Any vendor tweaking a piece of software for a particular uarch, if they're any good, can achieve quite good results; yet none of the current software uses those tweaks, not only for lack of access to the code, but because widely used software can even have something much better that simply isn't attuned to any particular uarch.

So what is the "representativity" of this? It's quite possible the Intel iGPU has more OpenCL performance; VLIW4 was never good in this department, despite having many more GFLOPS than Intel and others. But does it really matter? Is the conclusion that, "because" of this, the Intel iGPU is overall that much better?

Hexus's results show almost no difference at all, differences easily lost in the "ERROR MARGIN"... I think many conclusions floating around are complete fallacies.
 
Richland is more a refresh than an upgrade: about 10% gaming improvement and a 10-20% general or synthetic increase over Trinity. I haven't seen power-related figures, but if it is down a bit I will sell my 5800K and get a 6700.
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460
SO... It's been a while since I've been on here. Have I missed anything, or is this it?

Kaveri to launch with FM2+ and seems to be on course for Q3 2013. (I'm very excited for this!)

AMD to launch new Piledriver lineup! (I'm curious about this)

Intel Hasbeen is living up to our expectations, or even worse. Lol.

And G0M3R is still trying to argue intel is a better budget option than an APU and should not be compared to an intel iGPU counterpart.
 
Got a stable 4.5GHz overclock with DDR3-2400 and the iGPU at 900MHz; nice gaming scores.

And G0M3R is still trying to argue intel is a better budget option than an APU and should not be compared to an intel iGPU counterpart.

Lol but you need a discrete graphics card to make it worthwhile.

We were arguing whether the 4770R represents good value when it by and large delivers HD 6670 performance on the desktop, and the ultimate verdict is a resounding no.

The 4770R, for board and chip, comes in at a whopping $650 give or take. For that money you can buy a board + chip + GPU combo that will stomp the mud out of it. Needless to say, the 4770R is a watered-down 4770K, so if you add a discrete card you not only waste $300 when you could just get a 4770K, you lose the whole purpose of Iris. And even for an HTPC, how many really need Iris at that cost when an A4-5000 or 5200 is more than enough at around $250-300 for a complete HTPC build? It reeks of fail; Intel really brought a B-17 to a P-51 party.

Iris is only good in notebooks, but at around $800-1k you can get a discrete AMD or Nvidia option that will pulverize the HD 5200 Pro into complete submission, so again I really don't see what the point of Iris was, or why it went only to specific models and not through the whole line. It just seems like "hey, we have a multi-billion-dollar iGPU solution that is the fastest, and we will make you pay for it". The other aspect is that Iris cost Intel a fortune to make, at the cost of efficiency and x86 performance; it is already massive and its clock speed is just insane, so how do you make it better without sacrificing even more CPU performance?

As for Haswell, it's not much different from the Phenom II to Bulldozer transition. The FX-8150 beat the 1100T in around 34 of 45 benches but was deemed a failure because of its IPC. Bulldozer was also step one on AMD's road map, so it could hardly be deemed the be-all and end-all when it represented a radical departure from traditional CPU design; Piledriver improved on it, and we now await Steamroller and Excavator, which will be the two that determine whether AMD's path was a success or a failure. Haswell, on the other side of the coin, represents a mature architecture, yet it loses to Ivy Bridge in a few benchmarks... except iTunes, of course :D. Overall, Haswell only moves the marker at a synthetic level; it isn't much of an upgrade and isn't definitively faster than Ivy or Sandy to boot. The only reason it will not be a massive failure is that it will be marketed as a success.

Richland reviews will be out today or tomorrow, as they said this week. What are the odds on every reviewer going extreme on it, setting it unattainable benchmarks and going all out when it doesn't reach them... watch this space.

 

Cataclysm_ZA

Honorable
Oct 29, 2012
65
0
10,630


Intel Itanium and EM64T! That was a right mess indeed. I've worked on one Itanium server in my lifetime and it was okay. Intel still sells and markets newer versions of the Itanium processors, and they're more commonly found in mainframe-class servers with custom Linux operating systems. I love the way Itanium 2 processors slot into their boards!

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


According to rumours AMD sets a TDP of 220W.



Right. Makers such as Gigabyte are announcing their lines of Haswell laptops, and all of them use a discrete GPU. Nvidia confirms the fiasco of Haswell Iris:

OEMs don’t seem all that impressed with GT3e, as it’s power hungry and expensive. We expect only a tiny number of notebooks will come with GT3e.

Which PC OEMs will be offering Haswell notebooks with discrete GPUs?

Haas: Every major PC OEM will be offering notebooks with Haswell and discrete NVIDIA GPUs.

http://blogs.nvidia.com/2013/05/qa-why-gamers-still-need-a-discrete-gpu-with-haswell/
 
http://wccftech.com/amd-launches-generation-aseries-richland-apus-desktop-fm2-platform/

Llano -> Trinity: +25-30%
Trinity -> Richland: +10-15%

Llano -> Richland: +35-40%
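For what it's worth, successive generational gains like these combine multiplicatively, not by simple addition. A quick sanity check (using the rumoured percentages from the post, which are claims, not measurements):

```python
# Checking the arithmetic in the chain above: generational gains compound
# (multiply), they don't simply add. The percentages are the rumoured
# figures quoted in the post, not measurements.
def compound(*gains_pct):
    """Combine successive percentage gains multiplicatively."""
    factor = 1.0
    for g in gains_pct:
        factor *= 1 + g / 100
    return (factor - 1) * 100

low = compound(25, 10)   # Llano->Trinity +25%, Trinity->Richland +10%
high = compound(30, 15)  # upper ends of both ranges

print(f"Llano -> Richland: +{low:.1f}% to +{high:.1f}%")
# The compounded range is roughly +37.5% to +49.5%, a bit above the
# quoted +35-40% (which looks like a straight sum of the two ranges).
```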

I am likely to be sticking with my 5800K until Kaveri. Richland is about 10% faster in production workloads, around 20% in synthetics and about 5-10% in gaming, hence why they said 10-20% gains. I just need to see the power numbers to be sure.


As for Iris, GT3e fails due to pricing; if it were priced at $150 it would be unbeatable for now, but Intel is about money, not your needs.
 

jdwii

Splendid
I have to be honest with you guys: I have not read the Anandtech review of Haswell yet. How much do you want to bet they're going to kiss some, well, you know, to Intel? I'll edit after I read the whole summary.


"I’m a fan of Haswell, even on the desktop."
"voltage is the only real downside to the platform, then I’d consider Haswell a success on the desktop."

At best, Trinity to Richland is a bigger upgrade, and it's not even a Tock. I laugh when I read anything from that site; honestly they should change their site to an IGN-style format, since their "fans" and writing come close to matching it.
 

jdwii

Splendid


Based on those benchmarks the CPU is 8% faster on average, ranging from 5-11%. The GPU is around 15% better (I actually averaged all those numbers; I have free time right now). Now we need to know the power consumption as well. However, for a product that I believe is getting phased out this year for Steamroller and a GCN-based iGPU, this is a pretty amazing refresh, or tick.

I posted this here so you guys know what a Tick and a Tock are supposed to be: http://www.intel.com/content/www/us/en/silicon-innovations/intel-tick-tock-model-general.html

From Sandy Bridge to Haswell (two generations) there is a 15% improvement in performance on average, with a 10% improvement in power consumption.
When we look at one generation from AMD we see the same type of improvement.
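Averaging per-benchmark improvements like the 5-11% range quoted above deserves a note: for ratios (speedups), the geometric mean is usually a fairer average than the arithmetic mean, since it is less skewed by outliers. A small sketch with made-up per-benchmark gains, not the actual review numbers:

```python
# Two ways to average per-benchmark percentage gains. The sample gains
# below are invented for illustration, not the actual review results.
import math

def arithmetic_mean_pct(gains_pct):
    """Plain average of the percentage gains."""
    return sum(gains_pct) / len(gains_pct)

def geometric_mean_pct(gains_pct):
    """Geometric mean of the speedup ratios, expressed as a percent gain."""
    ratios = [1 + g / 100 for g in gains_pct]
    gm = math.prod(ratios) ** (1 / len(ratios))
    return (gm - 1) * 100

gains = [5, 7, 8, 9, 11]  # hypothetical per-benchmark CPU gains, in percent
print(f"arithmetic: {arithmetic_mean_pct(gains):.2f}%")
print(f"geometric:  {geometric_mean_pct(gains):.2f}%")
# For small, tightly clustered gains the two means are close; with larger
# outliers the geometric mean damps their influence.
```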
 
^^ But remember, Intel focused on the iGPU this go-around, and got results. I'm seeing numbers that average to about a 40% performance increase generationally, which is significant to say the least. CPU side, maybe 10% strictly on IPC, or about what we've been seeing for a while now.
 

i see how amd claimed a 30% graphics performance improvement - the a10 6800k supports ddr3 2133 ram by default. i can safely assume that if reviewers don't test it with a 2133 kit, c.a.l.f. are gonna complain (they'd complain regardless). higher spec ram, a clockrate bump across the apu and some optimized benching - voila! you've got the 'one more upgrade on socket fm2' amd promised you. then a rumor surfaced that kaveri will use a new socket, but rumors are only 50-50.
gt3e can't be cheaper. the edram alone should be $50-80 at least. that would leave $70-100 for the cpu+igpu+rest. intel isn't so nice that they'd just throw in a 4 core cpu and a 40 eu igpu with 128mb l4 cache under $150 - a move like that would make them benevolent, and intel is anything but... :lol: the best i can assume is intel releases a core i3 with hd graphics 5000 in october-december when the consoles are out and no one cares about $170 ht-dual cores anymore... :whistle:
i speculate that this is why amd decided to skip embedded memory on the gpu and apus. then there're the issues of 'who's gonna make the dram', 'how much price does it add to the apu', 'will anyone buy a dual/tri module pd-based apu with edram @$180~', 'how are we gonna use the dram - bigass unified l3 cache or igpu only', 'do we even have the tech to properly power-gate that big a cache when we can't even rein in cpu power use'?? and so on..
 
Only one source has said that the DRAM is dropped; the rest, like AMD's own slides, suggest DDR3/4 and GDDR5 support, so my assumption stands. I believe the GDDR5 will be embedded on the motherboard at 512MB/1GB @ 3500-4000MHz; the area used will be smaller than the socket itself, and you can slap a few VRMs and a passive cooler on top, then just move the PWM and CPU fan power connectors. So the cost lands on the motherboard, and since the Gigabyte UP4 is only $120, add $50 onto it and you have a really high-end motherboard for $170, which is still not a lot when the iGPU beats an HD 7770. Needless to say, I have heard the top Kaveri part will retail around $160, but the performance is there to justify it. Long story short: it's easier to embed memory on the motherboard than to make an SoC and a board to support it, and better than going all-in on BGAs. Also, why release FM2+ if there is no fundamental change?

 

GOM3RPLY3R

Honorable
Mar 16, 2013
658
0
11,010


I'm just going to say, it's a little cocky to keep saying they were first. Yes, they were, but trying to make that a point in order to win the argument is not right.

Also, on CUDA and OpenCL: OpenCL is not used as much as DX11 and DX10, which CUDA is better adapted to than OpenCL is. AMD, in a similar way, is better with OpenCL than with DX. Food for thought.
 