AMD CPU speculation... and expert conjecture

Page 573 — Tom's Hardware community forum thread
Status
Not open for further replies.

jdwii

Splendid


Almost as bad as TFLOPS. Like I said, if TFLOPS were the only measure, a GPU would be faster than any CPU at everything, and a 280X would be 25% faster than a GTX 770 all the time.
 

jdwii

Splendid


So you think that in 2020, 1080p gaming will still be mainstream instead of 4K growing? And what about ray tracing? Nah, as it stands even a 295X can hardly do 4K at 60 FPS (never mind 120 FPS), and nothing is good enough for ray tracing.
 

jdwii

Splendid


Actually, it is faster in IPC than the others; besides, that is irrelevant.
 

szatkus

Honorable
Jul 9, 2013
382
0
10,780


Just add about 15% to the results.
Do you have a better idea?
 

no one really publicly talks that far ahead except for manufacturing and mfg.-tools people. long-term roadmaps are usually kept internal, and only partial ones are published every now and then.
right now, 1080p pc gaming (especially desktop) could easily be entry level if a few factors weren't working against us consumers. one of them is the ongoing display price fixing. another is the slow-moving display standards war. you'll notice that in mobile devices this is much less of an issue. 4k is making fast headway, but it'll still be stymied by panel pricing and standards for a while.
i think current gpus are bound by the manufacturing process and a memory bottleneck. if those two open up, gfx performance increases will ramp up again. we're in the middle of a transition from the usual low-res gaming (1080p and lower) to high-res (4k and upward), multimonitor, and head-mounted vr gaming. the current crop of gaming gpus will not be useful when the transition ends or stabilizes.
as for ray tracing, i don't really know. i've read about other technologies like global illumination gaining favor among gpu makers. some of the articles i read say that ray tracing is very difficult to implement in real time with current technology. high-performance gpus may take a temporary back seat in favor of mobile. from the upcoming gpus, it seemed to me that both amd and nvidia are just slapping on more vram, a wider bus, and better power management instead of gunning for performance. hawaii was a happy, surprising exception. hawaii-type gpus may not come out until tsmc has fully worked out their 20nm process.
 

jdwii

Splendid


Perhaps you could explain what you mean in more detail, instead of just disagreeing with the facts.
 

jdwii

Splendid


Agreed, even more so when you state that high-end performance (not just the magical TFLOPS number people throw around) is being sacrificed for low-end hardware. I'd be happy to see them make lots of money with low-end mainstream crap and put that towards their research for higher-end equipment.
 

jdwii

Splendid
Info on my 770 + FX-8350 (OC 4.3 GHz) build: at idle this thing uses 81 watts with C6 enabled, and 105 watts with it disabled. With only Prime95 running (on the max-power-consumption heat option), it peaks at 280 watts. With Prime95 + FurMark together it uses 570 watts (a lot). Browsing in Chrome and watching videos, my rig uses 150-170 watts. I bought a P3 Kill A Watt meter, and it's actually kind of cool: I have my PC on a separate connection with an extension cord, so I can always see the meter during everything I do. I was surprised to see my rig draw 85% of my power supply's rating, which means I should get a new, more powerful one. I'm extremely happy I did not buy a 280X now, since it uses 30% more power. However, I know this PC will not draw as much during gaming (I haven't tested that yet).
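One caveat on the 85% figure: a Kill A Watt measures AC draw at the wall, while a PSU's 650 W rating is DC output, so the PSU's own conversion losses inflate the wall reading. A rough sketch of the correction (the ~87% efficiency is an assumption, typical of an 80 Plus Bronze unit near full load; the real figure varies):

```python
# Estimate DC load on the PSU from a wall-socket (AC) reading.
# Assumption: ~0.87 efficiency, typical of an 80 Plus Bronze unit at high load.

def psu_load(ac_watts, psu_rating_watts, efficiency=0.87):
    """Return (estimated DC load in watts, fraction of the PSU rating used)."""
    dc_watts = ac_watts * efficiency          # losses occur inside the PSU
    return dc_watts, dc_watts / psu_rating_watts

dc, frac = psu_load(570, 650)
print(f"~{dc:.0f} W DC, {frac:.0%} of rating")  # ~496 W DC, 76% of rating
```

By this estimate the PSU is delivering roughly 76% of its rated DC output under Prime95 + FurMark: still high, but less alarming than the raw 85% wall-side number suggests.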
 

Fidgetmaster

Reputable
May 28, 2014
548
0
5,010
What PSU? In gaming you never get as high a consumption/load as most people think; they assume you are at max/peak all the time, haha. You probably could have gotten away with a 280X if the PSU is a decent unit.
 

jdwii

Splendid


I have an Antec EarthWatts 650 unit, which is now 3-4 years old.
 
rumor: AMD Carrizo APU on the 28nm Node Will Have Stacked DRAM On Package – Alleges Italian Leak
http://wccftech.com/amd-carrizo-apu-28nm-stacked-dram-alleges-italian-leak/
+NaCl
Report: PC Gaming Hardware Market is Worth More Than $21.5 Billion Worldwide – Double the Value of Console Market
http://wccftech.com/report-pc-gaming-hardware-market-worth-215-billion-worldwide-2/

AMD A10-7800 ships in Medion AKOYA E4000 E desktop
http://www.fudzilla.com/home/item/35213-amd-a10-7800-ships-in-medion-akoya-e4000-e-desktop
europe! getting closer.... it's much closer to u.s.a. than japan... depending on how you measure :p
 

8350rocks

Distinguished
You know, looking back at some of the wccf articles, there is actually quite a bit of truth in some of them. I cannot comment on stacked DRAM, though I can say they are pretty much accurate with what they know about the next-gen architecture (though it is limited).
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


I do run Gentoo, so I do get all that goodness. But the thing is, you simply can't tweak software like that in Windows. It's an advantage of Linux. And you don't really have to run a distro like Gentoo to get it, either. There's no reason you can't just download programs to /opt and compile them yourself, for all those programs that are performance-sensitive and see large gains.

Gentoo just makes it a lot easier (honestly, once you get it installed) to optimize specific software. Edit one file and then let the package manager (Portage) take care of everything else.

Simply put, you won't be compiling Aero, the Windows kernel, the .NET Framework, etc. optimized for your hardware. Doing so is far better supported in Linux.

Which is why I always say that if you really care about rendering or transcoding performance, you'll spend the time to do it in Linux rather than wasting money on hardware.
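For reference, the "edit one file" step on Gentoo is `/etc/portage/make.conf`. A minimal sketch (the flag values are illustrative examples, not a recommendation for any particular machine):

```shell
# /etc/portage/make.conf -- illustrative optimization settings
CFLAGS="-march=native -O2 -pipe"   # tune code generation for the local CPU
CXXFLAGS="${CFLAGS}"               # use the same flags for C++
MAKEOPTS="-j8"                     # parallel build jobs; match your core count
```

Portage then applies these flags to everything it builds, e.g. `emerge --oneshot x264` to rebuild a single performance-sensitive package.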
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
@jdwii, not only is the IPC at Haswell level, but some of the ARM cores recently presented for HPC have double the FP throughput of your FX CPU. This is why I am tired of laughing when you pretend that ARM is only for phones and that x86 is for 'serious' work.

TFLOPS is a popular metric, but like any metric, some expertise is needed to interpret the values correctly. Evidently a 111 TFLOPS machine is not obliged to be 11% faster than a 100 TFLOPS machine, because other architectural elements, such as the byte/FLOPS ratio, affect how much of the maximum FLOPS is usable for large workloads that don't fit in cache, for instance... I have to ask, do you even know basic things such as what a driver is and how it affects performance? Because you sound as if you don't.
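The byte/FLOPS point is essentially the roofline model: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity. A sketch with made-up numbers (chosen only to echo the 111-vs-100 example above; they describe no real machine):

```python
# Roofline model sketch: attainable throughput is capped either by peak
# compute or by memory bandwidth times arithmetic intensity (FLOPs/byte).
# All numbers are illustrative, not measurements of any real hardware.

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# A machine with 11% more peak FLOPS but less bandwidth loses on
# low-intensity (cache-unfriendly) workloads:
a = attainable_gflops(peak_gflops=111, bandwidth_gbs=50, flops_per_byte=1.0)
b = attainable_gflops(peak_gflops=100, bandwidth_gbs=80, flops_per_byte=1.0)
print(a, b)  # 50.0 80.0
```

At low arithmetic intensity the bandwidth term dominates, so the nominally "slower" 100-GFLOPS machine sustains more useful work; only at high intensity does the peak-FLOPS number matter.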

I said "same data"... Not my fault if you cannot read properly.

Nvidia makes most of its profit from its HPC/server product line, not from selling Titan cards to four fanboys. Nvidia will stop making gaming dGPUs when the compute dGPU is killed. Don't be surprised; this is the same reason that AMD stopped making the Steamroller FX when the HPC/server Opteron CPU line was killed. Only a few believed then that AMD would produce a Steamroller FX out of nothing.


@de5_Roy, I already suspected that you would use your ignorance as an excuse to begin a new series of personal attacks.

Also, Ubuntu is not an "x86 OS"; Ubuntu runs on PowerPC and ARM64 as well.

@8350rocks, continue dreaming about surpassing Skylake.

@colinp, I am explaining to you how things are. We know that dGPUs will be killed. Everyone in the industry expects this to happen by about 2020, Nvidia included. The concrete year is irrelevant; it will happen one way or another.

What part of "the APU will be much faster" do you still not get? Check the FLOPS, the efficiency factor, and the die areas, and stop ignoring the arguments.

FYI, sometimes predicting the far future is easier than predicting the near future. I don't know what you will do tomorrow at 15:23:05, because there is a lot of uncertainty. But I know that by the year 2140 you will be dead.

@blackkstar, you have ignored everything that I have said and everything that AMD has said... and then you repeat the same litany!

@szatkus, I have shown some benchmarks where Haswell is 70% faster than Ivy Bridge at same clocks.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
@Cazalan, ATI joined AMD precisely to remain in business. From Anand's analysis of the ATI acquisition:

Preparing for the Inevitable Confrontation with Intel

From ATI's standpoint, it's only a matter of time before the GPU becomes general purpose enough that it could be designed and manufactured by a CPU maker. Taking the concern one step further, ATI's worried that in the coming years Intel will introduce its standalone GPU and really turn up the heat on the remaining independent GPU makers. By partnering with AMD, ATI believes that it would be better prepared for what it believes is the inevitable confrontation with Intel. From ATI's perspective, Intel is too strong in CPU design, manufacturing and marketing to compete against when the inevitable move into the GPU space occurs.

The AMD/ATI acquisition doesn’t make a whole lot of sense on the discrete graphics side if you view the evolution of PC graphics as something that will continue to keep the CPU and the GPU separate. If you look at things from another angle, one that isn’t too far fetched we might add, the acquisition is extremely important.

Some game developers have been predicting for quite some time that CPUs and GPUs were on this crash course and would eventually be merged into a single device. The idea is that GPUs strive, with each generation, to become more general purpose and more programmable; in essence, with each GPU generation ATI and NVIDIA take one more step to being CPU manufacturers. Obviously the GPU is still geared towards running 3D games rather than Microsoft Word, but the idea is that at some point, the GPU will become general purpose enough that it may start encroaching into the territory of the CPU makers or better yet, it may become general purpose enough that AMD and Intel want to make their own.

It’s tough to say if and when this convergence between the CPU and GPU would happen, but if it did and you were in ATI’s position, you’d probably want to be allied with a CPU maker in order to have some hope of staying alive. The 3D revolution killed off basically all giants in the graphics industry and spawned new ones, two of which we’re talking about today. What ATI is hoping to gain from this acquisition is protection from being killed off if the CPU and GPU do go through a merger of sorts.


Nvidia is currently pushing the Tegra APU/SoC line for the same reason. To be precise, Nvidia started planning the development of APUs around 2006, when it tried to obtain an x86 license.
 

aww, it's not a personal attack. i asked for proof, and in reply i got lies, and i called them out. i already suspected you'd cry "personal attack" after getting caught lying, instead of providing credible proof. if someone as ignorant as i am can catch you lying, imagine what that says about you. :D

edit: instead of trying to start a (futile) side-argument over semantics, why not provide proof and an explanation of your claim?
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
@8350rocks Oh, come on!

Wccftech just steals articles from all over the internet, including forums, and usually without giving credit. They publish anything, including junk. I enjoy it when they say one thing in one article and the contrary in another article a couple of weeks later.

They are the guys that spread misinformation about the Steamroller FX, the guys that posted the ridiculous Baeca article, and the guys that claimed AMD had confirmed to them that mobile Kaveri was a revolutionary hybrid of the Steamroller and Excavator architectures.

This is what an AMD representative once told me:

Be careful about WCCFtech.... they do not verify anything they publish and will create their own details out of thin air because they think it would be the best strategy.

That said, their recent leak about Carrizo's stacked RAM has a strong basis, because the original source is a motherboard maker. The main issue here is not technological but price: HBM is too novel and sophisticated a technology, and thus will be expensive. AMD is evaluating all the variables.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
@de5_Roy, experience shows that it is a waste of time trying to explain anything to you, because you don't put in the minimum effort. :-D You always come back with insults and hallucinations about catching lies. I gave a couple of links with the needed info. You cannot understand HSA? I remain unsurprised, really.


@blackkstar, there are lots of reasons not to use Gentoo. The same reasons why Gentoo, once the third most popular distro and well settled among the other top distros, has since fallen to a lowly #50 or so. Even the one-time Gentoo Linux project leader abandoned it and began Funtoo.
 

your parroted links contained zero indication of how this works. none. you made a twofold claim and failed to verify either part, and then you lied and got called out. simple. mocking my ignorance does not make you right. nowhere does the hsa information say that the cpu and gpu can cooperatively manipulate the same data at once. the cpu and gpu (and other elements) share resources, take turns processing data, and share memory space.
if you can prove that amd has claimed the cpu and gpu can really cooperatively manipulate the same data at once, you should be able to find it in more than one web page (so far, none contain even a hint of your claim), and you should be able to explain it yourself, which you have repeatedly failed to do.
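For what it's worth, the standard mechanism that lets two processing elements work on one shared buffer safely is synchronization; the HSA specification defines platform atomics for exactly this kind of CPU/GPU coordination. A CPU-only Python analogy (two threads, one shared counter; this illustrates the synchronization idea, not HSA itself):

```python
# CPU-only analogy: two workers updating the same shared value, made safe
# by a lock. HSA's "platform atomics" play an analogous role for CPU/GPU
# coordination on shared memory; this sketch is an illustration, not HSA.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # serialize the read-modify-write on shared data
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 200000 -- no updates lost, because access was synchronized
```

Without the lock, the two read-modify-write sequences could interleave and lose updates; with it, "the same data" is touched by both workers but never by both at the same instant.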

edit: while you're at it, do provide an explanation for this long-overdue claim:

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
You don't understand HSA, you don't understand why GDDR5 is faster than DDR3, and you don't understand how to read FPS from a bar graph, even though that is a 10-year-old's exercise. For some time I tried to explain things to you as best I could, but I received gratuitous insults from you in response. You can now cry for weeks and try the ridiculous tactic of "lies, lies, lies", but I am not going to explain anything more to you.

Of course, I will continue correcting your posts, like your latest Ubuntu-is-an-x86-OS nonsense. :LOL:
 

you spread lies about amd and amd technology and give them a bad reputation. this is why you get called out. you can't tell memory access sharing from a memory access violation like this one

this is why you have failed to prove your own claim, lied, and got caught. no two processing elements will manipulate the same data at a memory address at the same time. even in the hsa model, the cpu finishes processing and passes the pointer (to the memory address) to the gpu to finish its work - Not On The Same Data, Not At Once. here it's explained clearly:
http://www.tomshardware.com/news/AMD-HSA-hUMA-APU,22324.html

if you have proof of your claim, please show it instead of lying and dragging this out longer.

this:

isn't about gddr5 being faster; it's about the impossible feat of ddr3 and gddr5 "handling" "input" and "output" instead of data, and of gddr5 "handling" "input and output" on the same clock cycle. you have long failed to explain how this worked.
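On the narrower question of why GDDR5 is faster at all, the raw numbers are simple arithmetic: peak bandwidth is the effective data rate times the bus width. Illustrative figures for typical 2014-era parts (dual-channel DDR3-1600 vs 6 Gbps GDDR5 on a 256-bit bus; exact numbers vary by part):

```python
# Peak memory bandwidth = effective data rate (MT/s) * bus width (bytes).
# Figures below are illustrative of typical 2014-era configurations.

def bandwidth_gbs(mega_transfers_per_s, bus_bits):
    return mega_transfers_per_s * (bus_bits // 8) / 1000

ddr3  = bandwidth_gbs(1600, 128)   # dual-channel DDR3-1600: 128-bit bus
gddr5 = bandwidth_gbs(6000, 256)   # 6 Gbps GDDR5 on a 256-bit bus
print(ddr3, gddr5)  # 25.6 192.0
```

The gap comes from GDDR5's much higher effective transfer rate and the wider bus on graphics cards, not from any per-cycle "input and output" trick.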
 

really? so how does that work and fit with the hsa paradigm, then? do enlighten instead of calling names.

edit2: to make sure: your parroted links do not contain any explanation of the same data being anywhere other than the same memory address, just marketing fluff. neither did the promo slide. so my accusations still stand.

and where is the explanation for the second part?


don't avoid this one too. :)

edit3:
i think that if the same data isn't in the same address space, then it's just regular memory access, as long as there's no other access violation. nothing to do with hsa.
 