AMD CPU speculation... and expert conjecture


jdwii

Splendid


Well, I'm playing that game right now (beating it one last time before 5) and I get 30 FPS in some cases, but usually it's 50-60, and 90-100 in small areas. This is with very high settings, or high with detail and view distance set around 30.

http://www.tomshardware.com/reviews/build-budget-gaming-pc,3943-18.html
I'm just not too sure if I would want to pair that CPU with a high-end GPU like that; games in a few years are just going to be unplayable. I stick to my old statement for a budget $400 build, but for $500 I would probably pick an FX-6300 + R7 265. GTA5 will kill this dual-core Pentium.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
According to AMD, the Kaveri A10 is 68% faster than the Haswell i5-K in BF4 at 1080p medium settings, both using integrated graphics. According to Charlie, the Broadwell GPU will be 40% faster than Haswell's. Adding that the desktop K series will be upgraded to Iris Pro-level graphics, one obtains that the Broadwell i5-K will run BF4 faster than Kaveri. In fact, the Haswell i5-R already offers more than 80% of the performance of Kaveri in BF4.

But let us be conservative and assume that the Broadwell i5-R will only run BF4 as fast as Kaveri. Skylake graphics will bring another 40% gain over Broadwell... The claim that Skylake will only be on par with Kaveri is nonsense. It is the same kind of nonsense coming from those who believe that Zen will be faster than Skylake in CPU terms.
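For what it's worth, a minimal sketch of that conservative chain, treating the quoted percentages as exact multipliers (an assumption; real benchmark scaling is messier):

```python
# Sketch of the "conservative" chain above; percentages treated as exact
# multipliers, which is an assumption, not a benchmark result.
kaveri = 1.0                    # normalize Kaveri's BF4 iGPU result to 1.0
haswell_i5r = 0.80 * kaveri     # "i5-R already offers more than 80% of Kaveri"
broadwell_i5r = kaveri          # conservative case: Broadwell merely matches Kaveri
skylake = broadwell_i5r * 1.40  # claimed +40% graphics gain over Broadwell

print(f"Skylake vs Kaveri: {skylake / kaveri:.2f}x")  # 1.40x, well above parity
```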

The people who claim that the decisions behind the KL CPU are marketing are plain wrong. The transition from multicore to manycore is motivated by well-known scientific/technical factors which are independent of brands and, of course, of marketing. What was really hilarious was reading the "KL doesn't even have an IGP" bit. This proves again that these people didn't understand anything of what was said there.

I also find it interesting that some people believe that 200-250W APUs will be used for running Linux games on laptops. I want to see that foolish mobile stuff.

 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


That is Haswell i5 ULV-level performance!
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


When they get Maxwell in there at 20nm, Nvidia might finally break into that market.
 

jdwii

Splendid


WAY better compared to the older 760K. It's quite clear this quad-core is a better buy than a Pentium. Now if only it could be priced at $60 instead of $80 (my guess for the U.S.).
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360


Who ever said that? There is so little information on Carrizo that it's absurd to expect any hard numbers right now.

EDIT: Also want to note that AMD is finally getting serious about efficiency: http://www.simplyhired.com/job/sr-fellow-design-engineer-job/advanced-micro-devices-inc/ql2bl5xq3d
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


Supposedly by limiting the number of PCIe lanes on the entry/mid-range processors, making you buy workstation/server-class CPUs if you want full bandwidth to the PCIe slots.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780

I brought up SHW because it provides actual evidence that the iGPU gets over-represented. You can talk about anecdotal evidence all day long. But let me ask you, how many gaming PCs choose LGA2011 over Intel's "mainstream" socket? All of those systems on the mainstream socket get counted toward Intel iGPU market share, even if the iGPU isn't being used.

I see people use SHW as evidence that the Intel iGPU is commonly used for gaming. My point is that it's not, because you have no way to differentiate between systems that only have an Intel iGPU and systems that have an Intel iGPU + dGPU. And an Intel iGPU is present in a lot of gaming systems that actually game on the dGPU.

Yes, you will have casual PC gamers, but that market is niche, as it falls awkwardly between mobile phone/tablet gaming and hardcore PC gamers. SHW is the most reliable data we have for people who use their PCs for gaming, as things like JPR count all systems.

I'm not even touching the subject that people more than likely have more than one computer, and one is probably a laptop and one is probably a gaming PC (if you are a gamer). I personally (I hate anecdotes, but I'm throwing this out there) ran Steam on my laptop solely for communicating with friends while I am away. And SHW pestered me constantly to submit results for my Mobility X1600 laptop with a Core 2 Duo. So that's one result where I'm giving market share to SHW for an Intel dual-core + ancient mobile Radeon that I don't even game on.

If someone can provide a better source on gaming PCs that actually use the Intel iGPU, I'd love to see it, but AFAIK SHW is the best we have, and it's not very reliable.

If anything, I view a strong iGPU as nothing more than a gateway drug to a dGPU.
 


Well, you can get a rough estimate by subtracting out the total number of people who use discrete AMD/NVIDIA GPUs with an Intel CPU. So if you have 1 million people using Intel iGPUs, and 900k of those are using dGPUs, you can reasonably guess that about 100k are using the iGPU.
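Something like this (the counts below are made up, purely to show the subtraction):

```python
# Hypothetical survey counts, made up purely to illustrate the estimate.
intel_cpu_systems = 1_000_000  # systems reporting an Intel CPU (all ship an iGPU)
intel_with_dgpu = 900_000      # of those, systems also reporting an AMD/NVIDIA dGPU

# Whoever reports no dGPU is presumably gaming on the iGPU.
igpu_only = intel_cpu_systems - intel_with_dgpu
print(f"Estimated iGPU-only systems: {igpu_only:,}")  # 100,000
```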
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860

Delusions of grandeur.

"The results are simply breathtaking – the Intel Core i5-4200U running at 1.6GHz scored 54861 points."

So Denver at 2.3GHz (a 44% higher clock) scored 19% lower...
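Working backwards from those figures (the Denver score below is implied by the "19% lower" claim, not quoted directly):

```python
# Back-calculating the comparison from the figures quoted above.
i5_score, i5_clock = 54861, 1.6  # Core i5-4200U result and base clock
denver_clock = 2.3

clock_advantage = denver_clock / i5_clock - 1  # ~0.44, the "44% faster" clock
denver_score = i5_score * (1 - 0.19)           # implied by "19% lower": ~44,400
per_clock = (denver_score / denver_clock) / (i5_score / i5_clock)

print(f"Clock advantage: {clock_advantage:.0%}")            # 44%
print(f"Implied Denver score: {denver_score:,.0f}")         # 44,437
print(f"Per-clock throughput vs the i5: {per_clock:.0%}")   # ~56%
```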
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860

Who said anything about limiting this to the "HPC market"?

http://s9.postimg.org/bmb7v6g7j/image.png

"I already gave the link with Intel plans to kill gaming dgpus by 2015 or so."

Yes, that's you who said that. Now you're straw-man backpedaling, trying to pretend it was just HPC.

In addition, who is always stating that APUs will be used for this exascale computing? And what do APUs have?
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The problem with that slide is that it doesn't give the baseline. Is that 30% compared to 17W mobile Kaveri? To a hypothetical 15W mobile Kaveri? To something else?

I believe they chose to mention performance at the lowest 15W point because the performance gain will not scale up. I think this is the reason why desktop Carrizo was canceled: it couldn't provide enough performance over desktop Kaveri to justify a launch.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
Therefore some people really believe that Intel intends to kill desktop gaming GPUs by releasing the KL CPU, which is aimed at servers/HPC. And their nonsensical suggestion is that the KL CPU would integrate an IGP. LOL

And those who criticize Nvidia Denver because its CPU, constrained to work at a tablet-level TDP, is able to match the performance of Haswell CPUs working in an unconstrained environment and using twice the power, even with a 22nm FinFET advantage (Denver is 28nm planar), are simply showing their ignorance of anything tech related.

People who are serious know that the Nvidia Denver CPU is a winner.
 

jdwii

Splendid


I'm pretty sure it's not healthy for juan to make fun of himself in this manner. It was quite clear what was said by him. But to be fair, he didn't exactly say it himself; he just kept saying Intel intends to kill off discrete video cards by 2015.
 

8350rocks

Distinguished


Nvidia could not even get OoO execution into that chip, but it is automagically a "winner" because juanrga says so?
 

8350rocks

Distinguished


Anyone who knows what is actually going on, and has paid any attention, already knows exactly what juanrga is...it has been painfully obvious for quite some time now...
 

ADAMMOHD

Honorable
Nov 12, 2013
32
0
10,540

Denver is 2 cores/2 threads, and that guy's wrong: Denver is 2.5GHz, while the i5 is 2 cores/4 threads and turbo boosts to 2.6GHz.
 

jdwii

Splendid
http://static.techspot.com/images2/news/bigimage/2014-08-11_19-58-09.jpg
They compared Denver to a weak Celeron clocked at 1.4GHz with 2 cores/2 threads (Q3 2013 release), and then to Bay Trail, also a Celeron, clocked at 1.6GHz with 4 cores/4 threads (Q3 2013 release).

Project Denver looks interesting, but it's usually compared to a low-end x86 core, just like any other ARM processor. Perhaps one day things will change (K12 maybe?), but I do expect its single-core performance to be lower than Intel's x86 performance. It's not really a prediction; it's just going by Nvidia's own benchmarks. Also, it sure took them a LONG time to make this design. I thought ARM was such a simple design compared to x86?

I remember all the hype this thing got way back when AMD made Bobcat.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
I am not really surprised that the same people who were unable to understand the claims about dGPUs the first time, and unable to understand the further explanations over the following months, are not going to understand anything now. Pretty sure everyone agrees on this now.

Neither am I surprised that the same people who made the silly claim that ARM cannot scale up are now attacking Nvidia Denver. Despite the fact that those guys don't have any idea about what Nvidia could or couldn't do, we know that OoO has three big problems: it doesn't scale up well, it consumes a lot of power, and it uses lots of die area.

What those guys don't know is that lots of interesting alternatives to OoO are under research and development. The most modern techniques provide about the same performance as OoO, but consume much less power and occupy less area.¹ Nvidia engineers have implemented a very interesting OoO alternative: DCO (dynamic code optimization). In one sense, DCO resembles recent decoupled/DLIW techniques to me. One of the more interesting aspects of the Denver architecture is its potential to achieve kilo-instruction-level processing in the future, because this cannot be obtained by existing commercial OoO processors (e.g., Haswell can only manage about one hundred in-flight instructions due to the scaling issues mentioned above).

Nobody would be surprised that some of the most respected analysts in the microprocessor industry claim that the Nvidia Denver CPU is a winner. Well, I mean nobody who is not a rabid fanboy with an evident hidden agenda.

¹ E.g., Flea Flicker Multipass, using an in-order pipeline, provides nearly 90% of the performance of an idealized 6-wide OoO core, but consumes only a small fraction of the power of the OoO core, and the logic requires between 2x and 10x fewer transistors. But this is not my favorite modern technique. ;-)
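For intuition, here is a toy sketch of the interpret-then-translate flow behind DCO-style designs. Everything in it is hypothetical and heavily simplified; Denver's actual optimizer is far more sophisticated than a counter and a cache:

```python
# Toy model of a DCO-style flow (all names made up, for intuition only):
# cold guest blocks are interpreted; blocks that turn hot are translated once
# into an "optimized" form and served from a translation cache after that.
HOT_THRESHOLD = 50      # executions before a block counts as hot (made-up number)

exec_counts = {}        # guest block address -> times seen
translation_cache = {}  # guest block address -> cached optimized translation

def interpret(addr):
    return f"interpreted {addr:#x}"    # stand-in for slow interpretation

def optimize(addr):
    # Stand-in for the expensive one-time translation/optimization pass.
    return lambda: f"optimized {addr:#x}"

def run_block(addr):
    """Run one guest block, promoting it to the translation cache once hot."""
    if addr in translation_cache:
        return translation_cache[addr]()   # fast path: reuse the translation
    exec_counts[addr] = exec_counts.get(addr, 0) + 1
    if exec_counts[addr] >= HOT_THRESHOLD:
        translation_cache[addr] = optimize(addr)
        return translation_cache[addr]()
    return interpret(addr)

# A hot loop body crosses the threshold and is translated exactly once.
for _ in range(60):
    result = run_block(0x4000)
print(result)   # "optimized 0x4000"
```

The point of the trade-off: the translation cost is paid once per hot block instead of burning OoO scheduling power on every single instruction.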
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
I find it amusing that the half dozen individuals complaining in forums because Denver, a core aimed at mobile, is being compared against Haswell processors that consume twice the power. I think the only way to make those individuals feel good is if a 9W Denver is compared against a 140W Haswell CPU; only then could they repeat their mantra: "See, ARM is slower than Intel x86! I told you!"

It is better if we don't mention to them that the 80W server-class ARM processors provide higher SPECint scores than 100W Xeons. Sssshhhh!
 