Dying Light: Performance Analysis And Benchmarks

It's hard to say whether it's GameWorks or not. I have an AMD GPU (7850 OC 2GB) with an Intel CPU (i7 870) and I can easily pull 60 fps on mid-high at 1080p. It's really hard to say, but it seems Hyper-Threading is clearly helpful in this game. This isn't bad-mouthing AMD, but AMD processors are generally single-thread limited compared to Intel.
 
AND IT CONTINUES
[Image: nvidia-vs-ati.jpg]




Why isn't it okay to just win some and lose some? Plus none of us have any real stake in either company. What's the point?
 


Mantle was made to make AMD's crap CPUs look better vs. Intel and, at the same time, give AMD a leg up on everyone, since it only works on GCN. We had no need for it, as OpenGL can already do this stuff. See Cass Everitt and John McDonald's talk at Steam Dev Days.
https://www.youtube.com/watch?v=-bCeNzgiJ8I&html5=1
http://www.slideshare.net/CassEveritt/beyond-porting
The front page of the slides says it all: how OpenGL can massively lower driver overhead. The room was full of devs with eyes wide open. They just didn't know you could do this stuff (well, Carmack does...LOL, which is what he said on stage previously: no need for Mantle). This just proves Mantle was a power grab that failed. They also had to know it was coming in DX12, as NV said they'd been working on it with MS for four years, and they had working drivers and a Forza demo last March.
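
For anyone who skipped the talk: the two core tricks it shows (persistently mapped buffers and multi-draw indirect) look roughly like this. This is only a sketch assuming a GL 4.4 context; BUF_SIZE, indirectBuf, and drawCount are placeholder names, not from the talk.

// Sketch of the "approaching zero driver overhead" ideas from the Steam Dev Days talk.
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);

// Immutable storage the CPU can keep mapped while the GPU reads it, so per-frame
// updates need no map/unmap driver round trips.
glBufferStorage(GL_ARRAY_BUFFER, BUF_SIZE, NULL,
                GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, BUF_SIZE,
                             GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT);

// ... write this frame's vertex/instance data straight into ptr ...
// (a real renderer also needs fencing before reusing mapped regions)

// Submit many draws with one call instead of thousands of glDrawElements calls.
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, drawCount, 0);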

So Forbes publishes an anti-NV article claiming crap, and ExtremeTech themselves say this in the first paragraph:
"This kicked off a series of events, including our own inability to duplicate similar results "...ROFL. So lots of worry, no proof. AMD just can't afford to do the same, and Mantle isn't open; they couldn't fund it to fruition. Intel was told to take a hike four times while trying to get access. In order to run the code you had to make a GCN GPU, and Intel/NV would never do that. Therefore it isn't open. Are some of the ideas used in Vulkan? Sure, but now in a way all can use. The PCPer article shows it isn't Mantle; it's completely different. It's comical that you're quoting AMD's OPINIONS of GameWorks as fact, but on your own page 3, when you ask devs:
"Developers say: Hurting AMD probably wasn’t the primary motive."

I'll go with the devs when the whiner is the one who consistently mismanages their budget and, because of that, has no money for R&D on core stuff DONE RIGHT, like GameWorks, G-Sync, better DirectX drivers, etc. We'll see if they get it right on FreeSync (LOL@that name) once drivers are finally available (monitors already out, but no drivers, CF drivers a month later, etc). As Sweeney said, there is no need for a company to give up source to valuable IP they developed. But NV will give it for a fee. What do you expect; they are in business to make money. AMD apparently isn't, as they've lost $6B in the last dozen or so years. CUDA makes MONEY, OpenCL doesn't. Same story. Open is great, but if you try to live that way for all things you end up broke.

Everyone in the industry wants to control things; AMD just doesn't have enough money to get their stuff pushed as much. You can blame that on consoles, stripping R&D from core products, and paying 3x what they should have for ATI. AMD needs to concentrate on making better drivers (like NV CLEARLY did with DirectX) instead of whining.

Mantle was going to do the same thing as GameWorks (if it succeeded), but they failed. It only runs on AMD GCN cores (not even older AMD products; you specifically need GCN or go fly a kite). That means no NV or Intel without them paying for a GCN license. They never intended to release it OPEN, and it has NEVER run on anything but GCN. You can bet your arse that if they had, NV/Intel would have had to pay a license fee to make GCN-compatible cores that could run it (basically the same IP fee Intel pays NV now). That is business suicide, so no surprise neither did it nor planned to (Intel tried, but got no code access because it would give away AMD's GCN secrets and, as noted, WASN'T OPEN). Neither side would want GCN as a de facto standard, as it would hurt them every rev from then on since AMD would always have it first. Also, don't forget it was meant to give AMD's lagging CPUs a leg up on Intel.

At least you can pay for GameWorks access if needed, or devs can just ignore it and not support its enhancements (as Sweeney noted, devs do it with eyes wide open). AMD/Intel have played this game many times before (as the article mentions: for AMD it was Bullet physics, NV PhysX, Intel Havok, etc). AMD went into consoles hoping it would give them a leg up on optimizations out of the gate if games were all made on CONSOLE first (AMD GPU/CPU) versus everything else, as they pretty much were last gen. But sales sucked for a year and most realized MOBILE/PC have the unit share, so devs went there first. OOOPS. GDC this month still shows this: PCs garner 56%, mobile 49%, and everything else below 30%. GameWorks MIGHT tilt things NV's way (if abused), but it was designed to help small devs who can't afford to do all their effects in house quickly or cheaply. AMD could get the code too if desired, but, as noted, for a FEE. Again, don't forget Mantle was also going to help give their crap CPUs a leg up on Intel, and again this is why they told Intel to go fly a kite multiple times. It was always designed to make their GPU/CPU look good, not merely to help us gamers get good games. 😉

I don't see Intel sharing stuff like Quick Sync with AMD either. Everyone does this crap if they can afford to. AMD tried and failed with Mantle, and as a last resort handed it to Khronos (headed by Nvidia...LOL) hoping for the best, and then it was gutted (HLSL etc., changes on both the CPU and GPU sides) by the GROUP effort.

ExtremeTech's conclusion:
"But effective vigilance requires clear-eyed analysis — not a predetermined declaration of guilt or innocence. While AMD’s concerns are valid, Nvidia deserves a chance to make a case for how GameWorks can be good thing for the gaming market."

You've got them predetermined as guilty, when, as we've seen with DX11/12 overall, it's probably just crap drivers (HardOCP said they haven't been updated since the game shipped, so no surprise on the perf).
http://www.anandtech.com/show/8962/the-directx-12-perfo...
It's clear NV spent heavily on optimizing DX11 and 12, while AMD wasted money on Mantle, which affects almost no games (NV went after almost all Windows games via DX11 enhancements, and clearly DX12 too). NV was chosen to demo DX12 with an Xbox One game for a reason (the Forza demo). They already had good DX12 drivers, and you see this in AnandTech's review as well. Even the lowly 750 Ti is wasting the 290X in DX11 Star Swarm (never mind the 980). NV's DX11 work affects almost all games on Windows.

http://www.pcper.com/reviews/General-Tech/GDC-15-What-V...
Note how DIFFERENT PCPer's take is on how much Mantle is in Vulkan. While PCWorld just let AMD talk and glamorize themselves (AnandTech did the same), PCPer shows it's a completely different animal and the most hyped feature has been removed (HLSL):
"HLSL was quite popular because Windows and Xbox were very popular platforms to target a game for. When AMD made Mantle, one of their selling points was that the Mantle shading language was just HLSL."
"Khronos is doing something else entirely." Yeah, they totally removed it's main selling point. "Rather than adopting HLSL or hoping that driver developers could maintain parity between GLSL and it, they completely removed shader compilation from the driver altogether."

The PCWorld article is comic. AMD has had unexpected success getting Mantle used in games? Only when AMD paid them...LOL. Whatever. Note Neil's comment in there (head of Khronos and NV's mobile division, came up with OpenGL ES), and Jon Peddie's comment too: no advantage. But read PCPer's coverage of Vulkan and understand it is NOT Mantle, and AMD will have no advantage here; they'll have to make better drivers to get any advantage now. They'll also have to make a better CPU, as Mantle won't be helping them get a leg up on Intel (everyone gets Vulkan/DX12 gains if they optimize for them). You grow your company by putting out a better product than your enemy and getting pricing power. GameWorks creates better games, not worse. It allows small devs to use pre-packaged code that just works for some effects, so they can concentrate on the rest of the game. A big dev might have the resources to make things that work everywhere, but a small one appreciates the packaged effects they don't have to take the time to create themselves.
 


Actually, I've held stock in both companies at various times. I wish I had bought AMD ~two months ago (when it dropped below $2.50), but missed it by not tracking it for a month or two. I only use them as a channel-trading stock these days (a few weeks tops), as management is just so bad I don't have any faith they'll get out of the hole they've dug (they'll get bought for a song one day, or just go bankrupt if they wait too long to sell). Personally I'd like to see them both making a billion+, as that would mean much better GPU gains yearly (more R&D), but AMD stupidly keeps cutting prices, forcing NV to do the same. AMD can't keep losing money the way they have for the last 12 years ($6 billion in losses). At some point they need to CHARGE an appropriate price for a product and hold it there a lot longer; NV will do the same, as they've said they'd like AMD to stop the stupid price war they can't win. Anyone who votes this down needs to understand AMD doesn't make money! I don't like high prices either, but they need to make money, and NV hasn't matched its 2007 earnings in the last 8 years. The price war is stupid and AMD management should be fired for thinking they can win it. In debt, losing cash, and thinking you can win against the other guy, who has a dividend, $500M profits, $3.7B cash, etc? You're an idiot if you think that, but AMD management doesn't seem to read their own balance sheet or Nvidia's.

Great pic BTW :)
 
I run Dying Light at 1920x1200 on my i3-4130 and R7 260X: high settings in the Slums, then shadows and such tweaked down to medium in Old Town, with textures still on high and view distance pulled back a bit. The game still runs just fine on my Radeon card.
 
Why does the next OpenGL API (Vulkan) use the Mantle code as its basis?

Vulkan source code:
vkCmdBindDescriptorSet(cmdBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS,
textureDescriptorSet[0], 0);
vkQueueSubmit(graphicsQueue, 1, &cmdBuffer, 0, 0, fence);
vkMapMemory(staticUniformBufferMemory, 0, (void **)&data);
// ...
vkUnmapMemory(staticUniformBufferMemory);


Mantle64.dll code:
grCmdBindDescriptorSet(cmdBuffer, GR_PIPELINE_BIND_POINT_GRAPHICS,
textureDescriptorSet[0], 0);
grQueueSubmit(graphicsQueue, 1, &cmdBuffer, 0, 0, fence);
grMapMemory(staticUniformBufferMemory, 0, (void **)&data);
// ...
grUnmapMemory(staticUniformBufferMemory);


Uh... From PCPER:
So in short, the Vulkan API was definitely started with Mantle and grew from there as more stakeholders added their opinion.
http://www.pcper.com/news/General-Tech/GDC-15-Khronos-Acknowledges-Mantles-Start-Vulkan

That doesn't rule it out as a secondary motive. Remember that on page 4 the developers say AMD has a right to be concerned. And you're being really deceptive by saying I just quoted AMD's opinions. I clearly quoted the devs' opinions as well. You're disgusting.


Want to see some whining? Whining is when your competitor's cards are beating your own, and you blame them for cheating on benchmarks, even though the setting is optional and your own cards support it as well.

Ultimately the decision to run FP16 Demotion has always been up to the end-user, as the use of Catalyst A.I has been optional since its first inclusion in Catalyst drivers - now many years ago - and as it appears predominantly beneficial or benign in the majority of cases, we'd suggest you leave it enabled.

We also can't help but wonder why NVIDIA attempted to make such an issue over this rendering technique; one that they wholeheartedly support and simultaneously condemn. In the two titles that their hack functioned with we saw no image quality nor performance loss, and rather the opposite: a performance boost. Why haven't NVIDIA included support for FP16 Demotion in their drivers until now for specific titles? Why choose now to kick up a fuss?


Read more: http://www.pcauthority.com.au/Feature/232215,ati-cheating-benchmarks-and-degrading-game-quality-says-nvidia.aspx#ixzz3Ups9xT38
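
For context on what FP16 demotion actually does: the commonly cited substitution is swapping a 64-bit-per-pixel FP16 (RGBA16F) render target for a 32-bit packed float format such as R11G11B10F, which halves that target's bandwidth. Rough back-of-envelope numbers for a single 1080p target (real scenes use several); this is an illustration, not measured data:

#include <stdio.h>

int main(void)
{
    const long pixels = 1920L * 1080L;          // one full-screen render target
    const long fp16_bytes   = pixels * 8;       // RGBA16F: 4 channels x 16 bits
    const long packed_bytes = pixels * 4;       // R11G11B10F: packed into 32 bits

    printf("FP16 target:   %ld bytes (%.1f MB)\n", fp16_bytes,   fp16_bytes   / 1048576.0);
    printf("Packed target: %ld bytes (%.1f MB)\n", packed_bytes, packed_bytes / 1048576.0);
    // ~15.8 MB vs ~7.9 MB per target, per pass: the "demotion" halves the
    // fill/bandwidth cost, which is where the performance boost comes from.
    return 0;
}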

Talking about crappy drivers should go both ways, not only when AMD makes mistakes. Also, AMD's drivers are NOT bad. They're usually a bit late, that's true. But don't confuse the two.

How exactly did they fail when, in less than a year, there are eight games that support it and at least three more rumored to be getting support?

It can. They simply did not invest in writing a driver for VLIW cards. nVidia could write drivers for it.

Your whole argument is baseless, considering I already showed that Vulkan is basically Mantle. And you simply made up the GCN nonsense. It's not true.

Oh so it's ok for AMD to pay nVidia, but not the other way around.

If it failed, why are games still using it? Seriously, the thing was still in beta...

nVidia's track record speaks for itself. The example I posted earlier indicates what policy they have. And so does the GTX 970 fiasco. And yet people keep licking their asses.


Why would AMD say to focus on DX12 rather than Mantle if they were focusing on Mantle instead? Oh right... Mantle 'failed' -_-

And I'll just leave this thread here...
http://hardforum.com/showthread.php?t=1812378

My god... You really ARE disgusting. You're acting as if HLSL was only a benefit for AMD... From your own article link:
The Khronos Group has also tweaked the GPU side as well. Before now, we had a war between GLSL in OpenGL and HLSL in DirectX. HLSL was quite popular because Windows and Xbox were very popular platforms to target a game for. When AMD made Mantle, one of their selling points was that the Mantle shading language was just HLSL. This meant that game developers could keep using the shader language that they know, and AMD would not need to maintain a whole separate compiler/interpreter chain.

This means that it was chosen because HLSL was popular. Note that HLSL was created by Microsoft for Direct3D, NOT AMD (proof here). It also means that nVidia could use it, since their GPUs are definitely enabled for HLSL. You really have this predisposition that Mantle was ONLY for AMD, while the evidence points in another direction. Remember that Mantle was developed in cooperation with DICE, a third party.

Two articles...

AMD states Mantle is open:
http://wccftech.com/amd-mantle-api-require-gcn-work-nvidia-graphic-cards/

nVidia states it's not interested in Mantle:
http://www.kotaku.com.au/2014/07/nvidia-has-no-interest-in-mantle-doesnt-do-anything-you-couldnt-do-before/

Conclusion:
nVidia rejected Mantle, but could have supported it.
 


Intel tried to get Mantle four times; AMD told them to buzz off. You don't even read your own articles (or my post), or get the points of my post. From your own article, look at the title: it requires GCN:
"Of course heres the thing we are not sure about. Mantle was clearly designed with GCN in mind, so when AMD talks about other vendors being able to utilize Mantle does that mean that Mantle will work on their current Architecture? Or will the actual architecture of rival vendors (Nvidia) be need to be modified to support Mantle? If its the later then this is a very subtle move from AMD’s side pushing towards a Red Future. Another thing we dont understand is what was up with all the apparent hints that Mantle will be GCN only. Unless AMD suddenly decided to make Mantle, Multi-Vendor (Unlikely) AMD had been planning this all along yet all information previously pointed towards a GCN Only Mantle API."

All they have here are questions, no answers, just what AMD says, which to this day is STILL not true. It still ONLY works on GCN. We have only their word that it would EVER work on anything else. Now they've shelved the whole thing, so there will never be proof it could do otherwise.

From your second link... Again, look at the title: Mantle does nothing you couldn't ALREADY do, which is exactly what NV said and exactly what Cass Everitt showed in the video I linked from Steam Dev Days. Again, AMD rejected Intel. There's no proof NV could do anything without building a GCN core. All you have is AMD's word and a slide. I can make a slide saying I'm the richest man in the world, but at some point I'd have to prove it's true. There is nothing in the Kotaku article that says they could support it without building a GCN core. Also, they already knew you could do the same with OpenGL (I'm not talking about Vulkan here; Cass's talk came LONG before that, and it's about OpenGL 4.0+), and they were already working with MS on DX12, hence being the ones to demo Forza last March.
"With Mantle, it’s not really doing anything that you couldn’t do before."

Keep drinking the kool-aid. I digress...
 
It's hard to acknowledge the truth: in this case AMD was left out, and you need Intel/Nvidia to run the game properly. I like AMD, but this obsession they have with APUs is letting devs turn to new tech on the desktop CPU front. K12 needs to happen now, not in two years.
 

APUs and SoCs are where the bulk of consumer and office computing is going, so AMD does not have much choice but to take that seriously. There is also Intel doubling up on the IGP front with Broadwell, and soon Skylake, with some models having that 64-128MB on-package L4$.

As much as AMD could use a miracle right about now, they cannot really afford to release it before it is done - I doubt AMD would survive another launch plagued by mysterious CPU bugs.
 


I still think AMD could make a wild comeback if they would just release a PURE CPU with massive IPC, as Intel is currently completely infatuated with adding more GPU and, as such (pulling an AMD here), ignoring GAMERS and power users. All of us just turn the on-chip GPU off. AMD could leap above Intel quite easily if they stripped the GPU and put all those transistors into a PURE CPU. Those chips would garner HIGHER PRICES :) I would buy one myself, and I'm sitting here cringing over being forced to order a Haswell today (literally today...LOL). But that build is temporary to get me by for a bit, then will be handed to my dad, and I'll build whatever later this year (hoping for something AMD), but probably another Haswell, as I have 16GB of memory I won't toss out for Intel's new memory socket to get into Broadwell. Having said that, I could easily dump the memory (worth more than I paid, currently) if AMD put out a MONSTER IPC CPU, even if it required DDR4.

They have the guys to make an IPC monster; management just isn't smart enough to do it. All of my complaints regarding AMD are management related 🙁 AMD's die is ~245mm^2 IIRC (a die shrink could keep it there, or drop it, and still put out a dominant CPU), and Intel's is below that; just make it all CPU and win back gamers (many of whom are in IT). We would evangelize the crap out of AMD given the chance again. I don't believe AMD will ever have pricing power in APUs (not with Intel coming down and ARM coming up; AMD gets squeezed to death), so they need a chip they can make some big bucks on (CPU ONLY).

http://hothardware.com/reviews/AMD-Kaveri-Update-A107800-APU-Review
47% of 28nm Kaveri is GPU. You could easily see a doubling in CPU perf if all of that went to CPU only. Intel can price their APUs to death, but you couldn't price a true CPU WINNER to death; AMD would get their just rewards, just as years ago when they smacked Intel around for a few years (the glory days, when I used to sell them easily vs. Intel). They could charge $400 for the top chip, maybe more. I'm not talking 225W either...LOL. Get that back down to a reasonable 100W at most for a $400 monster. QUAD core, not 8, as there are too many places where 8 just gets killed because most consumer stuff is aimed at 4 cores or fewer. Blow the doors off Broadwell/Skylake. That should be the priority.
 

If achieving "massive IPC" was easy, everyone would be doing it. For gaming and most other interactive processes, what matters most by a wide margin is single-threaded IPC and this is not something you can get around of by simply throwing more cores at the problem. Even if AMD made a 32 cores chip, it would still lose most mainstream software and gaming benchmarks to the i5/i7. If AMD wants to earn gaming market share back, they have to fix their per-core IPC. Extra cores are useless if the software is not sufficiently finely threaded to make any actual meaningful use of them. While most modern software may have 50+ threads, it is usually only 3-4 threads of the lot that account for the 90+% of the application or game's total CPU usage - use Process Explorer, view the Thread page in the process' properties to see how much CPU usage each thread actually generates.

Also, AMD has just about sworn to bet the barn on heterogeneous computing, so going back to producing "pure CPU" chips would mean setting that barn on fire.
 


Meh......I went to the moon, then to mars and then came back.
 


I don't get it. You just basically said what I said. NOT 8 cores or more... I said get IPC up, do it in 4 cores and under 100W. I said reduce their cores from 8. Who said throw more cores at the problem? Are you responding to someone else? Transistors are not the same as cores 😉 The 47% wasted on the APU could be dedicated to raising IPC, etc. Apple does it all the time with bigger dies; their chips have more transistors for CPU, and their dual cores regularly beat quads, with the tri-core now doing much the same (and at lower clocks in both cases). Everyone can do it with the right team; they just choose incorrectly, caving to "blue crystal" marketing people instead of the engineers (ask some of the old guys from Intel about this). I.e., 8 cores on a phone, instead of something more geared to IPC like Apple's dual/tri-core chips at far lower speeds.
http://www.anandtech.com/show/8716/apple-a8xs-gpu-gxa6850-even-better-than-i-thought
Apple bought PA Semi, added some people, then got the job done. Probably one of the best ~$278M Apple ever spent 😉 Nvidia just put out a dual core that does a pretty nice job of competing with quads too, just not quite as desktop-like as Apple's custom ARM core. But Apple is also working hard to get Intel out of their Macs, so they're going straight for desktop-like chips. Next stop for them: 4GHz, and in a Mac :) Maybe some laptops first as they get there, but you should get the point.

Well, if you bet on the wrong barn (it makes you no money, for who knows how much longer), perhaps you should light it on fire (at least temporarily)...LOL. Nothing wrong with admitting you made a mistake (cough, 3x ATI price, cough, consoles instead of core CPU/GPU/drivers, cough, laying off 30% of your workers because of this crap, cough, cough). You don't have to stop making APUs; you just have to make some money first so you can make a BETTER APU. They've currently bet the farm on something that will get squeezed to death by the Intel/ARM race (down and up). There will NOT be any form of pricing power with those two racing to the middle, pushing profits further down (killing more engineers, no doubt). AMD can't do anything better than those two in this area for a while, and Intel is willing to throw away $4B+ a year (AMD's total yearly revenue) on this low-end junk to stop ARM.

32 cores? Did you even read my post? Complete opposite of what I said to do.
 

Here: "47% of Kaveri 28nm is GPU. You could easily see a double in cpu perf if that all went to cpu only."

The only way to convert that 47% of the die into a significant total IPC increase is to add cores. Re-designing the pipeline to increase per-core IPC does not require doubling the space used per core. If you make the cores nearly twice as big, you roughly double signal propagation times across the core, which means having to either use longer pipelines (I hope you remember what happened when Intel tried that... the Willamette P4 had to reach 2GHz to start consistently beating the 1GHz P3 in benchmarks) or slower clocks.

Improving per-core IPC is not as simple as re-allocating transistor budget, especially if you do not want to sacrifice clock frequency or increase pipeline depth, either of which could end up yielding worse overall throughput, from the IPC x clock product (frequency loss > IPC gain) or from the execution pipeline stalling more often and for longer due to data dependencies.
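
That tradeoff is easy to put in numbers: per-thread performance scales roughly with IPC x clock, so an IPC gain that costs more in frequency is a net loss. A toy illustration with made-up percentages:

#include <stdio.h>

int main(void)
{
    // Hypothetical numbers, only to show the IPC x clock tradeoff.
    double ipc_gain  = 1.15;   // +15% IPC from a wider, bigger core
    double freq_loss = 0.85;   // -15% clock from longer wires / timing pressure

    printf("relative performance: %.2f\n", ipc_gain * freq_loss);  // ~0.98: a net loss
    return 0;
}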
 


Ha, only difference is that I actually went 😀

Highly recommend!
 


Sure I remember, as I was selling WHITE BOX Asus motherboards with no name on them because Intel had everyone so scared to sell an AMD board (no name on the board or box!). You're taking an extreme mistake Intel made in response to AMD (the GHz race, Prescott ("Presshot" as I call it), etc...LOL, the "blue crystal" marketing crap I mentioned) and acting like people need to go that far. They don't; that mistake was mostly caused by AMD actually delivering HIGH IPC at the time and beating Intel in everything. You do not have to add cores; you can allocate transistors any way you want to make 4 cores BETTER, just as Apple, Nvidia, and Qualcomm do in their custom ARM cores. With the right team you can get the caches, prediction, pipeline, etc. right. AMD needs a potent quad, just like Intel has, but with nothing spent on GPU. Apple's tri-core runs at 1.5GHz but beats 2.3-2.5GHz quads on the same ARM tech. Before the A8X, they were doing it with dual cores running 1.4GHz IIRC; yes, shown below:

http://www.anandtech.com/show/8554/the-iphone-6-review/3
"With Cyclone Apple hit on a very solid design: use a wide, high-IPC design with great latency in order to reach high performance levels at low clock speeds. By keeping the CPU wide and the clock speed low, Apple was able to hit their performance goals without having to push the envelope on power consumption, as lower clock speeds help keep CPU power use in check. It’s all very Intel Core-like, all things considered."

With a 7% clock speed bump they were able to get a 20-55% improvement (clearly more than just the extra 100MHz was at work).

http://www.anandtech.com/show/8666/the-apple-ipad-air-2-review/2
http://www.anandtech.com/show/8716/apple-a8xs-gpu-gxa6850-even-better-than-i-thought
Apple's evolution is covered in detail. Note the A8X is about the size of the A5 (much smaller than the A5X). They don't have to use every transistor from the GPU; the point is they have a LOT to play with to get IPC up, no matter how they accomplish it. If you're cherry-picking a failure and saying everyone has that problem, explain how Intel is currently kicking the crap out of AMD by doing it properly 😉 Since AMD gave up, Intel has no need to optimize the crap out of the CPU side or add transistors to it.

I guarantee you that if Intel decided to drop the GPU from their chips and dedicate the transistors to CPU ONLY, their cores would go up by more than 5%, which is about what we get from Sandy, Ivy, Haswell, and it looks like Broadwell too, as again they spend everything gained from the die shrink on GPU. If I thought Broadwell was going to be awesome I probably wouldn't have a Devil's Canyon coming today :) But I'm not sure AMD has the money to fund it, though again, they do seem to have the people they need now.
 

No, they wouldn't.

Intel already has some of the highest single-threaded IPC of any CPU architecture ever made if you exclude CPUs that use explicitly parallel instruction set architectures and to get there, they already need out-of-order execution that looks up to 192 instructions ahead to find 4+ instructions to stuff down available execution ports on every cycle, which is also one of the deepest re-order queues in existence in a modern practical design. The look-ahead depth is primarily dictated by how much logic Intel can cram in the instruction scheduler without having to add a pipeline stage in the scheduler and incur an extra latency penalty hit there. It is the same general pattern across the chip: they only add features when they can afford to do so without incurring extra pipelining penalties. Intel got their lesson with the P4 and are in no hurry to repeat it.

ARM chips are still several years of development behind current x86 chips and not all desktop-style performance enhancements have enough bang-per-watt to make sense on SoCs, so there is nothing unusual about different in-house ARM cores having different performance characteristics - the custom ARM core market is still in a state of wild flux. Not so long ago, most ARM-based cores were strictly in-order without speculative execution nor L2/L3 cache and their performance thoroughly sucked compared to desktop chips. Intel axed many of those same performance enhancements in their first attempts at competing against ARM on power to take a shot at mobile devices and then had to find more power-efficient ways to add most of them back in to catch up on performance.

There is no magic tweak to increase per-core IPC overnight unless your current design is grossly sub-optimal. When you start butting against theoretical limits, such as pipeline stalls due to data dependencies and the instruction mix in a typical x86 application, it becomes a steep uphill battle.
 


You always talk in extremes. It is ridiculous to say Intel couldn't improve by lopping off the GPU and dedicating everything from it to CPU. If you could do it overnight, there wouldn't be a point to AMD trying to get the job done now, would there? Intel would just answer 24 hours later. But that isn't how it works. Jerry Sanders once said building CPUs is like Russian roulette, only it takes five years to find out if you're dead. Denver was in the works for five years (scrapping an x86 version first, but still). K12 is probably mid-to-late 2016 (the ARM one maybe Q1, the x86 one later, they say), and it didn't just start development; they could have a monster FX coming here. We'll know next year, I guess. You should tell Intel to stop attempting new CPUs, since InvalidError says they've hit a wall (I say an artificial one, due to nobody like AMD pushing them, but ARM is coming). We will disagree (as usual) forever on this. We're not pushing theoretical limits here. We're pushing no-reason-to-improve limits...LOL.

http://wccftech.com/intel-skylake-benchmarks-leaked-sisoftware-sandra/
Just a rumor/leak, but if true, Skylake might bring up to 20% more CPU performance.
"The CPU arithmetic score is actually quite good, nearly matching the i7 4810MQ which has a base clock of 2.8Ghz. Suggesting that intel will likely introduce worthwhile IPC improvements of roughly 20% with Skylake over Haswell."

Impossible...right? 😉 I say maybe 15%, but whatever; at this point more than 5 would be nice...LOL.
http://vr-zone.com/articles/unlocked-intel-skylake-desktop-cpus-arriving-by-q3-this-year/86221.html
"Intel’s Skylake architecture will bring a number of new features to the table, including higher Instructions per clock (IPC)"
I could go on, but nobody seems to think it's impossible. We should expect better from a TOCK.

Intel keeps improving while spending everything on GPU (even if I hate the ~5% yearly they spit out). If you look at them five years from now, I'm fairly certain their CPUs will score ~25% or more higher than today's, quad to quad (just as they've been doing). Impossible if you were correct (maybe even a bigger jump if someone forces them to work on CPU instead of GPU), and again, they're basically ignoring CPU: 95% of their attention is on stopping mobile from moving up, not making desktops better. Intel slowed down CPU enhancements the day AMD said "we give up". I'm done here. You're wasting my time. I have a PC to upgrade. Sad, I just bought Devil's Canyon and according to you it's the last CPU I'll ever buy, since Intel can't improve on it.../sarcasm
 

Haswell was a tock and it was a 5-7% single-threaded IPC improvement. Sandy was also a tock and it was a ~7% IPC improvement, Ivy was a tick and it still had a ~5% improvement from refinements Intel managed to sneak in between major architectural shuffles. Intel's history of doing 5-7% improvements regardless of tick or tock goes back more than four years.

Another major omission to keep in mind: your 20% IPC improvement is relative to Haswell. You forgot Broadwell in-between and Broadwell will likely be a 5-7% improvement over Haswell due to a dozen architectural tweaks including an execution issue ports re-shuffle. The optimist in me does not expect Skylake's IPC to be more than 10% better than Broadwell without the optional 64-128MB L4$.
 


Thanks for making my case. Again, you say no IPC improvements, then describe exactly what I said to you to begin with...LOL. It was impossible to raise IPC according to you a few posts ago, but now they do it yearly, just like I said. Where is that wall you mentioned? And AMD is so far behind, I'd say their IPC is abysmal and they could make a massive IPC increase. My statements still stand (AMD gave up, Intel slowed down on CPU). I expect more than 5% from each rev (especially TOCKS), though as noted we seem to get about that, tick or tock. But they keep hindering it by going all GPU (Broadwell seems the same, and even Skylake looks like possibly all GPU again, or 20% IPC, so who knows...LOL, all speculation at this point). I didn't omit Broadwell, I just don't expect much from it, which is why I gave up and ordered Devil's Canyon (and a Gigabyte UD5H-BK), and will likely order another for dad later (I can wait on Broadwell results for him, maybe even Skylake since it's supposedly this year).

A guy wrote an article on Motley Fool a few days ago asking why they're even making Broadwell unlocked, since (if his assumptions are correct) it's clocked far lower than the i7-4790 to hit that 65W. People are showing a 3.3 base + 3.8 turbo in a few places also (what the???...). Why include something that is dead to me? His thoughts are exactly why I made my move a few days ago (Newegg's certainly fast, 2 days...WOW, and I ordered at 5pm). Broadwell seems aimed at GPU and lowering power, not adding perf for enthusiasts. Skylake at least goes back to 95W (some CPU perf in there, maybe?), but probably just because it will have full Iris (assumed). The 4790 seems to be the best bet for quite some time, so I bit the bullet.
 

I never said impossible. I said increasingly difficult and expensive because they are approaching the theoretical limits.

There is only so much instruction-level parallelism that can be extracted from a single execution thread even if you had infinite re-order queue depth, infinite cache and infinite register files. This is true for any software compiled to any instruction set on any architecture. A real-world CPU on the other hand has practical limits on all of those parameters and more, which get pushed back with each design iteration and process shrink as they become able to squeeze those tweaks in without negatively impacting other more performance-critical parameters.
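
One way to see that limit on your own machine: run the same additions once as a single dependent chain and once split across four independent accumulators the out-of-order engine can overlap. Toy microbenchmark; results vary by compiler and CPU, and aggressive auto-vectorization can blur the gap:

// The dependent chain can only retire one add per chain step; the 4-accumulator
// version lets the out-of-order core keep several adds in flight at once.
#include <stdio.h>
#include <time.h>

#define N 200000000UL

int main(void)
{
    volatile double seed = 1.000000001;   // defeat constant folding
    double x = seed;

    clock_t t0 = clock();
    double a = 0.0;
    for (unsigned long i = 0; i < N; i++)
        a += x;                            // every add waits on the previous one
    clock_t t1 = clock();

    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    for (unsigned long i = 0; i < N; i += 4) {
        s0 += x; s1 += x; s2 += x; s3 += x;   // four independent chains
    }
    clock_t t2 = clock();

    printf("dependent chain: %.2f s (sum %.1f)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, a);
    printf("4 accumulators : %.2f s (sum %.1f)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, s0 + s1 + s2 + s3);
    return 0;
}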

When you have to double the size of a cache, branch target buffer or other major resource only to gain a 2-3% performance improvement, the cost-to-benefit is starting to scrape the deep end. That's what I meant by steep uphill battle. The design cost of each extra 1% grows exponentially.

Even if they could magically double the CPU core's transistor budget without affecting timing closure or pipeline length, the resulting CPU would still be nowhere near twice as fast.
 


InvalidError said:
somebodyspecial said:
I guarantee you if Intel decided to drop gpu from the chips, and dedicate the transistors to CPU ONLY, their cores would go up by more than 5%

No, they wouldn't.

Sure sounds like you're saying impossible (that is a categorical denial there: NO, THEY WOULDN'T) to go up past 5% even if all the transistors were dedicated to CPU (which happens to be about 47% more transistors, be it for cache or whatever). That is why I said your statement was ridiculous. You don't have to get 2x faster (there go your extremes again) to beat Intel's IPC. AMD is behind, but not THAT far, and they've done it before. You seem to be missing my point, though. AMD needs to do what it takes to BEAT Intel while they're distracted chasing the GPU side. If that means scraping the bottom, that's what you do to get pricing power so you can make some money for a while. If all the benchmarks started showing AMD winning by 5% (or whatever %, instead of losing by a lot more), they'd start getting recommended again by many like me (people in IT, etc). There's nothing magical about making a PURE CPU without a GPU.

You keep wasting words telling me how hard it is, rather than getting the point that it is doable and is exactly what AMD needs to do. 😉 With Samsung/GF sharing tech now, pumping them out wouldn't be a problem (like they had years ago when they were on top and production constrained). A large portion of enthusiasts would love to buy a GREAT AMD CPU. Un-tracking this, as the conversation is pointless now. 😉
 