AMD CPU speculation... and expert conjecture

Page 719

logainofhades

Titan
Moderator
DX12 will help, but it will not change the landscape of things. DX12 will benefit the i3/i5 as well. Yes, AMD will see a gaming performance benefit with DX12, but they will still be behind Intel. AMD needs its new arch, and it needs to be far better than what it has now. FX, for the vast majority of people, isn't noticeably better than a Phenom II X4/X6. Those with such systems would be wasting their hard-earned money on FX. The only true upgrade they have is an i3 or i5, depending on which Phenom II they have.
 
Sigh... do you imagine perf/price will improve for AMD CPUs and SoCs? And by how much?
 

logainofhades

Titan
Moderator
Unless AMD severely cuts its price, not really. You can only help a 2+ year old architecture out so much. With its horrible TDP, AMD pretty much has to have an aftermarket cooler, and that eats into the pricing. If AMD were smarter and paired the FX 6300 with the FX 8320's stock cooler, it would give them a slight edge on the low end. It would make recommending the FX 6300 much easier. Generally, I only do that for low-budget folks living near a Micro Center. The crappy all-aluminum heatsink it currently ships with is only good as a paperweight.

 


And what you just described is lazy programming. Not everyone is running a Haswell chip. There are still Core 2 Quads, Phenoms, FX, Nehalem, etc., out there. Why not throw in a few lines of code to make the game run well everywhere, rather than just taking the lazy way and running on two cores?

All that said, I'm NOT taking a position on this question. I'm simply pointing out that "the program runs well on a two-core Haswell chip" is an invalid argument and doesn't absolve the developer of any responsibility for making their program perform well on other hardware. If the power is there, and they choose not to use it, that can fairly be blamed on them.


 

logainofhades

Titan
Moderator
Programmers are lazy, but when the vast majority of users are on Intel, why would you spend time coding for the small selection of people using AMD? AMD's IPC is so pathetic that even a Core 2 Quad is technically faster, clock for clock.
 
As I said, you wouldn't just be coding for the FX. There are plenty of Core 2 Quads, Phenoms, and other slower quad-core+ chips out there. I also suspect this group represents a decent chunk of users.

Again, that's the argument. I'm not sure which side I fall on at the moment, but it is a valid argument.
 
It's not about programming for Intel or programming for AMD; it's about using a compiler from this decade.
 


And yet in the same graph, we easily see that the same E8400 keeps up with a current i3... so is the i3 totally rubbish as well now? I mean, actually, the i3 is the fairer comparison (dual core vs. dual core).

I think the key point here is that Skyrim has *terrible* threading capability. The i5 pulls ahead, most likely due to turbo boost. The AMD chips, the i3, and the C2s are all clumped together - the software simply isn't written to make use of modern processors. Having said that, in the developer's defense, even a lowly Athlon II (which is just as old as the E8400) is *sufficient* to play the game smoothly. It's probably a case of them optimizing it to where it needed to be. Could they improve it by making better use of modern processors? Absolutely. But did they need to?
 

juanrga

Distinguished
BANNED


Every future design I know of uses sockets. No engineer uses slots.
 

juanrga

Distinguished
BANNED


DX12 and OGL-next will free CPUs from doing certain work, but surprise! Game developers will find ways to keep CPUs busy again, for instance by introducing better AI and better physics effects, or by increasing the number of draw calls in future games... In the end, the Intel CPUs will again be ahead of the AMD CPUs.

Low overhead APIs are not a magic solution to AMD's lack of competitiveness. The only solution is... faster cores. :whistle:
 

Cazalan

Distinguished


Yeah, it was nice when you could get some cooling on the back of the slot as well, like they do with dGPUs.

Of course, they could design motherboard trays differently to allow a heatsink to attach to the bottom of the motherboard. The ground plane in a motherboard can sink a decent amount of heat on its own.
 

jdwii

Splendid


Note: I don't own an i7 920, sadly (I like doing tests), but I notice that most of the time an FX-8350 is around an i7 920 in terms of modern gaming.
 

jdwii

Splendid


Probably the same, since CPUs don't physically get faster, and it's not like DX12 gains are only noticeable with one vendor. Keep in mind they already did these tests and it doesn't change anything; the gap between the two just got smaller.
 

jdwii

Splendid


I agree, and sadly I might have to live with stock cooling until the 212+ gets back in stock. I've heard some pretty bad horror stories about i7 4790K heat issues with the stock cooler.

Note that I, and many others far more qualified than me, did benchmarks and showed that the difference between CPUs stays the same. If X < Y with DX11, it will still be X < Y with DX12, just by a smaller amount.


Edit: The point about making a game work great on a wide range of hardware is a good one, but calling the programmers lazy is such a crazy thing to say; I mean, they don't even make those decisions most of the time. Not to mention most of them are probably being used to make console games anyway; the PC teams are probably a lot smaller than the console ones.

Of course, more programmers doesn't always mean more gets done. I've yet to find a single programmer in my life who would rather work in a team than alone.
 

I laugh every time someone brings up Skyrim; it was a sh!tty console port that plays horribly on PC at stock. Once you start tossing on mods, especially AI and other "world" types, the performance characteristics change a bit. The older i3s are no longer sufficient; it's down to an FX-6 and i5 or higher. Newer Haswell i3s could probably handle it just fine, though, owing to their superior HT implementation.

Honestly, it's mostly just people wanting to hate on something; they don't need much of an excuse to do that. BD was primarily a server-oriented CPU and it does remarkably well for its cost. It didn't sell well because the server market is dominated by a few OEMs whose business deals revolve around combined Intel packaging sold to them at a discount (MB chip + CPU + NIC + supporting hardware). Intel markets a total package to OEMs and can guarantee supply, while AMD struggles with this. Now it seems AMD is going back to its budget/value roots, looking to offer a competitive product at a reduced cost, which they seem to be doing well.
 


That's only because the tests were done using silly combinations like a Pentium G / i3 + Titan / x80 GPU. In realistic situations the GPU will be the performance limiter, not the CPU. The short-term implication is that "you can game on a dual core!!!!!", which will last all of one year. The long-term impact is that developers will start looking at other things to pack onto the CPU, and that's where things get interesting.

It's very difficult to parallelize the primary logic loop of a game, but it's not difficult to parallelize things like AI, physics, and environmental effects. Dynamic environments lend themselves particularly well to wide processors. We'll start seeing those additional resources being utilized more.

Also, please understand that the last six years have been console hell for the PC market. Nearly every major game developer went out of their way to design everything for consoles, because the mantra across the industry was "PC is dying, all hail consoles!" PC versions of games were hastily thrown-together ports, with dirty patch code used to "make it work with minimum manpower" right before release. That seems to be changing now, with developers realizing there is a very profitable market segment to sell to: chiefly guys who grew up with Nintendo and early PC games and who now have nice-paying jobs and can afford high-priced powerful toys called "gaming rigs". That market segment absolutely hates trash console ports and will vote with its wallet.
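To make that concrete, here's a rough sketch of the task-parallel approach described above. All of the names are stand-ins rather than any real engine's API, and in practice each subsystem would have to work on disjoint or double-buffered data to avoid races:

#include <functional>
#include <future>

// Minimal task-parallel frame: the hard-to-parallelize game-logic loop stays
// on the main thread, while the more independent subsystems run on other cores.
struct World { /* entities, terrain, weather, ... */ };

static void ai_update(World&, float)      { /* path-finding, decision trees */ }
static void physics_update(World&, float) { /* integrate rigid bodies */ }
static void effects_update(World&, float) { /* particles, environmental effects */ }
static void game_logic(World&, float)     { /* the serial core of the game */ }

void frame(World& w, float dt) {
    auto ai   = std::async(std::launch::async, ai_update,      std::ref(w), dt);
    auto phys = std::async(std::launch::async, physics_update, std::ref(w), dt);
    auto fx   = std::async(std::launch::async, effects_update, std::ref(w), dt);

    game_logic(w, dt);               // main thread keeps the serial loop

    ai.get(); phys.get(); fx.get();  // join before the frame is rendered
}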
 

noob2222

Distinguished


Programmers love to try and implement an artificial maximum for dual-core CPUs. That way they can turn around and say, "SEE, WE DON'T NEED MORE CORES," when what they are really thinking is, "Man, I hate programming for quad-core CPUs."
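To put that complaint in concrete terms, the kind of hard cap being described would look something like this; a made-up sketch, not code from any actual engine:

#include <algorithm>
#include <thread>

// Hypothetical engine startup: the hardware may report 6 or 8 cores,
// but the worker pool is clamped to 2 regardless.
unsigned pick_worker_count() {
    unsigned hw = std::thread::hardware_concurrency(); // e.g. 8 on an FX-8350
    return std::min(hw, 2u);                           // artificial dual-core ceiling
}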

Skyrim is plagued with laziness in the programming; it's a copy-paste engine with a few graphical tweaks thrown in. It's cheap and fast, and programmers can concentrate on content and ignore the rest. Software devs are after the same goal as any other company: profit. Why spend money on tweaking when you can just ship it and call it done?
 

Reepca

Honorable


Consider one met - as a homeschooled 12th grader, I can honestly say that I would love to work in a team. A project with only one person's ideas, one person's knowledge and one person's opinion of their own ideas for critique (to say nothing of one person's memory and thinking capacity) is awful to work on. It's ironic since one of the things I hated most in public school was "group projects", but now that there's meaningful stuff to be done I wish I had others to help.

I'm sure that as I meet other programmers to feel frustrated, exasperated, and generally hateful towards, this will resolve itself.
 

Reepca

Honorable


Question: When you talk about parallelizing AI/Physics/Environmental Effects, are you referring to data-parallelism or task-parallelism? For example, I have a hard time imagining one thread drawing and one thread handling physics to be very effective - the position of the objects depends on physics, so drawing must wait on the physics-handling thread, and it ends up serial anyway. If it's data-parallel, on the other hand, it seems straightforward enough to have a different thread handling the interactions between one object and the surroundings for each moving object (or for a group of objects, since there are likely more objects than cores and there isn't anything to be gained from having more threads than physical cores).
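If it helps, a minimal data-parallel sketch of what you're describing might look like this (the object layout and the integration step are just illustrative):

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Body { float x, y, vx, vy; };   // illustrative per-object state

// Data-parallel physics step: the same integration code runs on every core,
// each thread owning a contiguous slice of the object array, so there are no
// shared writes between threads.
void integrate_parallel(std::vector<Body>& bodies, float dt) {
    unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (bodies.size() + n_threads - 1) / n_threads;
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < n_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end   = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&bodies, begin, end, dt] {
            for (std::size_t i = begin; i < end; ++i) {
                bodies[i].x += bodies[i].vx * dt;
                bodies[i].y += bodies[i].vy * dt;
            }
        });
    }
    for (auto& w : workers) w.join();  // rendering waits here, as you noted
}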
 

Ah, the cooler thing. With CPU overhead reduced in DX12 games, do you think heat dissipation and noise will be problems? Especially with CPUs like the FX-6300 performing closer to i5s and the Athlon X4 -K chips performing closer to Core i3s. Since both the FX and Athlon chips are unlocked, won't it be better to upgrade to an aftermarket cooler? I wouldn't have brought up a cooler upgrade before, but DX12 reducing CPU overhead, plus the possible improvement in perf per price, makes me think a cooler upgrade will be worth the money. Even in the worst-case scenario, i.e. DX11-like CPU-bottlenecked performance, users would get better non-gaming application performance. I am not talking about raw gaming performance; I am talking about a sub-$120-150 FX-6xxx or Athlon X4 CPU performing close to a $190+ Core i5 CPU.
I checked the Newegg prices; you don't need a Micro Center to get a cheap CPU, as AMD has lowered prices already. Only the Kaveri APUs are slightly overpriced.
 

Reepca

Honorable
Perhaps a bit off-topic, but someone should start an HSA thread for those interested in discussion/news/programming about it. There aren't exactly any HSA programming tutorials/introductions on the internet - is the requisite software even finished yet?
 

truegenius

Distinguished
BANNED


Or we can say that their new flagship FX-8350 is close to their old flagship X6 1100T,
even after using all the advantages like a smaller process, higher frequency, more cores, power recycling (resonant mesh), new instructions, aggressive turbo, near-optimal core voltage, and a better memory controller.
http://www.anandtech.com/bench/product/697?vs=203
And what's shocking is that the X6 1100T could blow FX out of the water with just an uncore modification.
Hail K10 :mmmfff:
 


AMD made a bet that software engineers (like me) could magically make things parallel, which would improve performance. Two problems caused this to not work as AMD expected:

1: Games aren't CPU limited, and haven't been since the Core 2 days. If the CPU is getting its work done, adding more cores has ZERO benefit; IPC/Clock drives performance from that point forward, and that's exactly where BD is lacking.

2: GPGPU took off about the same time, so the types of productivity software that would have benefited AMD the most went the OpenCL/CUDA route instead, as the gains were higher.
 


If the processing flow is not parallel, you can't extract any speedup by adding more processing elements. IT'S THAT SIMPLE.
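That's Amdahl's law in a nutshell. A quick sketch of the arithmetic, using a made-up 40% parallel fraction rather than a measurement of any real game:

#include <cstdio>

// Amdahl's law: if a fraction p of the work parallelizes across n cores,
// overall speedup = 1 / ((1 - p) + p / n).
double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    std::printf("4 cores: %.2fx\n", amdahl(0.4, 4));  // ~1.43x
    std::printf("8 cores: %.2fx\n", amdahl(0.4, 8));  // ~1.54x
    // even infinite cores cap out at 1 / (1 - 0.4), i.e. ~1.67x
}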
 


Here's the problem: AI and physics get really complicated really fast. That's why, to this day, we don't see complex AIs that interact with each other; you instead have standalone AI processing. Same reason why multi-object dynamic physics is VERY carefully scripted. Right now, you can more or less treat each AI/physics instance as its own independent thing, and you don't have many interactions between them. But once they start interacting, oh boy, the processing requirements jump through the roof. That's why no one has really attempted much in these areas in over a decade.

In terms of the horsepower required, I think physics especially will eventually become more complex than even rendering, ray tracing included. That's why I was a very early supporter of Ageia PhysX, since I believe the only way developers will ever move beyond the very simplified physics we have now is if you have a unified API and dedicated HW to do the task. But that ship has long since sailed.

What I want is that when you shoot a bullet in an FPS, its path is determined not by linear models, but by the bullet and gun, the stage environment, weather conditions, etc. Whether a bullet passes through obstructions should be determined by the individual bullet physics equations, not by some hardcoded value that's set for a selection of weapons. I want damage determined by the type of bullet combined with the impact velocity/angle it hits the target at, as well as where it hits. And so on. But nope, we're still stuck in the age of hardcoded damage values.

Part of it is simplicity, part of it is time, part of it is laziness, and part of it is that doing this gets REALLY expensive really quickly. And while I want all this, I also accept we're not even close to having the processing power to actually do this at any reasonable framerate.
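For what it's worth, the per-bullet simulation described above doesn't have to be exotic. A rough sketch might look like this; the drag model and constants are placeholders, not real ballistics data:

#include <cmath>

// Per-bullet trajectory step: gravity plus a simple quadratic drag term,
// biased by wind. A real game would pull the constants from per-cartridge
// data and the weather system instead of hardcoding per-weapon values.
struct Bullet {
    double x, y, z;      // position (m)
    double vx, vy, vz;   // velocity (m/s)
    double drag_coeff;   // lumped drag constant for this bullet type
};

void step(Bullet& b, double dt, double wind_x, double wind_z) {
    double rvx = b.vx - wind_x, rvz = b.vz - wind_z;     // velocity relative to the air
    double speed = std::sqrt(rvx * rvx + b.vy * b.vy + rvz * rvz);

    b.vx += (-b.drag_coeff * speed * rvx) * dt;          // drag opposes motion
    b.vy += (-9.81 - b.drag_coeff * speed * b.vy) * dt;  // gravity + drag
    b.vz += (-b.drag_coeff * speed * rvz) * dt;

    b.x += b.vx * dt;  b.y += b.vy * dt;  b.z += b.vz * dt;
    // impact energy at the hit point would then be 0.5 * m * speed^2,
    // not a hardcoded per-weapon damage value
}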
 