I am comparing FX to a 4.0ghz E8400. Even the 3.4ghz C2Q held its own well. That is just pathetic.
And yet in the same graph, we easily see that the same E8400 keeps up with a current i3... so is the i3 totally rubbish as well now? I mean actually the i3 is a fairer comparison (dual core vs dual core).
I think the key point here is that Skyrim has *terrible* threading capability. The i5 pulls ahead, most likely due to turbo boost. The AMD chips, the i3, and the C2s are all clumped together; the software simply isn't written to make use of modern processors. Having said that, in the developers' defense, even a lowly Athlon II (which is as old as the E8400) is *sufficient* to play the game smoothly. It's probably a case of optimizing it to where it needed to be. Could they improve it by making better use of modern processors? Absolutely, but did they need to?
The i3 is also clocked 700MHz slower. Clock for clock, Core 2 is proving superior to FX. That is the point I am making. AMD has not improved at all in IPC, despite what fanboys think. They went from worse than Phenom II with Faildozer to roughly back where they were with Piledriver. They raised the clock speed way up, like Intel did with the P4, and increased its multithreaded capability. Yes, the future is multithreaded, but that future still hasn't happened yet. Single-threaded performance does still matter, and AMD has failed to improve on that. I am not anti-AMD. I have owned plenty of them, and still have one. I just see the reality of the situation. I want the AMD that actually competed back. They need to ease up on the far future and focus a bit more on the here and now. All that new future stuff is cool, but useless if it isn't supported. The ratio of future tech to the here and now needs a better balance than what they have been doing.
Honestly, it's mostly just people wanting to hate on something; they don't need much of an excuse to do that. BD was primarily a server-oriented CPU, and it does remarkably well for its cost. It didn't sell well because the server market is dominated by a few OEMs whose business deals revolve around combined Intel packages they get sold at a discount (MB chipset + CPU + NIC + supporting hardware). Intel markets a total package to OEMs and can guarantee supply, while AMD struggles with this. Now it seems AMD is going back to their budget/value roots, looking to offer a competitive product at a reduced cost, which they seem to be doing well.
BD was a piece of nonsense from bottom to top. There is no need to discuss BD again, the team responsible for it was fired and AMD already admitted in public that "Bulldozer was a fiasco".
I'm confused: this is supposed to be a 'bad' game for AMD? Looking at those charts, it tells me that an ancient Athlon II X4 640 is ample to play Skyrim at 1080p full settings, with above-30fps minimums?!
Not as good as Intel. But yes, I find it ironic that people are complaining about a game that an Athlon X4 and a C2D can comfortably play.
But when AMD's latest hardware isn't really any better than Intel's hardware from 6+ years ago, there is a problem. Hell, they aren't even all that competitive with their own previous-generation hardware. A 3.4GHz C2Q is within 2-3fps of a 4.2GHz FX 4300 and a 3.9GHz FX 6350. AMD isn't making better products; they are making products with worse-to-similar performance compared to what they had six years ago. A Phenom II X4 @ 4.0GHz beats the FX 4350 @ 4.2. Even overclocked to 4.7, it isn't significantly faster. The 6350 @ 4.5GHz isn't much better either. This is why AMD has been losing market share on the CPU side of things. They are not advancing; they are running in place. Throwing more cores at it does little when most software doesn't use the extra cores. They need a boost in per-core performance, and badly so. I miss the old AMD that was competitive and kept Intel on their toes. The K7 and early K8 days vs. PIII/P4 were an awesome time to be a PC enthusiast. Instead, they got cocky with the K8 X2. Intel released Core 2, caught AMD with their pants down, and they have yet to recover. Constantly playing catch-up. Intel's business practices probably didn't help, but AMD didn't exactly do itself any favors either.
Note: I don't own an i7 920, sadly (I like doing tests), but I notice that most of the time an 8350 is around an i7 920 in terms of modern gaming.
LOL...what game benches are you looking at? It is usually somewhere around a 2600k or 3570k depending on the game and benchmark.
It doesn't even beat the i5-2500K, let alone newer i5s or i7s.
Show me if it does (not just a handful of games, but the majority).
The 8350 does not rock in games when compared to an i5.
I'm also wondering if we'll see a rejuvenation of the Phenom II X4/X6 in DX12.
My FX 8320 @ 4.0GHz is slower than my 3.5GHz i5 2400 in WoW and D3 when paired with the same card. I have not tested any other games, as those are the only two those systems really play. The i5 runs at 2048x1152, vs. 1680x1050 on the FX. Blizzard titles in general heavily favor Intel, though.
I wonder what arguments like this will look like when DirectX 12 and compliant GPUs and games come out. 😗
Probably the same, since CPUs don't physically get faster, and it's not like DX12 gains are only noticeable on one vendor. Keep in mind they already did these tests and it doesn't change anything; the gap between the two just got smaller.
That's only because the tests were done using silly combinations like a Pentium G / i3 + Titan / x80 GPU. In realistic situations the GPU will be the performance limiter, not the CPU. The short-term implication is that "you can game on a dual core!!!!!", which will last all of one year. The long-term impact is that developers will start looking at other things to pack onto the CPU, and that's where things get interesting. It's very difficult to parallelize the primary logic loop of a game, but it's not difficult to parallelize things like AI, physics, and environmental effects. Dynamic environments lend themselves particularly well to wide processors. We'll start seeing those additional resources utilized more.

Also, please understand that the last six years have been console hell for the PC market. Nearly every major game developer went out of their way to design everything for consoles, because the mantra across the industry was "PC is dying, all hail consoles!". PC versions of games were hastily thrown-together ports, with dirty patch code used to "make it work with minimum manpower" right before release. That seems to be changing now, with developers realizing there is a very profitable market segment to sell to: chiefly guys who grew up with Nintendo and early PC games, who now have nice-paying jobs and can afford high-priced, powerful toys called "gaming rigs". That market segment absolutely hates trash console ports and will vote with their wallets.
Question: When you talk about parallelizing AI/Physics/Environmental Effects, are you referring to data-parallelism or task-parallelism? For example, I have a hard time imagining one thread drawing and one thread handling physics to be very effective - the position of the objects depends on physics, so drawing must wait on the physics-handling thread, and it ends up serial anyway. If it's data-parallel, on the other hand, it seems straightforward enough to have a different thread handling the interactions between one object and the surroundings for each moving object (or for a group of objects, since there are likely more objects than cores and there isn't anything to be gained from having more threads than physical cores).
Here's the problem: AI and physics get really complicated really fast. That's why, to this day, we don't see complex AIs that interact with each other; you instead have standalone AI processing. Same reason why multi-object dynamic physics is VERY carefully scripted. Right now, you can more or less treat each AI/physics instance as its own independent thing, and you don't have many interactions between them. But once they start interacting, oh boy, processing requirements jump through the roof. That's why no one has really attempted much in these areas in over a decade.
In terms of the horsepower required, I think physics especially will eventually become more complex than even rendering, ray tracing included. That's why I was a very early supporter of Ageia PhysX, since I believe the only way developers will ever move beyond the very simplified physics we have now is with a unified API and dedicated HW to do the task. But that ship has long since sailed.
What I want is that when you shoot a bullet in an FPS, its path is determined not by linear models, but by the bullet/gun characteristics, stage environment, weather conditions, etc. Whether a bullet passes through obstructions should be determined by the individual bullet's physics, not by some hardcoded value that's set for a selection of weapons. I want damage determined by the type of bullet combined with the impact velocity/angle at which it hits the target, as well as where it hits. And so on. But nope, we're still stuck in the age of hardcoded damage values.
Part of it is simplicity, part of it is time, part of it is laziness, and part of it is that doing this gets REALLY expensive really quickly. And while I want all this, I also accept we're not even close to having the processing power to actually do this at any reasonable framerate.
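Since this keeps coming back to hardcoded damage tables, here's a minimal sketch of what a physically derived model could look like: damage from kinetic energy plus impact angle. Every name and constant below is invented for illustration; no actual engine exposes this API.

```python
import math

def impact_damage(mass_kg, velocity_ms, impact_angle_deg, armor_rating):
    """Hypothetical damage model: kinetic energy, scaled by how squarely
    the bullet hits, minus a flat armor reduction (all values invented)."""
    kinetic_energy = 0.5 * mass_kg * velocity_ms ** 2   # E = 1/2 * m * v^2
    # A glancing hit (angle far from 90 degrees) transfers less energy.
    transfer = math.sin(math.radians(impact_angle_deg))
    return max(0.0, kinetic_energy * transfer - armor_rating)

# A 7.62mm-class round (~0.01 kg) at 700 m/s, hitting square-on with no
# armor, delivers its full ~2450 J of kinetic energy as the damage number.
square_hit = impact_damage(0.01, 700.0, 90.0, 0.0)
glancing_hit = impact_damage(0.01, 700.0, 10.0, 0.0)
```

Even this toy version shows why it gets expensive: every hit becomes a physics evaluation instead of a table lookup, and that's before you add penetration, ricochet, or deformation.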
Doesn't any game with bullet drop have a bullet-position-vs-time curve that is represented by an equation of degree 2 (i.e. not linear)?
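Pretty much. Ignoring drag, vertical position under constant gravity is the standard kinematics polynomial, quadratic in time, which is why doubling the flight time quadruples the drop. A quick sketch (plain kinematics, nothing engine-specific):

```python
def bullet_height(h0, v0_y, t, g=9.81):
    """Vertical position under constant gravity: a degree-2 polynomial in t."""
    return h0 + v0_y * t - 0.5 * g * t ** 2

# Fired level (no vertical muzzle velocity), the drop after time t is
# (g/2) * t^2, so doubling the flight time quadruples the drop.
drop_1 = 2.0 - bullet_height(2.0, 0.0, 0.1)  # drop after 0.1 s
drop_2 = 2.0 - bullet_height(2.0, 0.0, 0.2)  # drop after 0.2 s
```

The earlier complaint is about everything this leaves out: drag, wind, and spin all add terms that are no longer a neat closed-form polynomial.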
I'm all for better physics / simulation technologies, but imagine if a game had to take into account the gravitational force of everything, not just the planet below you. Any game with more than 2 or 3 objects would become immensely complex (so uh... best of luck, Star Citizen...). And if a game had a variable number of objects, you would need to build an entire abstract mathematics engine since adding an object changes the equation itself, not just some variable.
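To put a number on that blow-up: pairwise gravity is a sum over every other body, so n objects mean n*(n-1) interactions per step. A naive sketch (toy 2D point masses; a real simulator would use something like Barnes-Hut to cut this down):

```python
# Naive pairwise gravity: each body feels a pull from every other body,
# so the work grows as n*(n-1) -- adding one object adds 2n new interactions.
G = 6.674e-11  # gravitational constant

def accelerations(positions, masses):
    """2D point-mass gravity; positions are (x, y) tuples (toy example)."""
    accs = []
    for i, (xi, yi) in enumerate(positions):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            dist_sq = dx * dx + dy * dy
            dist = dist_sq ** 0.5
            # a = G * m_j / r^2, directed toward body j
            a = G * masses[j] / dist_sq
            ax += a * dx / dist
            ay += a * dy / dist
        accs.append((ax, ay))
    return accs
```

Note the loop structure never changes when you add a body; there's no need for an "abstract mathematics engine", but the quadratic cost is exactly why games approximate with just the planet below you.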
One area I would definitely like to see improvement in is collision detection. Hit boxes can be very close to the real thing, but they can never actually match it - why not do the math for checking for collisions with the geometry of the object itself? And inter-frame collision checking - is that a thing most games do nowadays?
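On the inter-frame question: the usual trick is a swept test, solving for the time within the frame at which the moving object first touches the obstacle, so a fast object can't tunnel through between two discrete checks. A toy 2D version (point vs. circle; the function name and shapes are made up for illustration):

```python
def swept_circle_hit(p0, p1, center, radius):
    """Continuous (inter-frame) check: does a point moving from p0 to p1
    during the frame pass within `radius` of `center`?  Solves
    |p0 + t*d - center| = radius for t, a quadratic in t.  Toy 2D version."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    fx, fy = p0[0] - center[0], p0[1] - center[1]
    c = fx * fx + fy * fy - radius * radius
    if c <= 0:
        return True                   # already overlapping at frame start
    a = dx * dx + dy * dy
    if a == 0:
        return False                  # not moving, not overlapping
    b = 2 * (fx * dx + fy * dy)
    disc = b * b - 4 * a * c
    if disc < 0:
        return False                  # the swept path never gets close enough
    t = (-b - disc ** 0.5) / (2 * a)  # earliest crossing time
    return 0.0 <= t <= 1.0            # hit only if it happens inside this frame
```

A point crossing straight through the circle in one frame passes this test even though both endpoints are well outside it, which is precisely the case a per-frame hitbox check misses. Doing this against full mesh geometry instead of spheres/boxes is where the cost explodes.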
^ Add real-life destruction physics for all objects (AOE3 has a miniature version of this 😗).
Imagine GTAx with this 😛. "Panzer!" Yeah, you can run, but now you can't hide; run for your life now 😀
OK, but then let's also consider that it's widely accepted that Skyrim uses a particular set of code that is a worst case for AMD. In other titles I'm sure you'd see decent IPC gains with an FX 8350 vs. Core 2. Also, it's worth considering that AMD designed the architecture to run at high clocks with sensible power requirements; how much power was that Core 2 using at that speed? More than the FX, I'd wager.
I'm not trying to say that the IPC of the FX is where it should be; we all know it's not. However, I also think some of these issues are often blown out of proportion. As with all things, building a good system is about balance. Where a CPU can provide enough power to keep the GPU busy, it's good enough. If you look at average performance, the FX 6XXX and FX 8XXX parts are ample to keep up with mid-range graphics boards (R9 280X, GTX 960), so they make the basis for a decent gaming rig at a reasonable cost, with the added benefit of a bit of extra oomph in threaded applications compared to the cost-equivalent i3 from Intel. Obviously, if we start looking at high-end graphics cards, or other situations where the bottleneck is pushed firmly back to the CPU and specifically single-thread performance, the FX is a worse choice than the i3. I would suggest, though, that as we look at newer games, the situation *is* changing.

The other thing to keep in mind: more and more these days, new titles are built on existing engines. As the new-gen engines incorporate better threading, subsequent games will have this by default. The Battlefield series is already a good example of a game where the now-old FX 8XXX parts are performing *above where you'd expect* based on older titles.
I think if anything AMD's mistake with this range was they got the balance a bit too skewed towards multi thread, whereas Intel hit the nail spot on. Hopefully with the next gen cores AMD will strike a better balance.
Also, it's worth considering that AMD designed the architecture to run at high clocks with sensible power requirements; how much power was that Core 2 using at that speed? More than the FX, I'd wager.
^^ Which is sad, considering that the top C2Q's were what, 45nm? And the early ones were 65nm, right?
This is correct. Conroe/Kentsfield was 65nm, and Wolfdale/Yorkfield was 45nm. A part of me misses my old Xeon 3210. Clocked @ 3.6ghz, it was a pretty solid chip.
The FX never hit AMD's performance targets. Remember, it was planned as a 4GHz, 8-core chip at launch, with IPC matching the Phenom. That was 2011. Had they achieved that goal, FX would have been very successful. Maybe not the best gaming part, but a great overall CPU.
To this day, four years later, they have not reached the FX performance goals. Given how poorly it has performed, turning it into a quality low-power part is an accomplishment.
Funny how even though AMD has far fewer resources, Carrizo is a more elegant design. One die, simple package.
Broadwell is a two-die package with a wart (the FIVR & 3DL PCB) on the bottom that looks like a mess.
And the originally planned desktop version of Carrizo disabled part of that "elegance" on die and added an external chip on the motherboard to supply the ports/services the die lacks, because you cannot have a single die that is optimal from 15W to 65W. AMD has designed a chip optimized only for a tiny market (15-35W), and so can integrate everything needed in a single die.
Broadwell is designed to work from 3.5W (tablets) to 160W (servers). You cannot do a single die for that. It is not a question of resources but of physics.
Moreover, AMD has stated clearly that their future products will be multi-die: e.g., with six different dies on one package, because they will be focused on serving a broad range of customers, from tablets to HPC. Will you consider AMD's future products inelegant for using multiple dies?
You've got to stop approaching things from a human PoV; humans are very serial thinkers, to the tune of cause -> effect -> observation -> reaction. Physics is extremely dynamic and parallel, where time is the only serial element. Each interaction would be done simultaneously, but only if that interaction happened within the exact same time slice, and it would use a number of threads equal to the number of simultaneous interactions within the same reference frame. This is how physics simulators work, but doing it in real time is computationally expensive, requiring massive parallel capability. The real issue is creating an API or other model that allows the input and output of the physics to be useful to the logic of a game. It's extremely hard for humans to wrap their minds around dozens if not hundreds of things happening simultaneously, so instead we think of it as a series of events handled in sequence, which works well within our own minds.
A good way to think of it is how a database serves requests. Each request is a single serial task, but the database receives many hundreds of these tasks every second, and some of them interact with each other. So while each physics interaction is serial in that moment, there are many interactions happening simultaneously that can be processed separately for that time slice/moment. Then, with the results of those interactions, you process the next time slice/moment, then the next, and so on. The issue then becomes balancing it out so that you don't fall behind, since if it takes you longer than 10ms to calculate the physics that took place in those 10ms, you're going to start having lag issues. You want the interactions simple enough that you can do them quickly and not get caught out, but complex enough to be useful.
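That slice-and-commit pattern can be sketched in a few lines. This is only a toy (Python threads and a made-up state layout; a real engine would run jobs over packed arrays), but the shape is the same: every update within a slice reads the same frozen snapshot, and the only serial step is the commit between slices.

```python
from concurrent.futures import ThreadPoolExecutor

def step(snapshot, dt):
    """Advance one time slice.  Every worker reads the same frozen
    `snapshot`, so per-object updates are independent (data-parallel);
    the serial part is only the commit between slices."""
    def update(obj):
        x, v = obj
        # Each object's new state depends only on last slice's snapshot.
        return (x + v * dt, v)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(update, snapshot))

# Double-buffered loop: parallel inside a slice, serial across slices.
world = [(0.0, 1.0), (10.0, -2.0)]  # hypothetical (position, velocity) pairs
for _ in range(3):
    world = step(world, 0.01)
```

Because no worker ever writes into the snapshot it reads, the parallel result is identical to running the updates one by one, which is exactly the property that makes the per-slice work safe to farm out.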
Methinks I got made fun of for loving AOE3. Actually, I still get stutters in that game even when it's locked to 60fps, and my new 970 has coil whine; can't ever win. At least my CPU runs fine on the stock heatsink. I think I won the lottery with this CPU.
The 8350 rocks. To add more to the comment: later games that use all of the 8350's potential (rare as heck, to me) perform around an i5/i7, but I play many, many games, and even new ones like Attila play very badly on the 8350.