AMD CPU speculation... and expert conjecture



Here is something to consider:

The very first generation of games on the new consoles will also have to be back-ported to PS3/XB360 (or ported forward to the new consoles) for the massive target audience that allows. Because of that, developers will set lower performance targets to accommodate the fact that they don't want to build the same game entirely twice. Take GTA5, for example: they're going to reuse as much of the existing structure and code as possible. They'll still have to change a lot, but if they stick to DX9 and dumb down the specs, they can likely reuse much of the existing graphics code.

Now, what you're going to see, just like the PS2-to-PS3 transition, is that once the new installed base has grown enough to support large AAA titles on its own (think late first-gen, early second-gen games), and as developers learn to better harness the hardware, frame rates will rise and resolution with them.

Ultimately, you cannot ignore that there are over 150 million last-gen consoles in the US alone... even given stellar holiday sales, you might see a combined installed base for the two new consoles of around 10-15 million, perhaps a tad more. As a game publisher, which market do you want to hammer more heavily? I know publishing houses like Bethesda, EA, and Activision are going to chase the numbers...
 

In the end though, the consoles are not really that powerful. They will end up in the same mess 5-6 years from now; getting Crysis 3 to run on roughly a 7900 GTX or 2900 XT is not exactly the easiest task out there...
 
Crytek went back to PC only with Crysis 3 because Crysis 2 was not nearly as demanding as Crysis before it. Additionally, they typically make Crysis their "look what we can do on a PC" title to try to get people to license their engine.

EDIT: In PC hardware terms, because of the low-level access, I'd roughly equate the consoles to a non-Extreme i7 and an HD 7950...
 

More like an 8350 and a 7870 XT if you ask me...
 


Then I suppose in your world, the shrink from 32nm to 28nm will bring no power savings. What then is the purpose of shrinking the die?

There has been no official word on whether it's bulk or FD-SOI, other than your opinion and an article posted over a year ago.

As for your constant "ERMGO ARM IS SUPER HIGH END" because Nvidia is making a GPU supercomputer with a single ARM core... people who need fast servers/workstations will stick with 3+ GHz x86 systems. Power savings mean nothing compared to labor costs.

ARM servers are a niche, unproven market, yet you speak as if they were already dominating it.

P.S. Looks like Nvidia is feeling the pressure of the 290X: http://www.tomshardware.com/news/nvidia-gtx-780-price-drop,24886.html
 
I have been reading through this thread for the last few days, weaving my way around an awful lot of ARM stuff. Is it possible to have a separate ARM thread for all this ARM discussion? I am finding it tedious to maneuver around.

I appreciate a lot of the good info presented.

 


As opposed to MIPS, which actually used to power the SGI supercomputers before cheap x86 came along and broke that ecosystem. 😉
 


The irony is the 290 (not the X) beats the GTX 780 and is still $50 cheaper than the 780 even after the price drop.

The 780 will have to come down to $399 to even be decent value for the money... LOL!
 


You do realize that if the Xbox 360 CPU were placed into a desktop-class OS environment, it would be a complete dog by today's standards, right?

http://www.techhive.com/article/112749/article.html?page=7

Put this into perspective: PowerPC around that time was getting slaughtered by the old K8-based AMD FX chips.

Mind you, this is THE BEST case for PowerPC, in Photoshop:

PPC @ 1.8 GHz: 27 seconds (single core)
FX-51 @ 2.2 GHz: 21 seconds (single core)

PPC is slower per clock than the K8 architecture. Jaguar is faster than K8 per clock, and I'm not even accounting for the added instructions. Mind you, FX only really supported SSE2, while Jaguar supports tons of newer instructions like AVX, SSE4.1, etc.
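Just to sanity-check that per-clock claim against the two Photoshop times quoted above, here's a quick back-of-the-envelope calculation (one benchmark only, so treat it as an illustration rather than a general rule):

[code]
# Rough per-clock comparison using only the Photoshop times quoted above.
ppc_time_s, ppc_ghz = 27.0, 1.8   # PowerPC 970 @ 1.8 GHz
fx_time_s,  fx_ghz  = 21.0, 2.2   # Athlon 64 FX-51 @ 2.2 GHz

ppc_cycles = ppc_time_s * ppc_ghz * 1e9   # total cycles the task cost on PPC
fx_cycles  = fx_time_s  * fx_ghz  * 1e9   # total cycles the task cost on K8

print(f"PPC: {ppc_cycles / 1e9:.1f} Gcycles, K8: {fx_cycles / 1e9:.1f} Gcycles")
print(f"K8 is ~{(ppc_cycles / fx_cycles - 1) * 100:.0f}% faster per clock here")
[/code]

Even in PPC's best case, K8 finishes the same job in about 5% fewer cycles, and that gap only widens in less favorable workloads.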

To put it bluntly, a 1.6 GHz Jaguar core running optimized code should blow a 3.2 GHz PPC Xenon core out of the water.

The iMac G5 had a PowerPC 970 CPU in it, so it's a fair comparison.

The CPU in the 360 is like a core and a half of a 3 GHz K8 running, at best, SSE2 instructions. All Xenon has going for it is the extra instructions added for MS, and honestly that looks like a barbaric version of AVX.

So can we please drop the "ZOMG 8-CORE JAGUAR IS ONLY BARELY FASTER THAN A 3.2 GHz PowerPC 970 WHICH CAME OUT IN 2003!!!" FUD already?

It has almost nothing to do with BD, SR, or anything else. The only thing it has to do with SR is that it tells us what software will look like when the PS4 and XBone get close to EOL.

I am, however, completely unsurprised that gamerk ignored my comparison shots of CoD2 (an Xbox 360 launch title) against Crysis and the other titles that top the 2013 charts in graphical fidelity, and ignored my point that the games coming out for the PS4 and XBone now will look horrendous by the time those consoles get closer to EOL.

It's mega-FUD and it needs to be dropped. Every console that has ever existed has had launch titles that look far, far worse than the titles that come out at the EOL of the console.

I think some of you are forgetting that if you are running one thread, or a game with two threads, a shared decoder isn't going to be that horrendous a bottleneck. It's going to make the biggest impact when you are cramming all the cores full.

At least that's what I'm taking away from it (please correct me if I'm wrong).
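A toy model of what that reading implies (the 4-wide decode width and the strict alternation between threads are my assumptions for illustration, not published Steamroller figures):

[code]
# Toy model: decode slots per cycle available to each active thread in a module,
# comparing a shared decoder (Bulldozer/Piledriver style) with per-core decoders
# (Steamroller style). Numbers are illustrative assumptions, not real specs.
DECODE_WIDTH = 4  # assumed decode slots per cycle per decoder

def slots_per_thread(active_threads: int, shared: bool) -> float:
    """Average decode slots/cycle each active thread sees in one module."""
    if shared:
        # a shared decoder alternates between the module's active threads
        return DECODE_WIDTH / active_threads
    return float(DECODE_WIDTH)  # dedicated decoder per core

for threads in (1, 2):
    print(f"{threads} active thread(s): "
          f"shared = {slots_per_thread(threads, True):.1f}, "
          f"dedicated = {slots_per_thread(threads, False):.1f}")
[/code]

With one thread per module, the shared decoder already gives that thread its full width, so splitting it buys little; with both cores busy, each thread's decode bandwidth doubles. That matches the "it only really hurts when everything is loaded" interpretation.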

However, I don't see un-sharing the front end giving an even uplift to every workload. For all we know, AMD might have split it up and found it did nothing for single-threaded (or multi-threaded) performance. Imagine if everyone got excited about a 30% faster SR and then it was only 30% faster in multi-threaded loads? It would be a disaster.

I am eager to see how SR performs in APUs; I think it will answer a lot of the questions people are having. I hate to say this, but I'm starting to wonder whether there is no HEDT SR because it wasn't meeting performance goals, or because it's just a bad time to launch it, or because they're simply trying to capitalize as best they can on PD.

I would imagine PD has not been a huge profit-maker for them so it would make some sense for AMD to postpone replacing it so they can milk it for more profits.

I don't think I've posted this in a while, but here's a little reminder of where AMD sits on Amazon's CPU sales charts in terms of popularity:

http://www.amazon.com/s/ref=sr_hi_eb?rh=n%3A172282%2Cn%3A541966%2Cn%3A193870011%2Cn%3A229189&sort=popularity-rank&ie=UTF8&qid=1383000135


Do you want to do a fun exercise?

Let's take a look at AMD's top offerings! Oh look, the FX-8350 is the most popular AMD product.

But this is just CPUs right? Surely APUs are in their own category!

NOPE! APU is on the next page.

Please tell me more about how AMD is just going to get rid of their three most popular products on Amazon and replace them with those really, really popular APUs.

The 5800K is less popular than the 4960X...
 


The INT gains are likely from the dual decoders. The FPU numbers may be due to a previously unrecognized issue with the FPU that needs adjusting.
 
Err, to whoever posted the X360 CPU rant: I never brought it up as being slower than Jaguar, I was just saying it's a pain to port/develop games for them right now. An 8-core Jaguar is OK, not the best, but good enough for a console.
 




Unless the 780 Ti is faster than the Titan, it won't matter much.
 


At $700 apiece instead of $1000 apiece, it will most certainly not be faster.
 

It is moar about the price drops. The 780 is competitive, but it is thankfully hurting Nvidia's corporate @$$ to charge $500 for a G*110 like they did with Fermi 😛 As for the 780 [strike]GHz Edition[/strike] Ti, that is still a terrible deal.
 


It can't be 30% faster at everything, because they only beefed up the instruction decoders; the back-end execution resources are only slightly improved.

Programs that were highly parallel to begin with and that often fit in the micro-op caches, like video encoding, rendering, and archiving, shouldn't see much performance gain.

You should see a larger improvement in games and general multitasking.
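A rough Amdahl-style way to see why the uplift would be uneven (the decode-bound fractions below are made-up illustrations, and the 2x decode-throughput gain is an assumption, not a measured SR figure):

[code]
# Amdahl-style sketch: if only part of the runtime is limited by decode,
# improving decode throughput only speeds up that part of the program.
def overall_speedup(decode_bound_fraction: float, decode_gain: float = 2.0) -> float:
    return 1.0 / ((1.0 - decode_bound_fraction) + decode_bound_fraction / decode_gain)

workloads = {
    "tight loops that fit the uop cache (encode/render/archive)": 0.05,  # assumed
    "branchy game / multitasking code": 0.40,                            # assumed
}
for name, frac in workloads.items():
    print(f"{name}: ~{(overall_speedup(frac) - 1) * 100:.0f}% faster")
[/code]

With those made-up numbers, the cache-friendly loop gains only a few percent while the decode-starved code gains around 25%, which is the same shape of result described above.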
 


Unless you want 3D gaming 😛
AMD needs to get a better solution for that, by the way, because right now they aren't even an option. 3D gaming setups aren't exactly budget builds; you need some very expensive GPU horsepower to get a quality experience, so it's a high-margin, if niche, market.
 
Most users don't game in 3D; IMO that's probably why AMD hasn't done much with it. For Oculus Rift's sake, though, I suspect AMD might do something with it in the future, haha!

I just want to be able to play Crysis 3 at 1440p and 60 FPS first. I think an R9 290 and an 8-core Steamroller with 2133 MHz RAM could do that for me 😀
 