AMD CPU speculation... and expert conjecture


cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


Believe it or not, I do agree with you, and I think we have found common ground. AMD has a lot of catching up to do, no doubt. Intel didn't exactly play by the rules; however, they got a huge lead on AMD with Sandy Bridge (good God almighty, Sandy Bridge was awesome when it debuted!!), and they have the R&D funding that AMD can only dream about.

I do, however, think that Intel got a little too self-assured, too high on their own hype, too lazy with Haswell, and has therefore given AMD a valuable window of opportunity. If Steamroller performs well, Intel may not be able to close that window in time. AMD was given a little new life with the Sony and Microsoft contracts, and they have had a long time to study Ivy Bridge and try to match it. With Haswell being no real improvement over Ivy Bridge, if AMD's goal was to catch the Ivy Bridge i7, then they will have effectively caught the quad-core Haswell i7 as well. That is a lot of ifs, but AMD does have a golden opportunity that Intel isn't likely to give them again anytime soon.
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810
Don't get me wrong either: even though I don't like Intel as a company, I do like their products (they make fine processors) and am very glad that they are around. If Intel were to, say, start making processors only for servers and mobile, it would actually be bad for us all. Without Intel making desktop processors, AMD would get incredibly lazy and we would have a reversal of fortunes technologically. Intel pushes the envelope and produces processors that not too long ago would have been akin to supercomputers, AMD tries to catch them, and Intel is forced to produce better processors that AMD again tries to catch or maybe even do a little better than. Without that Intel vs. AMD competition, we would probably still be sitting around on first-generation dual-core processors.
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360
It might sound crazy, but I think AMD should release 12- and 16-core CPUs. If they could offer 12-core CPUs at $700 and 16-core CPUs at $1000, they might just bring Intel crumbling down in that market. Of course, they'll need a die shrink, maybe two. I'm actually curious why they're going to 28nm when Intel skipped straight to 22nm. Doesn't make much sense to me.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


AMD has shipped 12/16 core CPUs for years.

$700 for a 16-core:

http://www.cpu-world.com/CPUs/Bulldozer/AMD-Opteron%206376%20-%20OS6376WKTGGHK.html

 
OK, after reading this drivel: I had a decent rig back in the day, and I tried playing the "Crysis" of that day, Oblivion.
I had an AMD 185 and an X1900 XT; near the top of the line, only an FX-60 and an X1900 XTX were faster, and Oblivion could barely run with all the eye candy of the time on it. No other rig would do it.
At the time, devs still hadn't learned to squeeze everything out of the consoles; they were nowhere near it. So enough already: if they wanted to, using what's in the new consoles, they could easily make games that would stump current PC hardware, like an Oblivion or a Crysis did. But devs have to learn the hardware and software abilities/limits, have the engines, and still make room for the lowest common denominator.

I personally believe AMD has a nice opportunity to catch up to Intel here as well, as we see fabs jumping process nodes at accelerated rates, along with AMD's newer designs and plans.
My guess is it won't matter in a few years: by then, power will be under better control, so battery life won't be nearly as critical as it is today, and "good enough" will be had. The difference between AMD and Intel won't be a lot; not a gap, more like what we saw in reverse back in the day, when the P4s lost to those Athlons: not by a lot, and the P4s actually did do a few things better, usable things on the desktop for the average Joe.
"More on par" is three words I also think you'll be reading/seeing to describe AMD in the future, and the sense of a price/perf situation will rise.
 
OK, here is my best guess as to where we will be in early/mid 2014, based on what I have read/heard. The PS4 will run games like a high-end PC. I would be surprised if it's as good as a PC with a Titan, but 7970-level is likely, due to the way games can be written for specific hardware and, to a lesser extent, not running a multi-purpose OS. A $500ish PC could run games as well if every PC user had exactly the same hardware and background programs running, but that's never going to happen. The next-gen GPUs will get a mid-range PC closer, though. AMD will close the gap to Intel in overall CPU performance, and with HSA may beat them universally in games. Intel could respond, but if it's only gaming, the market may be small enough for Intel to let AMD have it rather than spend more on their CPUs. This is only my guess; feel free to tell me why I am wrong.
Also, I never got an answer to what this means: "the TSX extensions that enable transactional memory. Intel has stripped out the VT-d device virtualization and vPro management features in the K series, as well." From http://techreport.com/news/24950/intel-removes-modest-free-overclocking-from-standard-haswell-cpus
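
(Best I can tell, the TSX part refers to hardware transactional memory: a block of code can run speculatively as an all-or-nothing transaction instead of taking a lock. A rough sketch of what that looks like with the RTM intrinsics from immintrin.h; the shared counter and the fallback path are just illustrative, and it needs a TSX-capable chip plus -mrtm to build:

#include <immintrin.h>  /* RTM intrinsics: _xbegin/_xend; build with gcc -mrtm */

static long counter;    /* hypothetical shared data for illustration */

void add_one(void)
{
    unsigned status = _xbegin();           /* start a hardware transaction */
    if (status == _XBEGIN_STARTED) {
        counter++;                         /* commits atomically; no lock taken */
        _xend();                           /* commit the transaction */
    } else {
        /* Transaction aborted (conflict, capacity, interrupt...):
           fall back to a conventional atomic operation. */
        __sync_fetch_and_add(&counter, 1);
    }
}

Real code would check CPUID for RTM support first; a chip with TSX stripped out, like the K series, would always have to use the fallback path.)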
 

mayankleoboy1

Distinguished
Aug 11, 2010
2,497
0
19,810


Because Haswell is faster than IB.

The short story is: forget Haswell. All Steamroller has to do is come close to or beat Ivy Bridge, and it's better than Haswell by default.

Forget an existing product that is generally faster than the competition (at a higher price point), but wait for a theoretical product that nobody knows when it will be released?

Basically, no matter what, Intel will proclaim Haswell an overwhelming success.

For the form factor Haswell was designed for (notebooks and ultrabooks), it is quite good.

For notebooks, you get more CPU performance with better battery life and much better graphics. For ultrabooks, you get similar/minor CPU improvements and almost double the battery life, basically due to much lower idle power.


Of course, even if it does, Intel fans will rally around the "low power" banner and exclaim that performance isn't as important as power usage.

And the AMD fans will rally around the "CPU perf isn't everything" banner, and exclaim that a faster GPU is better for "perceived" performance.





From what I have seen, Haswell is faster than IB in almost all benchmarks. How is it "faring worse"?
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360
Haswell performs better clock for clock, but Ivy overclocks better, and Sandy overclocks MUCH better than Ivy. And I meant they should ship desktop chips with 12 and 16 cores, not server chips clocked below 3GHz. An Opteron might be a good value for a workstation or server, but not so much for a gamer.
 


You would like a desktop 12- or 16-core Piledriver (monolithic die) even less than the Opteron 6300s. The only reason AMD can even reasonably offer chips above about 12 cores is that they use a dual-die MCM rather than one enormous monolithic die. The two-die MCM greatly increases yields, as a monolithic 8-module die would be over 700 mm^2 and essentially impossible to manufacture due to yield defects. Just ask NVIDIA how well 500+ mm^2 dies yield, and this would be significantly bigger than something like the full Fermi. I would also hazard a guess that they'd have to set the clock speed bar very low for enough defect-free chips to pass. You wouldn't want a $5000 16-core chip with a maximum turbo speed of 2.5 GHz, which is likely what would happen with a monolithic chip instead of the MCMed Opterons.
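
To put rough numbers on why the big die is so ugly, here's a back-of-the-envelope sketch using the classic Poisson yield model; the defect density is an assumed illustrative figure, not anything from an actual foundry:

#include <stdio.h>
#include <math.h>

/* Poisson yield model: fraction of good dies Y = exp(-D * A).
   D (defects per cm^2) is an assumed illustrative number, not foundry data.
   Build: cc yield.c -lm */
int main(void)
{
    const double D = 0.4;                 /* defects per cm^2 (assumed) */
    double y_big   = exp(-D * 7.0);       /* one ~700 mm^2 monolithic die */
    double y_small = exp(-D * 3.5);       /* one ~350 mm^2 die of an MCM pair */

    printf("700 mm^2 monolithic die: %4.1f%% good\n", 100.0 * y_big);
    printf("350 mm^2 MCM die:        %4.1f%% good\n", 100.0 * y_small);
    /* Dies are tested before packaging, so an MCM pairs two known-good
       small dies: usable silicon tracks the ~25% per-die figure, while
       the monolithic design is stuck at ~6%. */
    return 0;
}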

The MCMed Opterons are not bad chips at all, especially if you get one you can overclock. They are the same K10s, Bulldozers, and Piledrivers as their desktop counterparts, just underclocked to keep TDPs reasonable. People getting ES units and special BIOSes for G34 boards have been able to get them up to roughly desktop-chip speeds, and they perform very, very well, but burn VRM-popping amounts of juice in doing so. They are essentially two CPUs stuffed into one socket, after all, and are going to draw a lot of power at "full" speed. Probably not what a gamer really wants, but it's out there if you really, really want it.
 


I strongly recommend the Supermicro H8QGL-iF or H8QGL-6F if anybody wants to do this. These boards have several advantages over other G34 boards:

1. They can be overclocked with ES chips and a tool like TurionPowerControl, just like any other G34 board. However, you can also overclock non-ES chips (currently only Opteron 6100s) via HT reference clock overclocking, using a well-known third-party BIOS on any Supermicro quad G34 board.

2. The VRMs in the H8QGL line are stouter than in the rest of Supermicro's quad G34 lineup. This was figured out when people overclocked Opteron 6100s in H8QGLs vs. the H8QGi/H8QG6, and the H8QGLs did better. All of the non-1U H8QGLs have three EPS12V connectors to feed those VRMs, which are really needed if you push the CPUs. Not all of the other quad G34 boards do; I know the H8QGi only has two.

3. The H8QGL-iF and -6F have two SR5690 northbridges and six PCIe x16 slots, which is better than any other quad G34 board.

4. They are the only G34 boards I know of that can run RAM at the full DDR3-1866 speed supported by the Opteron 6200s and 6300s. All other boards are limited to DDR3-1600 maximum because they have eight or more DIMM slots per CPU. The Opterons have to follow the same rules as desktop chips: more than one DIMM slot per channel, even if it is empty, means no DDR3-1866.

I have an H8QGL-iF and it is a terrific board. It fit easily in my Chenming ATX-801F server case, with the only modification being drilling and tapping the motherboard tray for the three lowest standoff holes. Unfortunately, it only has one 12-core CPU, and it's not even an ES (a 6234). It still runs pretty snappily, though. Someday I'll put four 16-core Piledrivers in there and turn it up to 11... someday...
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360
I only come here to learn about upcoming processors, but it makes me sad that I have to wade through pages upon pages of arguments. Can Tom's please open some sort of battleground thread where all these warriors can hold their fights?
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Except that an i7 performs like an i5 only in games using four threads or fewer.

Except that the demo was not running on the PS4 but on AH, with less than a third the performance of the console. Except that some bugs and different kinematics made a mere visual comparison useless.

Except that the dev kit for the PS4 uses an 8-core FX chip. And we know from games such as Crysis 3 that an 8-core FX chip is at the level of an i7 and beats an i5, because the latter only has four threads.

Except that it is easy to compute the minimum equivalent PC (*). I did it a while ago and got an i7 plus a GTX 780 as the minimum PC for playing console ports. I did this before E3 started, and at E3 Microsoft selected an i7 + GTX 780 + Windows 7 as the PC comparable to the Xbox One.

According to reports, many if not all of the demo units at Microsoft’s E3 press event and at the booths were PCs with Xbox One comparable hardware.

http://gamer.blorge.com/2013/06/16/controversy-over-xbox-one-games-running-off-windows-7-at-e3/

Of course, Xbox One hardware optimizations and the cloud will make those high-end PCs outdated in a couple of years.

(*) The hard part is computing the performance gain introduced by the custom components found in the consoles and in the cloud system. The cloud offers a theoretical 4x boost for the Xbox One, but nobody knows how much of that theoretical performance will finally be used by games.
 

GOM3RPLY3R

Honorable
Mar 16, 2013
658
0
11,010
At the moment, I can only say: I'm getting a new gaming rig with two GTX 780s and a water-cooled i7-3770K, which is going to be overclocked. With the other hardware and OS, the rig is ~$2600. If the PS4 is going to be about as good as this with just one GTX 780, I'd rather just stay on my old Core 2 Quad @ 2.5 GHz and AMD 5570 and wait a year. *hides in dark corner* >.>
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


That is a test of the desktop 4770K, which consumes more power than the 3770K. The TDP of the 4770K is also higher because it runs hotter. Desktop Haswell gives about the same performance, but is more expensive, runs hotter, overclocks worse, and consumes more power than the equivalent Ivy Bridge.

That is the reason people are using funny names like Failwell, Hasbeen, Hasfail...



Still invalid, you mean...

That 'PS4' demo was running on AH with less than one third the performance of the real PS4. Also, only 1.5 GB of VRAM was accessible, whereas the PC version of the demo requires 2 GB. Developers had to cut down some effects, such as the number of particles.

Epic received the kits some weeks before the GDC show and, due to lack of time, couldn't do any optimization for the console. The kit also used 'beta' APIs. In fact, the demo shown at GDC ran with bugs. For instance, the difference in the lava flow was due to tessellation being broken in the 'PS4' version...

Moreover, the 'PS4' version targets 1080p @ 30fps, whereas the PC version was running at sub-1080p @ 30fps.

You cannot call others fanboys and then write about stuff you don't know.
 

walkeith25

Honorable
Jun 16, 2013
8
0
10,510
I would appreciate any help
(attached screenshots: 5.jpg, 06.jpg, 07.jpg)
 

griptwister

Distinguished
Oct 7, 2012
1,437
0
19,460
Lol, if this FX 9530 (or whatever it is called) is priced at $400, I may as well just pick up an i7-3930K from Microcenter. Or just buy an FX-8350 and OC it. I don't even know what AMD is doing anymore, lol.
 

rmpumper

Distinguished
Apr 17, 2009
459
0
18,810


AMD is doing the same thing Intel does with its 3960X and 3970X CPUs, where the much cheaper 3930K is available.
Intel doing it: genius.
AMD doing it: WTF?
 

it proves that amd never stopped drooling over the high end pie and would gouge customers as much as intel if the opportunity presented itself. amd and intel are both corporations. amd's not a charity/non-profit org run by pc enthusiasts... it's not a charity, period.

hopefully it'll put a little damper on intel's stranglehold on the h.e.d.t. market if it's successful. i assume that oems were pining for something from amd for a long time as well. people want to buy 6-8 cores at $180-230 or cheaper (if the older cpu roadmaps had been executed properly).

i honestly thought steamroller... i.e. top end kaveri (early rumored to have 6 cores and gddr5 etc.) would be the one to compete against at least the mainstream core i7. now i am hoping that steamroller fx might do something. otherwise we regular users will keep losing.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


The new chips are the FX-9370 and the FX-9590, and they will be faster than the i7-3930K and the $1000-series i7-3960X and i7-3970X in software that uses their resources. Two-threaded software uses only about 1/4 of the potential of an FX chip; the rest of the chip is ignored. A game that is beginning to use the dormant potential in FX chips is Crysis 3.

(benchmark chart)


FX-9590: 76 FPS (approx.)

Yes, you can buy an FX-8350 and try to OC it to match the new FX-9370 or FX-9590, but then you can OC the latter as well.
 
The way content was designed for last-gen consoles... you're right. They used serial threads on multicore CPUs that could run 4-6 threads at most, and had a moderate GPU for the time.

Now, we have the ability to use the GPU whenever we want to process things that the CPU is slower at running. That's why they went with a configuration of more cores at lower clocks.

Except the GPU already does far more work than the CPU, so there isn't much left for the CPU to do aside from feeding the GPU and running the easy threads (audio, UI, OS, etc.). So I doubt you are going to see MORE work put on the GPU, since that would quickly bottleneck. And 60 FPS is the in thing now, which is going to further limit what devs can do.

By the same token, programming for serial operation, when that is a waste of resources, doesn't make sense on this generation of consoles. Programming will change to adapt to the hardware. Things will go much more parallel; running 14 threads on a console would likely see those threads split and run simultaneously between GPU and CPU. Why run 4 threads at a time and have many waiting, when you can run as many as you foreseeably need at once?

Except some tasks CAN'T be broken up. And you have to worry about all the extra bottlenecks, race conditions, priority inversions, and the like that occur only under VERY specific circumstances, which makes debugging a PITA. Then people whine and complain that the game engine stinks, how the game was released as a beta, etc.
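
For anyone who hasn't been bitten by one, here's about the smallest possible demonstration of the race conditions being described; the shared counter and loop count are just illustrative, build with -pthread:

#include <pthread.h>
#include <stdio.h>

static long hits = 0;                 /* shared, deliberately unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        hits++;                       /* load/add/store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000, but most runs print less, and a different number
       each time -- the "VERY specific circumstances" that make these
       bugs so painful to reproduce and debug. */
    printf("hits = %ld\n", hits);
    return 0;
}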
 




PC games are ALREADY using a good 60+ threads normally now; it's not that hard to create a thread to do a specific task.

Now, more often than not, only four or five do any meaningful amount of processing. And of those, maybe two or three can run at the same time. Hence the "badly threaded" argument.
 


Not really. You'd likely run right into "software lockout", where you spend so much time context-switching within the kernel that absolute performance starts to decline.

http://en.wikipedia.org/wiki/Software_lockout

It's the same basic problem as with multiple parallel threads: resource contention. Hence why you don't see linear scaling. Only in this case, the more cores you add, the smaller the performance increase gets, until you hit the threshold where you start to LOSE performance.

Besides, 4-core systems aren't bottlenecked yet, so you'd see ZERO performance benefit (and likely a performance loss) by going to 16.
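
To sketch what that curve looks like, here's the shape predicted by Gunther's Universal Scalability Law, which models exactly those contention and coherency (lock/context-switch) costs; the coefficients are assumed illustrative values, not measurements from any real system:

#include <stdio.h>

/* Gunther's Universal Scalability Law:
   speedup(N) = N / (1 + sigma*(N - 1) + kappa*N*(N - 1))
   sigma models contention (serialization), kappa models coherency cost
   (locks, context switches -- the "software lockout" above). */
int main(void)
{
    const double sigma = 0.05, kappa = 0.02;   /* assumed values */
    for (int n = 1; n <= 16; n *= 2) {
        double s = n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1));
        printf("%2d cores -> %.2fx speedup\n", n, s);
    }
    return 0;
}

With those numbers the curve tops out around 8 cores and actually drops at 16: the LOSE-performance threshold described above.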
 


Uhhh... no, since Ethernet (not even the connection, the actual Ethernet link) would become the primary system bottleneck. Anything relying on external access will slow down your processing.
 