AMD CPU speculation... and expert conjecture


juanrga



Cool! Yes, I recall now seeing the 3000 MHz ads on some sites. I checked the module I mentioned before and you are right: it was tested at 1.65 V.
 


Not really. MSVC > GCC, and the benchmarks prove it. GCC is a DOG by modern compiler standards.

As for your linuxforge link, I note that single operation performance at the end of the day has VERY minimal impact on application performance. Instruction optimization has a much greater impact on overall performance. So trying to measure, say, the time to context switch for a given compiler tells you NOTHING about the final result. This is where GCC stumbles the worst.

Finally, if you actually READ the detailed description: the benchmarks with the "D" indication were run with the Dispatcher in place, where you can see that SSE3 in particular generally gave no benefit to AMD [though SSE4 and above did]. The benchmarks without the "D" were run without the dispatcher in place, so all CPUs would be treated more or less equally. Hence why all the ICC modes show up twice in the final results.

And it's again worth noting that even then, ICC still outperforms every other compiler on AMD by a wide margin.
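
For anyone wondering what "the Dispatcher" actually means here: the runtime picks a code path based on the CPU it detects at startup. Below is a minimal, hypothetical sketch of vendor-string-based dispatch (this is purely an illustration of the concept, not Intel's actual dispatcher code); a dispatcher keyed on the vendor string rather than on feature bits is exactly the kind of thing that sends non-Intel CPUs down the generic path even when they support SSE3/SSE4.

```cpp
// Illustrative sketch only -- not ICC's actual dispatcher. Shows how a
// dispatcher keyed on the CPUID vendor string (rather than on feature bits)
// routes non-Intel CPUs to a baseline code path. Builds with GCC/Clang on x86.
#include <cpuid.h>
#include <cstring>
#include <cstdio>

static bool is_genuine_intel() {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return false;
    char vendor[13];
    std::memcpy(vendor + 0, &ebx, 4);  // "Genu"
    std::memcpy(vendor + 4, &edx, 4);  // "ineI"
    std::memcpy(vendor + 8, &ecx, 4);  // "ntel"
    vendor[12] = '\0';
    return std::strcmp(vendor, "GenuineIntel") == 0;
}

int main() {
    if (is_genuine_intel())
        std::puts("dispatch: vendor-specific SSE/AVX kernels");
    else
        std::puts("dispatch: generic baseline kernels");  // even if SSE3+ is supported
    return 0;
}
```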

This test is using old compilers. GCC in particular has really picked up speed in version 4.7, and MSVC also has some auto-vectorization improvements in MSVC 2012. So it's difficult to say how the current compilers compare. Plus, the tests themselves use old code.

Maybe because the article is a few years old? I don't have a more recent one to cite, though I doubt either MSVC or GCC has caught up to ICC by any significant amount. I'm sure a few benchmarks have improved, but it's not like ICC is standing still either... Also remember the benchmarks were compiled with different compilation modes; it's quite possible the POV results in the latest GCC go away if certain optimization levels are not used.
 


MSFT is where Commodore was 30 years ago.

Oh, wait a second...

What it comes down to, though, is that corporate users and gamers will be using Windows a while yet. Corporate, because they are very iffy about changing software, never mind that Windows has the best security of any OS [and it's not even close]; and gamers, because OpenGL stinks, limiting development for other OSes. [Really, it's the most obtuse API I've ever used. It doesn't even reflect how the hardware works anymore. No wonder Valve is doing a D3D->OpenGL wrapper when it ports its games to Linux, rather than using a native OpenGL renderer.] Never mind the backlog of legacy apps.
 

mayankleoboy1



Yes. It would be difficult for a third-party compiler developer to optimise their compiler to produce the best possible code. ICC would remain faster than GCC and MSVC (especially for Intel procs).

How likely is it that AMD will give the Intel compiler dev team a detailed guide to the internal hardware implementation of their processors, so that they can optimise code for AMD procs? ;)
 

8350rocks



It's much better than it was... but it's not D3D. The OpenGL board needs to get their act together and put out some newer stuff. The latest versions of OpenGL suffer primarily from the issue that you can do things 5 different ways, and none of them are clearly the "best" or "fastest" way. If they would put together an entire suite of tools like DX, that would really shore up some holes for them; however, they would still need to bring forth a few things. OpenGL 4.2 (4.3?) has all the functions of D3D 9 and many of the functions available in D3D 11. The language is a bit archaic in some respects, and they could stand to kill off the older legacy garbage left in the API, as most of that is convoluted and antiquated (or streamline it into a newer configuration).
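
To make the "do things 5 different ways" complaint concrete, here's a tiny sketch (purely illustrative, not from any real engine) of just two of the coexisting paths for submitting the same triangle in a compatibility-profile OpenGL context; buffer objects, VAOs, and persistent-mapped buffers add yet more ways on top of these.

```cpp
// Two of the several coexisting ways to draw the same triangle in OpenGL
// (compatibility profile). Purely illustrative; assumes a context is current.
#include <GL/gl.h>

static const float tri[] = { 0.0f, 1.0f,  -1.0f, -1.0f,  1.0f, -1.0f };

// Path 1: legacy immediate mode (deprecated since GL 3.0, still works).
void draw_immediate() {
    glBegin(GL_TRIANGLES);
    glVertex2f( 0.0f,  1.0f);
    glVertex2f(-1.0f, -1.0f);
    glVertex2f( 1.0f, -1.0f);
    glEnd();
}

// Path 2: client-side vertex arrays (also legacy, also still works).
void draw_client_arrays() {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, tri);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}

// Paths 3+ (VBOs, VAOs, persistent mapping) need an extension loader and are
// omitted here; the point is simply that all of these coexist in the API.
```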

Once CryENGINE 3 gets ported to Linux, I am going to dig back into OpenGL again and see what I can get it to do... I would love to do an AA/AAA title on Linux and Windows.
 


As a dev, you know what I want my graphical API to look like? Glide. Best. API. Ever.

As for CryEngine 3, I expect Crytek will do the same thing Valve did: make a D3D-to-OGL wrapper, rather than re-code the entire game engine for OpenGL.
 

8350rocks

That is a very valid point... I haven't worked with anything on Valve's Linux engine though... do you still use all the D3D API tools and then it's modified to interface with OpenGL? Or is it a situation where you're stuck using both tools?
 

blackkstar



I've said something about this earlier. Compare market shares. I do not feel like looking them up yet again but AMD's situation looks something like this:

server marketshare: ~5%
overall marketshare: ~17%
steam gamer marketshare: ~27%.

AMD's market presence is far, far stronger for gamers. Why would you focus on server chips, which AMD is struggling with, when gaming chips are holding level or actually gaining marketshare? Doesn't that seem kind of backwards to you?

I lied, I did google it:
http://www.trefis.com/stock/amd/articles/193473/heres-how-amd-can-recapture-its-lost-server-market-share/2013-06-28

http://www.slashgear.com/pc-sales-to-decline-in-2012-for-the-first-time-in-11-years-10251339/

Alright, so 9.7 million x86 server CPUs shipped in 2012; AMD had 4.4% of those, ~426,800 chips.
Desktops were 352 million; AMD had 16% of those, ~56 million chips.

So, does someone want to explain to me why you still think AMD is going to focus on servers? Even if AMD makes 10x more per Opteron, it's not enough. And this isn't even accounting for AMD's (relatively) large market share in the Steam hardware survey.

Looking at that, it seems completely idiotic to make a server chip first and then adapt it to desktop. It should be desktop chips adapted to servers, as you're going to sell more desktop chips.

I mean, correct me if I'm wrong, but all those numbers work out, right? Unless I really screwed something up, I see no reason for AMD to prioritize server chips over desktop like they have been, where desktop chips are just re-purposed server chips.
 


Question: who cares about market share? All that measures is product you sold in the past.

Product margins are the ONLY real statistic that matters: how much you make per product sold. If the margins are high enough, even with very low market penetration, you could easily out-profit other, higher-penetration but lower-margin product lines. And the margins for server chips are far and away above the margins for desktop/mobile parts. Why do you think SPARC and MIPS are still hanging around, making money? Why do you think ARM is desperately trying to get into the server space? Margins.

Even though AMD sells fewer chips there, I can guarantee they make more profit overall from selling server chips than they do selling their desktop ones.
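
To put toy numbers on that margin-vs-volume argument (all figures below are made-up round numbers, not AMD's actual volumes or margins), even at 100x fewer units sold, a high-margin line can come out ahead:

```cpp
// Toy back-of-the-envelope for the margin-vs-volume point above.
// Every unit count and per-chip margin here is an invented round number.
#include <cstdio>

int main() {
    const long long desktop_units  = 50000000;  // hypothetical desktop volume
    const double    desktop_margin = 10.0;      // hypothetical profit per desktop chip ($)

    const long long server_units   = 500000;    // hypothetical server volume (100x fewer)
    const double    server_margin  = 1500.0;    // hypothetical profit per server chip ($)

    std::printf("desktop profit: $%.0f\n", desktop_units * desktop_margin); // $500,000,000
    std::printf("server  profit: $%.0f\n", server_units  * server_margin);  // $750,000,000
    return 0;
}
```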

Besides, AMD chips are not well suited for mobile, and desktop is slowing down (at least near term). Server is really the one market they could theoretically grow in.
 


Haven't toyed with it recently, though I'd imagine the toolchain hasn't been touched in any significant way. Any D3D API calls probably go through the same wrapper.

Sub-optimal? Yes. But it cuts costs and eases development.
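
For a feel of what that "same wrapper" idea looks like, here is a minimal, hypothetical sketch of a thin D3D-style-to-OpenGL translation layer. The class and method names are invented for illustration; Valve's real layer is far more involved.

```cpp
// Minimal sketch of a thin D3D-style-to-OpenGL translation layer, in the
// spirit of the wrapper discussed above. Names are invented; assumes a GL
// context is current and the relevant buffers are already bound.
#include <GL/gl.h>

class D3DStyleDevice {
public:
    // D3D-style Clear() -> glClearColor + glClear.
    void Clear(float r, float g, float b, float a) {
        glClearColor(r, g, b, a);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }

    // D3D-style SetViewport() -> glViewport. (The flipped Y convention
    // between the two APIs is one of the real headaches such layers handle.)
    void SetViewport(int x, int y, int w, int h) {
        glViewport(x, y, w, h);
    }

    // D3D-style DrawIndexedPrimitive() -> glDrawElements, assuming a
    // 16-bit index buffer is bound as the element array buffer.
    void DrawIndexed(unsigned indexCount) {
        glDrawElements(GL_TRIANGLES, static_cast<int>(indexCount),
                       GL_UNSIGNED_SHORT, nullptr);
    }
};
```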
 

Cazalan



Market segments with the largest growth potential are indeed these "light" servers. That is what AMD is focusing on and it does make sense for them.

Couple that with the fact that even the higher-end compute servers are moving to a higher GPU-to-CPU ratio. The need to cram as many high-cost CPUs as possible into a 2U/4U rack is waning. AMD doesn't have the funds to take big risks in a market that is moving sideways at best.

I think that if Cray hadn't had a sizable order for Bulldozer a year before it was ready, it might have been cancelled.
 

8350rocks



Hmm, interesting...D3D on Linux...LOL...never thought I would see the day that would become a possibility. Though it's really a good thing, as OpenGL hasn't kept up.
 


I think I posted this in one of the old VGA threads last year... But I'll put it here again.

http://www.phoronix.com/scan.php?page=article&item=mesa_gallium3d_d3d11&num=1

I haven't kept track of Gallium's development, but it's slow, to say the least.

Cheers!
 

hcl123

That is interesting... a Gallium3D "state tracker"...

Direct3D used to be supported, but only via the Wine layer... a kind of "emulation"... now this could be much better; I wonder how it fares!
 

GOM3RPLY3R



Two points:

- Most Sandy Bridge-E 3930Ks overclocked to 4.5 GHz will do about 1.28 V max. A little searching finds that many people recommend, and use, ~1.26 V-core. For 5 GHz, I can see it needing ~1.30-1.32 V, but nothing more than that, or else you are literally burning your CPU to a crisp.

- I did not say that the Intel chips were completely under control; for the record, they do get hot. However, under load (if you are comparing CPUs from a certain time frame, say 2012), AMD's chips will run somewhat hotter than an Intel doing a similar amount of work. Obviously, if you are a noob and can't find the lowest stable V-core for a given clock, then your CPU is gone within a few months, and it'll run hotter, making it perform worse.

Obviously, most CPUs overclocked by more than ~0.5 GHz won't work on a stock cooler, and that's when you get a water cooling system!

I watched a video (actually two) a few months back with the Corsair H100i on a 3930K and a 3770K. Both CPUs, on stock V-core, were able to OC pretty high. I believe the 3770K got to 4.3 GHz, and the 3930K got to 4.0 GHz. That is obviously right on the edge where it'll stop working and restart, so 0.1 or 0.2 GHz down would make it stable. That being said, increasing the V-core past 1.4 V, especially on a 3930K, is just a stupid mistake, unless you are using liquid nitrogen and going past 6.0 GHz.

Also, even if the FX-8350 got over 8.0 GHz, was it under load? Keep that in mind, as it was not. The few tests that did apply load at that clock failed because the chip got too hot. (Estimation) Eight physical cores clocked that high could probably do about as much work as a non-HT 3930K running at ~6.5-7.0 GHz with two fewer cores, doing that same work at a much lower temperature.

You may think I'm just applying "less clock = lower temps", but just look at a 4.0 GHz i7-3770K vs a 4.0 GHz FX-8350: the i7 will run cooler on the same water cooling system, maybe even on an air cooler.
 

hcl123

I still believe AMD roadmaps are not "misleading" (can be wrong of course)...

Kaveri "mobile" will be here with some vendors ready for the cristmas season.

What will be late is the "desktop" version, a "different" fab process strikes the mind, simply 28nm "bulk" is already too late on Glofo to be "reasonable"... they are 2 years behind TSMC.

Of course "bulk" is crap for high performance designs, only Intel has the money to push it to decent levels, and they need finfet, which will take yet 2 years more for others to catch up.

But FD-SOI is kind of straightforward and is much better than bulk even with finfets... Glofo pointed long ago 2014 as the time for full production ramp-up, if that is the case, nothing is really late.
 

hcl123



Don't know, never bought one, but AFAIK those systems also don't fit in any "case" (box)... the condensers are "bulky"; check that out before you jump into one.

 

GOM3RPLY3R


+1. I could agree that he's not a troll. The only bad thing is that he exaggerates sometimes, but that's it.
 


Aren't those the successors to the compressors used to cool the northbridge in that classic THG video? I think they started to be integrated into cases.
 

juanrga



Interesting read. I expected Carrizo to be based on a kind of Steamroller '2.0' (somewhat as Richland was Piledriver with some tweaks), but they confirm that Carrizo will use Excavator cores. Also interesting that Carrizo will be 65 W; Kaveri's top model is 100 W.

 


When you have an 8350 with way better packaging (better than Intel's, IMO), you're gonna have a bad time.
 