AMD CPU speculation... and expert conjecture


abitoms

Distinguished


That's actually great. Some might say we've run into a bottleneck, but I would say the current consoles are well on the way to fulfilling one of their roles, namely making it easy for programmers to get the maximum out of their resources. Now the game developers can focus more on the actual game.

(For example, the article linked in your post talks about how the number of AI NPCs has been drastically increased. That will surely make for a richer game experience.)
 

cemerian

Honorable


And here I am with a dual core Pentium and a GTX 670, both overclocked, running it at 1080p on high settings with SSAO and FXAA. The framerate jumps around, 32-50 fps almost all the time, and console fanboys were thinking a dual core PC won't run this game, lol. But I have to say the game somehow doesn't look that great: it's a good looking game in some places, while in others it just looks plain ugly. I miss the old times when they actually released consoles at huge losses and made the money back on game sales. Whatever happened to that? Now the new consoles are already at the edge of what they're capable of. What will they do in another year, when PC hardware will be a couple more generations ahead of them?
 

abitoms

Distinguished


I could be wrong, but I think the point of consoles is to provide game developers a fixed hardware target to aim their game/software development at. The benefit of not having to cater to a broader hardware base is that, ceteris paribus, the game doesn't get diluted in terms of performance, time-to-market, or graphics quality compared to a PC version. Note that I said ceteris paribus, since there is more potential for things to go right in console game development; how much of that potential gets translated into reality is entirely up to the game studio.

(BTW, I read your comment again, and I appreciate that you made both a positive and a negative observation about the game's performance/quality on your system. We need more posters with balanced opinions :) . Appreciated.)
 


Well, that improvement makes sense, given consoles went from 7800 GTX/2900 XT variants to something around 7870-level performance. Especially since, on DX/OGL, copying a texture is relatively cheap; that's how L4D got so many zombies on screen: take a few base models and swap parts around.
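
For anyone curious what "swap parts around" looks like in practice, here's a rough, purely illustrative sketch; the part and texture names are made up rather than actual game assets, but the combinatorics are the point:

```python
import itertools
import random

# Hypothetical part pools -- a handful of authored assets per slot.
HEADS    = ["head_a", "head_b", "head_c", "head_d"]
TORSOS   = ["torso_a", "torso_b", "torso_c"]
LEGS     = ["legs_a", "legs_b", "legs_c"]
TEXTURES = ["skin_pale", "skin_dark", "shirt_red", "shirt_blue"]

def spawn_zombie(rng=random):
    """Compose one on-screen variant from shared parts; only the
    combination is unique, every mesh/texture is already loaded."""
    return (rng.choice(HEADS), rng.choice(TORSOS),
            rng.choice(LEGS), rng.choice(TEXTURES))

# 4 * 3 * 3 * 4 = 144 distinct-looking combinations from 14 assets.
print(len(list(itertools.product(HEADS, TORSOS, LEGS, TEXTURES))))
print(spawn_zombie())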

The limits are on the CPU, given there was a downgrade in performance this generation over the last one. It's not like devs don't know how to squeeze performance out of x86 by this point, so there aren't a lot of tricks they can do to get anything else out of the CPUs.
 



8350rocks

Distinguished
You are not thinking far enough ahead to the next GPU from AMD. I cannot say more, because, well... because.

AMD recently announced something that gives them an advantage over Nvidia, and all of you have overlooked it.
 
The limits are on the CPU, given there was a downgrade in performance this generation over the last one.

That is not true. The current CPUs being used are better in overall performance than the ones used in the 360 and PS3. Jaguar is more advanced and has more processing power than both Xenon and Cell. It also has more memory bandwidth and better prefetch / instruction execution. Heck, Xenon was designed on a 90nm process and only later scaled down to 65 and eventually 45nm.

There was no "downgrade", not remotely. What has happened is that high-end PC hardware has advanced significantly faster than low-end, and consoles are just low-end hardware in custom-designed chassis. Software designers are trying to do more with the hardware than they ever did before; this is evident in the amount of AI and environmental effects present. You, and others like yourself, are now comparing the recent consoles to what you can get with a good gaming PC, which is horribly biased and unfair to the consoles. The really funny part is that if it were an Intel Atom inside (that's the comparable Intel CPU), people would be making all sorts of excuses to justify the low performance consoles have. Consoles have always had low performance and always will; it's why they are consoles.
 

Cazalan

Distinguished


If it's announced, then do tell. ;)
 

blackkstar

Honorable


At the time, a ~300mm² 28nm chip was about the biggest you could make. In fact, the fully enabled GK104 in the GTX 680 yielded terribly, to the point where cards were very hard to find. Nvidia claimed it was demand, but the GTX 670 came out with part of the die disabled and there were plenty of those. If the chip had been bigger, GK110-sized, it would have been a disaster.

Once the process matured, you saw prices drop as yields improved, and you also saw bigger dies eventually show up. The 7970's and GTX 680's high launch prices were due to poor yields on 28nm.

The fight when AMD and Nvidia were on the same process node ended up being about who could use die space as efficiently as possible, mainly because an AMD GPU the same size as an Nvidia GPU would yield similarly (though not exactly the same, since you'd have different densities and so on).

However, AMD moving to Samsung/GloFo (I consider them the same since they are going to share technology from now on) while Nvidia stays on TSMC means the days of AMD and Nvidia competing on the same process, under the same constraints, will be over. If TSMC or SamGloFo ends up with significantly better yields, performance, etc. than the other, then AMD or Nvidia will have a huge advantage over the other that didn't exist before.

And as others have mentioned, Nvidia going after Samsung legally is going to basically cut Nvidia off from SamGloFo. So if SamGloFo ends up kicking TSMC's bottom, Nvidia is going to be stuck on an inferior process with nowhere else to go. If the difference between processes is big enough, and we end up in a situation where a 600mm² SamGloFo chip has no yield issues while a 600mm² TSMC chip has terrible ones, Nvidia will be stuck with a huge, expensive chip that's difficult to make while AMD can flood the market with abundant 600mm² chips and do serious damage to Nvidia.
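
To put rough numbers on why die size magnifies any process gap, here's a back-of-envelope sketch using the classic Poisson yield model; the defect densities are made-up illustrative values, not figures from TSMC or Samsung/GloFo:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

DIE = 600  # mm^2, the hypothetical big chip discussed above
for label, d0 in [("healthy process, D0=0.1/cm^2", 0.1),
                  ("struggling process, D0=0.5/cm^2", 0.5)]:
    print(f"{label}: ~{poisson_yield(DIE, d0):.0%} of dies defect-free")
# healthy process, D0=0.1/cm^2: ~55% of dies defect-free
# struggling process, D0=0.5/cm^2: ~5% of dies defect-free
```

A defect-density gap that barely matters on a small die turns into roughly a factor-of-ten difference in good dies per wafer at 600mm², which is exactly the scenario being described.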

And then we have the HBM rumors. WCCF (so huge grain of salt, of course) was calling for ~550GB/s of bandwidth. There's going to be no contest between Fiji and the GTX 980 at higher resolutions like 4K thanks to that. In fact, you're probably going to see Fiji humiliating GM204/GM200 in anything with MSAA or at high resolution. It won't matter how efficient or great GM200 is: if the GPU is starved for data due to lack of bandwidth, it's not going to matter at all if the GPU is twice as fast as Fiji. It'll starve.
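
As a rough sense of scale (using the rumored 550GB/s figure above and the GTX 980's published 224GB/s, both theoretical peaks rather than achievable numbers), here is the per-frame bandwidth budget at 4K/60:

```python
# Back-of-envelope: divide quoted peak bandwidth by a 60 fps target,
# then spread the per-frame budget over every pixel of a 4K frame.
PIXELS_4K = 3840 * 2160  # ~8.3 million pixels

for name, gb_per_s in [("rumored Fiji HBM", 550), ("GTX 980 GDDR5", 224)]:
    gb_per_frame = gb_per_s / 60
    bytes_per_pixel = gb_per_frame * 1e9 / PIXELS_4K
    print(f"{name}: {gb_per_frame:.1f} GB/frame, "
          f"~{bytes_per_pixel:.0f} bytes of memory traffic per pixel")
# rumored Fiji HBM: 9.2 GB/frame, ~1105 bytes of memory traffic per pixel
# GTX 980 GDDR5: 3.7 GB/frame, ~450 bytes of memory traffic per pixel
```

Multiple render targets, overdraw, texture fetches and MSAA resolves all eat into that per-pixel budget, which is why the gap matters more as resolution goes up.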

And of course, we have to consider that bandwidth numbers for GPUs are all theoretical, and real-world performance is not what they claim. It's been documented at Tom's that Nvidia GPUs (at least Kepler) fall much further short of their theoretical bandwidth than AMD's do. You can see it when Nvidia does well at low resolution (like 1080p, sorry guys) with no MSAA, and then at Eyefinity or 4K AMD can, in some games (not all), walk away with a victory, even when they don't do so well at lower resolutions.

Things are going to get very interesting. I have a really good feeling Nvidia knows what AMD is up to, and they've marketed Maxwell parts on efficiency as opposed to raw performance. My guess is that Nvidia knows Fiji and company will make Maxwell look awful on raw performance, so they're bracing for the marketing angle of "but the GTX 970 is so much more efficient, even though it's slower, so it's still good!" It just seems so weird to be in a position where Nvidia has the efficient parts and AMD has the big, hot, power-guzzling, high-performance parts.

My theory is that you can tell how much of a beat-down AMD is going to deliver by how much Nvidia focuses on non-performance things like efficiency and software benefits. I'm expecting to hear a whole lot of "but even if the Nvidia card is slower, the drivers are better, you have FXAA, G-Stink, etc." I realize those aren't selling points to some people, and not things I or you will necessarily agree with, but some will be all over them. My point is that Nvidia won't have performance to argue against AMD with, so they're preparing to push their other technologies to add perceived value to their products beyond raw performance.


EDIT: maybe a better name for Samsung + GlobalFoundries is GloSung?
 

Cazalan

Distinguished


AMD may have taken the strategy of Xilinx and started partitioning their GPUs into multiple dies to combat the yield issues. Since they require an interposer for HBM anyway, they could go with, say, a 1024SP die and bridge four of those together, each with its own HBM stack. They would get to purchase one GPU die in bulk and then, in the packaging process, provide multiple tiers of product.

Keep in mind Tonga is 359mm² and 5 billion transistors for 1792SP. If Fiji really is 4096SP, that could put it at ~10 billion transistors. Yields would likely be disastrous for a 10B-transistor monolithic die.
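
The ~10 billion figure is roughly what you get by scaling Tonga's published numbers linearly with SP count; here's the arithmetic, with the split between shader array and non-scaling "uncore" being a made-up assumption purely for illustration:

```python
# Naive linear scaling from Tonga's published numbers (359 mm^2,
# ~5.0B transistors, 1792 SPs) to a rumored 4096-SP Fiji.
tonga_transistors = 5.0e9
tonga_sps = 1792
fiji_sps = 4096

naive = tonga_transistors * fiji_sps / tonga_sps
print(f"naive linear scaling: {naive/1e9:.1f}B transistors")   # ~11.4B

# Assume (made-up split) ~30% of Tonga is uncore that doesn't scale
# with SP count -- that lands near the ~10B ballpark quoted above.
uncore = 0.30 * tonga_transistors
scaled = uncore + (tonga_transistors - uncore) * fiji_sps / tonga_sps
print(f"with fixed uncore:    {scaled/1e9:.1f}B transistors")   # ~9.5B
```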
 

Embra

Distinguished


Is 4096SP about equal to the HD7990?
 


Well, AMD's strategy of late is "building blocks" for semi-custom and such, so this would be plausible...
 

Cazalan

Distinguished


I recall that, but not all tape-outs go to production. AMD's not really in a position to eat bad yields like Nvidia did.
 

jdwii

Splendid


http://www.eurogamer.net/articles/digitalfoundry-2014-assassins-creed-unity-performance-analysis

The CPU is stressed so much that the Xbox One comes out ahead. Wow.
 

jdwii

Splendid
About the "Nvidia is doomed" thing: just remember how long Nvidia had ATI beat with the 8800 GTX; it took AMD two generations, I think (the ATI 4870), to beat it. If I remember correctly, Nvidia was going to skip 20nm and go straight to 16nm FinFET, but from the sounds of it that probably won't happen for two years. Perhaps we might see a couple of products from Nvidia built on it, but not the whole generation.
Old, but still, from 2014:
http://www.kitguru.net/components/graphic-cards/anton-shilov/nvidia-may-skip-20nm-process-technology-jump-straight-to-16nm/

Mobile parts from Nvidia using 16nm in 2015
http://www.tomshardware.com/news/tsmc-apple-nvidia-denver-finfet,27538.html

http://www.kitguru.net/apple/anton-shilov/samsung-to-produce-14nm-finfet-chips-for-amd-apple-qualcomm-report/

"It is very intriguing what AMD could produce at Samsung using 14nm process technology. In case AMD indeed manages to make a mobile SoC at 14nm FF node in the first half of next year, it could probably finally enter the market of tablets with a rather competitive offering. Unfortunately, based on AMD’s roadmap, it only plans to release 20nm project Skybridge accelerated processing units and SoCs next year. While it is possible that AMD could make a surprise with a 14nm chip, the company is not known for surprises…"

My main concern with AMD is that they are too quiet, and the history of that is never good (either the product won't be out for a while or they have a dud). When the 7970 was finished, AMD rushed to show it off as soon as possible because they knew they had a winner.

If it weren't for Jim Keller and now Samsung, I wouldn't have much faith left in AMD.
 

8350rocks

Distinguished


20-22nm never really was a winner... Intel never got much out of it, and others soon discovered there were some interesting issues with the node that made alternatives more appealing (an odd inability to produce better yields, it being a half-node shrink instead of a full node, and a hybrid process or FinFET looking like the better option), hence 28nm hanging around for a while.
 


Sounds like the game is terribly optimized and they tried to polish it more on the Xbone. Not only that, but it runs like crap on PC because it is a GameWorks title.
 

jdwii

Splendid
^^^^ But the game is reported to use 4 cores very well on the PC, and I'm guessing that, like Watch Dogs, it can use 8. Looking at the graphics, it's some of the best I've ever seen in my life.

I'm not really even into these games anyway; I've tried them and said no several times now, but the series is very popular.
 

8350rocks

Distinguished
@esrever: I always thought it was interesting how Nvidia is supposed to have the better drivers... but the Nvidia-branded PC ports are always garbage. Meanwhile, the AMD ports are usually really playable, and yet AMD supposedly has bad drivers?
 


Let's see when it comes out, how many of the threads are for graphical load :p

Cheers!
 


The driver situation has been this way for over ten years, and it's an issue with the original ATI software. People keep forgetting that AMD didn't create their own graphics platform; they bought it from another company and then integrated that entire company into themselves. So when we speak about "AMD" we need to remember there are two major divisions that operate separately from each other: the one that makes CPUs and the one that develops graphics. The AMD/ATI graphics folks have always had sketchy drivers compared to Nvidia's, though they are still light years ahead of Intel's graphics drivers.
 