Microsoft to Unleash More of Xbox One's GPU Power


iatemyelf

Honorable
Jun 14, 2013


You're confusing DirectX on console hardware with DirectX on PC. There's absolutely no reason for Microsoft to swap out its own low-level API for a competing low-level API.
 

innocent bystander

Distinguished
Nov 13, 2009
If the Tomb Raider reviews are any indication, it doesn't sound like they had a choice... there seems to be a rather large practical performance difference between the two systems right now.
 

stevejnb

Honorable
May 6, 2013
Meh, either desperation or marketing speak on their part, or both. It seems obvious their console is lagging slightly in power this gen, and they really should play to its strengths rather than try to fight a battle they likely won't win. Besides, more powerful consoles usually don't "win". The most successful console ever, the PS2, cost more and was less powerful than the competition. Lots of other factors come into it.

And lastly, if you want power, get a PC.
 
I like how everyone is quick to compare the slowest temporary memory (RAM) when talking about the two units, but the topic of the faster temporary memory (cache) or core clock isn't mentioned much. These are both points where the Xbox One is superior from a hardware specification standpoint.

From a practical standpoint, the unit is near silent even while gaming, and it has more games and more services out of the box.

The Xbox One is a great gaming machine for the entertainment center.

But I agree with stevejnb... If you really want something superior, get a PC.
 

warezme

Distinguished
Dec 18, 2006
From a hardware standpoint, both the PS4 and Xbox One are boat anchors for the gaming industry as a whole. MS chose to hold the Xbox One to gaming standards from the year 2000: 720p at 30fps. Really? Sony was a little more ambitious, but only slightly. Because of these "modern" relics, the gaming industry can kiss the dream of 4K gaming goodbye unless developers target the PC exclusively.
 

grimzkunk

Distinguished
Mar 10, 2011
You guys don't seem to understand that it's always going to get bigger. There's a need for a bigger screen in every house; you just don't realize it yet. Samsung will do it again, and Apple will follow (let's hope). I think the term "tablet" will have to go away someday if we want faster improvement. MS and Samsung released the SUR40 touch table too early, but I'm sure everyone will have that kind of thing in their living room in the near future.
 

xenol

Distinguished
Jun 18, 2008
On the note about the use of DirectX on the XB1, I would think Microsoft created a specialized version just for the XB1, since they already know the hardware going into it. If it's an optimized or embedded version, then Mantle is kind of redundant for the consoles.
 
Sep 22, 2013
Latency is hardly such a major factor here that DDR3 in the Xbone will actually give the system a real-world advantage. I'm pretty sure we don't have any details on the actual latency of the RAM in either system, so this is ALL speculation. Why MS went with DDR3 vs. GDDR5 is a mystery; for a device that is 99% graphics oriented it just doesn't make sense. I think this move (the CPU reservation) is probably a response to developers who have found the system's limitations vs. the PS4 to be a technical difficulty. It seems like yet another marketing ploy that will have Xbox fanboys saying "yay" and everyone else calling BS. I think it will work about as well as their "clock speed increase" just before release.
 


mortonww

Distinguished
May 27, 2009
A 10% performance boost doesn't help the Xbox "inch closer" to the PS4, which has about 0.5 TFLOPS more performance. The gap between 1.31 and 1.84 is large enough that the Xbox One finding a way to get 8% more of its theoretical max doesn't really matter that much. Anyway, what's 1.08 * 30 fps? That's right, 32.4 fps. So, yeah, this doesn't matter.
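
For what it's worth, here's the quick math as a Python sketch, using the 1.31/1.84 TFLOPS figures cited above (the usual launch-spec numbers, nothing measured):

# Back-of-the-envelope math from the post above.
xbox_tflops = 1.31   # Xbox One theoretical peak, as cited above
ps4_tflops = 1.84    # PS4 theoretical peak, as cited above

gap = ps4_tflops - xbox_tflops            # ~0.53 TFLOPS absolute gap
advantage = ps4_tflops / xbox_tflops - 1  # ~0.40, i.e. PS4 has ~40% more raw compute

boost = 1.08                              # the ~8% of theoretical max being freed up
print(f"Gap: {gap:.2f} TFLOPS, PS4 advantage: {advantage:.0%}")
print(f"30 fps * {boost} = {30 * boost:.1f} fps")  # 32.4 fps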
 
Sep 22, 2013
Admittedly, I had no idea that Xbone was only 30fps and doesn't even do true 1080p.
Those two points alone are reason enough that this PC gamer will stick to PS4 when that must-have exclusive finally launches.

Ultimately, I love gaming so I will eventually get a PS4 for the things it does conveniently that my PC does not.

But I won't ever again mention Xbone and "next gen" in the same sentence. <1080p at 30FPS isn't next gen at all.
 

alextheblue

Distinguished
A comment on the memory: Xbox One has a higher theoretical bandwidth than the PS4 due to the embedded memory.
It really doesn't. Effective memory bandwidth to the 32MB of eSRAM is probably similar to the PS4's GDDR5 memory interface, but definitely not higher. The difference is that on the Xbox One you only get that bandwidth to your most frequently used data. The main performance bottleneck on the Xbox One is not memory bandwidth, the GPU is simply not as powerful as the one in the PS4, especially in terms of ROPs. The Xbox One has 16, the PS4 has 32. This has been speculated to be the primary performance deficiency in the Xbox One, and is probably why it can't seem to keep up with the PS4 at higher resolutions even though graphics fidelity looks nearly identical.
Well, he's actually right. You're not factoring in main memory. You can use the eSRAM and the main DDR3 at the same time. How well the eSRAM performs depends on how it's used, so it's down to the developers and the workload. So yes, peak system bandwidth is higher, and average bandwidth is about the same. Talented developers will no doubt take better advantage of it. The on-die eSRAM is also extremely low latency, much lower than the DDR3, which in turn has lower latency than GDDR5. Of course, this probably won't be a factor, at least until we see developers really working to squeeze the CPU side dry as well.

As for ROPs? I don't think the XB1 is ROP bound. There are a couple of reasons Sony went with 32 ROPs. First, their only choices were 16 or 32, and they're packing a lot more shaders. Second, ROP performance scales with GPU clock; the PS4's clockspeed is a bit low, so 32 ROPs was the obvious choice. The XB1, on the other hand, has fewer shaders and a slightly higher clockspeed, so 16 ROPs is probably more than adequate. There's at least one article on Anandtech that mentions this.

The bottom line is that PS4 has the raw GPU performance advantage. It's impossible to ignore this. Even with a slightly lower GPU clock, the PS4 has something like ~40% more raw GPU compute. If you end up being bandwidth limited, they are about the same, however. The only wildcard factors helping the Xbox One are things like CPU clockspeed advantage, memory latency, eSRAM, and the Move engines. But a lot of how much you benefit is developer dependent. A multiplatform title probably won't make the best use of the XB1's hardware in the first place.
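
For what it's worth, here's a quick Python sketch of the peak-vs-effective point. The bandwidth numbers are the ones widely reported around launch (68.3 GB/s DDR3, ~204 GB/s theoretical eSRAM peak, 140-150 GB/s "actual" eSRAM, 176 GB/s PS4 GDDR5); treat them as assumptions for illustration, not figures from this article:

# Rough bandwidth comparison using launch-era figures (assumed, for illustration only).
ddr3_bw    = 68.3    # GB/s, Xbox One 256-bit DDR3-2133 main memory
esram_peak = 204.0   # GB/s, Microsoft's revised theoretical eSRAM peak
esram_real = 145.0   # GB/s, midpoint of the 140-150 GB/s range MS has quoted
gddr5_bw   = 176.0   # GB/s, PS4 GDDR5 main memory

print(f"XB1 peak (DDR3 + eSRAM):    {ddr3_bw + esram_peak:.0f} GB/s")  # ~272 GB/s on paper
print(f"XB1 typical (DDR3 + eSRAM): {ddr3_bw + esram_real:.0f} GB/s")  # ~213 GB/s, but only to 32MB of eSRAM
print(f"PS4 GDDR5:                  {gddr5_bw:.0f} GB/s")              # to the whole 8GB pool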
 

dragonsqrrl

Distinguished
Nov 19, 2009


That's interesting because I'm actually referencing the Anandtech article:

http://www.anandtech.com/show/7528/the-xbox-one-mini-review-hardware-analysis/2

According to the article the eSRAM doesn't function as a cache and has to be mapped to main memory. Anand didn't mention a 'peak' bandwidth where the two memory interfaces could compound, although it seems like there would be a lot of limitations to such a situation (things like CPU accesses requiring a copy to main memory). After re-reading it seems like the only figure he gave was for average bandwidth to the eSRAM alone:

"Microsoft has claimed publicly that actual bandwidth to the eSRAM is somewhere in the 140 - 150GB/s range, which is likely equal to the effective memory bandwidth (after overhead/efficiency losses) to the PS4’s GDDR5 memory interface... It’s still not clear to me what effective memory bandwidth looks like on the Xbox One, I suspect it’s still a bit lower than on the PS4"

Could you provide a source that discusses the 'peak' bandwidth you're referring to? I'd like to find out more about this myself. The impression I'm getting is that the Xbox One 'can' achieve similar memory bandwidth to the PS4, but that overall its memory implementation is less flexible and has more restrictions on achieving that bandwidth parity.

And as for the ROPs? It's funny you should say that, because that's exactly Anand's theory for the initial resolution discrepancies between the two systems:

"Typically AMD’s GPU targeting 1080p come with 32 ROPs, which is where the PS4 is, but the Xbox One ships with half that. The difference in raw shader performance (12 CUs vs 18 CUs) can definitely creep up in games that run more complex lighting routines and other long shader programs on each pixel, but all of the more recent reports of resolution differences between Xbox One and PS4 games at launch are likely the result of being ROP bound on the One."
 

demonhorde665

Distinguished
Jul 13, 2008
If you ask me, MS's whole design approach to the Kinect is just terrible implementation. The camera module is freaking huge, yet it relies on system resources to do its work... it makes no sense. With the housing on that thing, there's no reason they couldn't have just slapped a low-end GPU and CPU with some RAM into the camera itself. It would have hardly increased cost and would drastically boost the camera's body-tracking performance. I'm talking something cheap, like a dual-core ARM CPU with the GPU performance of an 8-10 year old GPU. That would still be way more power than the 10% the camera leeches off the system's GPU, yet remain cheap. Best of all, it would completely free up all system resources for pure gaming functions.
 

demonhorde665

Distinguished
Jul 13, 2008
People keep quoting this false figure that MS was aiming for 720p at 30fps... this is a flat-out LIE. The lie stemmed from the fact that the CoD devs dropped true 1080p to achieve 60fps in their game. Of note, the game does "upscale" to 1080p to compensate, but that is beside the point. Given the actual hardware under the hood, and being someone who knows general game performance, I'd wager the Xbox One could easily handle 1080p at 40-50 fps averages, which is more than playable. The idea that the Xbox One only hits 720p at 30fps is f---ing absurd.
 


This is true. Forza operates at 1080p native. There's nothing in the Xbox One spec that says every game is limited to 720p and 30fps. A lot of misinformation is perpetuated by the ill-informed.
 
Sep 22, 2013


Just to be clear, I was not actually saying it didn't do 1080p, just that I read that.

However, if it can't hold a steady 60FPS at 1080p in *every* title, then it's simply underpowered and NOT next-gen.

Upscaling is a nice term for "taking something that isn't 1080p and stretching it so it fits" on a 1080p screen. If a game isn't rendered at 1920x1080 (with overscan accounted for; most 1080 images in games and video are actually slightly larger, if only by a few pixels) but is instead upscaled, it doesn't have the same pixel density as a true 1080p image.

FPS will be title-dependent, but the system should be *capable* of running a texture-heavy full 1080p 60FPS game if the game demands this. Otherwise the term "next-gen" means about as much as "HD" on 360.
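
To put numbers on the pixel-density point, a quick Python sketch (900p is just thrown in as a common sub-1080p example, not something from this thread):

# Pixel counts for native vs. upscaled resolutions.
def pixels(width, height):
    return width * height

p1080 = pixels(1920, 1080)  # 2,073,600 pixels
p900  = pixels(1600, 900)   # 1,440,000 pixels (example upscale source)
p720  = pixels(1280, 720)   #   921,600 pixels

print(f"1080p has {p1080 / p720:.2f}x the pixels of 720p")  # 2.25x
print(f"1080p has {p1080 / p900:.2f}x the pixels of 900p")  # 1.44x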
 


If that's so, then in truth neither the PS4 nor the XB1 is truly next gen.

Well, I've never considered them next gen, since their parts have been around in PCs for almost two years now.

In reality, neither will run games at 1080p/60FPS the way a PC currently can. Take even Max Payne 3, which came out in 2012; I doubt that even the PS4 could push the same level of detail as the PC version at 1080p and sustain 60FPS.

Or take another game: Skyrim with the high-res texture pack, for example, would choke both systems.

So in reality, neither is truly next gen; rather, they finally let games move to superior hardware and tech. We should see a lot more DX11 titles in the coming years, but nothing they put out will outdo what the PC does.
 

alextheblue

Distinguished
That's interesting because I'm actually referencing the Anandtech article:

http://www.anandtech.com/show/7528/the-xbox-one-mini-review-hardware-analysis/2

According to the article the eSRAM doesn't function as a cache and has to be mapped to main memory. Anand didn't mention a 'peak' bandwidth where the two memory interfaces could compound, although it seems like there would be a lot of limitations to such a situation (things like CPU accesses requiring a copy to main memory). After re-reading it seems like the only figure he gave was for average bandwidth to the eSRAM alone:

"Microsoft has claimed publicly that actual bandwidth to the eSRAM is somewhere in the 140 - 150GB/s range, which is likely equal to the effective memory bandwidth (after overhead/efficiency losses) to the PS4’s GDDR5 memory interface... It’s still not clear to me what effective memory bandwidth looks like on the Xbox One, I suspect it’s still a bit lower than on the PS4"

Could you provide a source that discusses the 'peak' bandwidth you're referring to? I'd like to find out more about this myself. The impression I'm getting is that the Xbox One 'can' achieve similar memory bandwidth to the PS4, but that overall its memory implementation is less flexible and has more restrictions on achieving that bandwidth parity.

And as for the ROPs? It's funny you should say that, because that's exactly Anand's theory for the initial resolution discrepancies between the two systems:

"Typically AMD’s GPU targeting 1080p come with 32 ROPs, which is where the PS4 is, but the Xbox One ships with half that. The difference in raw shader performance (12 CUs vs 18 CUs) can definitely creep up in games that run more complex lighting routines and other long shader programs on each pixel, but all of the more recent reports of resolution differences between Xbox One and PS4 games at launch are likely the result of being ROP bound on the One."
I didn't make it clear; the main fact I was referencing from the AT article (from memory) was that ROP performance scales with GPU clock (something I had long forgotten). I just don't think the XB1 has enough shaders to be primarily ROP bound. Forza 5 certainly doesn't seem to be crippled by the lack of ROPs, and it's doing 1080p at 60. Do you think 7790 and R7 260 cards are crippled by 16 ROPs? I don't think they have enough shaders/texture units to justify 32. Frankly, neither does the Xbox One.
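
A quick sketch of the "ROPs scale with clock" point. The 853MHz/800MHz clocks are the figures widely reported at launch, so treat them as assumptions rather than anything from this thread:

# Theoretical pixel fill rate = ROP count * GPU clock.
def fill_rate_gpix_per_s(rops, clock_mhz):
    return rops * clock_mhz / 1000.0

xb1 = fill_rate_gpix_per_s(16, 853)  # ~13.6 Gpixels/s (Xbox One, post clock-bump)
ps4 = fill_rate_gpix_per_s(32, 800)  # ~25.6 Gpixels/s (PS4)
print(f"XB1: {xb1:.1f} Gpix/s, PS4: {ps4:.1f} Gpix/s ({ps4 / xb1:.2f}x)")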

If you look up a block diagram of the XB1, you'll see that the GPU has full access to both eSRAM and main RAM. I think it's pretty obvious that access to and from the GPU on one bus doesn't lock down the other one. You can do some work using the eSRAM, and put everything else in main memory. How much weight the eSRAM can take off the GPU remains to be seen, and making good use of it will require talent.

I'm not sure whether CPU access to the eSRAM (as unnecessary as it would probably be) would require a copy to main memory or not, but the Move engines can be used in such cases and take weight off the CPU and GPU. They can shuffle around lots of data independently; no need to waste cycles.
 