News: Xbox Series X is Microsoft's Next Console

In any case, you probably can't just add up the wattage of a desktop graphics card and CPU to estimate the power draw. ... Consoles tend to be more efficient for a given level of performance, in part because the GPU and CPU may be integrated into a single chip, and all components will likely reside on the same circuit board.
I think the main thing you're saving is PCIe. However, I think that's probably in the neighborhood of a few watts.

The other thing you're saving is having to power only one set of memory. I already thought about that, and decided that system memory is likely not included (for the most part) in CPU TDP numbers.

So, it seems to me that adding those two numbers will likely get you closer than any other methodology, based on the information that's available. Note that I didn't include the SSD or other peripherals (WLAN, USB), just to stay on the conservative end.
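To make that concrete, here's a minimal back-of-envelope sketch of the method I'm describing. Every wattage figure is a hypothetical placeholder, not an actual spec:

```python
# A minimal sketch of the back-of-envelope method described above.
# Every wattage figure here is a hypothetical placeholder, not an actual spec.

cpu_tdp_w = 65      # assumed 8-core Zen 2 TDP; system RAM assumed NOT included in this number
gpu_tdp_w = 225     # assumed Navi-class board power, which already covers its GDDR6
pcie_saving_w = 5   # roughly "a few watts" saved by dropping the PCIe slot interface

# With one unified memory pool, the GPU's memory budget covers it once,
# so nothing extra needs to be added on the CPU side.
# SSD, WLAN, USB, etc. are deliberately left out, to stay conservative.
estimate_w = cpu_tdp_w + gpu_tdp_w - pcie_saving_w
print(f"Rough console power estimate: ~{estimate_w} W")
```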

A lot of power could also be saved by giving the GPU additional graphics cores while keeping the clock rates reasonable. I would expect power draw to be roughly similar to the One X, but it's hard to say for sure.
Sure, why not just stuff underclocked Radeon VII dies in there? That's basically what you're saying. Jack up the price a couple hundred more $, just so you can save a few watts. Good one!

I'm exaggerating, but really... we have to assume MS is simply trying to optimize perf/$. Their only constraint on power is going to be reliability and obviously not spending so much more on PSU, VRM, and cooling that it would offset any savings on die area.

Also, keep in mind that 8x Zen2 cores are going to add substantial area and cost. The Jaguar cores they're replacing are tiny, by comparison. And MS still needs to add ray-tracing hardware, as well. So, I think it's unlikely they'll go much above 40 CUs, and some of those are going to be disabled, for better yield.

The 5700XT when boosting can provide around 10 TFlops of 32 bit compute performance.
If you're taking "boost" numbers, then be sure to compare against comparable "boost" numbers. I think it's usually more informative to compare base performance, since the definition of "boost" can vary.

However, as far as actual performance gains are concerned, that's not necessarily an accurate way to measure graphics performance,
Agreed.

it sounds like the console will be using an updated version of the RDNA architecture.
I wouldn't count on the baseline being any different than Navi. I think they probably just took Navi and added ray tracing, plus maybe a couple other MS-specific features (as they sometimes do). Using Navi's successor would've been ready too late for them to make their target release date.

Remember how the XBox One X launched nearly 6 months after Vega, but still used Polaris?

Another explanation is that there's simply a stack of fins, perhaps much like you would find on a typical graphics card, connected to the CPU/GPU by heat-pipes and maybe a vapor chamber.
Exactly. Cooling 300W on air is not a big deal, for GPUs. That is vapor chamber territory, however.
 
The specs seem fine. What do you want it to have? Anything more would likely be impractical from a cost standpoint. Not many people are likely to spend $1000 on a console.

As for the name, if the rumors are to be believed, there may be two versions of the console at launch, hence why they are using "series" in the name here. For example, there could be a version targeting 4k, and another targeting 1080p. If that's the case, the word "series" might get replaced by something else in the actual product names.

The specs are fine. That's the problem. They're an iterative improvement rather than the kind of generational leap that needs to stay relevant for 5+ years. It's fine for the iPhone model of annualized releases... but as a consumer looking for a console, I don't want to be pressured into buying a new Xbox every 1-2 years. That's not how the console market works. There's a word for people who want to constantly upgrade their hardware to get the newest minor improvements and prettiest graphics: PC gamers. So I don't think the console market can or will sustain that business model. If the PC market's stagnant hardware means Microsoft couldn't make the price/performance math work on a more significant upgrade, then it probably isn't actually a good time for a "next gen" console. At least the crypto craze is over and memory prices have dropped a lot since last year. Hopefully that means they'll have a chance to re-examine that spec - and after all, it's just a rumor.
The rumored upgrade, to me at least, would be a disappointment from a generational perspective - which makes me doubt whether this really is a "next gen" console at all, or if Microsoft is trying a new release cadence.
Or, maybe AMD's ray tracing tech is more significant than the low-end RTX experience that I would probably expect and it really is good enough.

Although, if the rumored 13GB of unified memory for games is true, then that's going to be a significant bottleneck on any game that wants both scene complexity and 4k. The 16GB total pool is, what, only about a third more than the 12GB Xbox One X? The Xbox One had 16x the memory of the 360 - that's what a generational leap should look like.
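Just to put numbers on it, a quick sanity check on those ratios, using the memory sizes discussed in this thread:

```python
# Generational memory jumps, using the sizes discussed in this thread.
mem_gb = {"Xbox 360": 0.5, "Xbox One": 8, "Xbox One X": 12, "rumored next-gen": 16}

print(f"360 -> One:        {mem_gb['Xbox One'] / mem_gb['Xbox 360']:.0f}x")
print(f"One -> next-gen:   {mem_gb['rumored next-gen'] / mem_gb['Xbox One']:.0f}x")
print(f"One X -> next-gen: +{(mem_gb['rumored next-gen'] / mem_gb['Xbox One X'] - 1) * 100:.0f}%")
```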

Not really. Bear in mind, the Xbox One X only came out 2 years ago as a half-generational premium version of the console targeted at those wanting to game at "4K". It's not like computer hardware is massively improving from one year to the next either, so I wouldn't expect much more than that.

And as far as RAM is concerned, PC games have only recently started to significantly benefit from having more than 8GB of system RAM in demanding titles, and that's for the game, OS and any background applications combined. So while putting 16GB into a gaming PC is now the norm, a good chunk of that memory tends to still be sitting unused while gaming. And sure, that's unified memory also serving as VRAM, but that also likely means that games should be able to make more efficient use of it, rather than having to transfer data in and out of graphics memory.

Part of my point in pointing out the leap from 360 to (launch) Xbox One is that even doubling from 8GB to 16GB isn't on par with the improvements we've seen in most other console generations. Personally, I believe the reason 8GB has lasted so long in gaming PCs is because that is what is available to consoles, and games are primarily targeted to consoles. That, and also because they are often gaining 8GB or more graphics memory when slotting in a mid-to-high-range GPU. I think most midrange gaming PCs right now (even targeting 1440p) are going to fall into the 20-24GB range for total system memory, which I think would be a better target for a console. A high-end PC built to target 4k is going to have at least 27GB (16GB of system RAM plus the 11GB on a 1080 Ti / 2080 Ti), but more likely 48GB, because an enthusiast spending that much on a GPU is probably going all in on all the specs.

Personally I first upgraded my PC to 16GB 6 years ago - and if Microsoft sticks to the generational model, in 6 years they'll still be using 16GB for their entry level console.
I'm sure developers will find creative ways to get around that limitation when developing console games, but I do believe it would have a similar effect in holding back the PC market. I'm not super excited to be thinking about how in 2025 my high-end PC gaming might be held back by what will be, at that point, a 12-year-old spec for me.

At the very least, I think a 16GB pool of memory (13GB usable) would be enough of a bottleneck that it could limit game design, so it's worth talking about. I'm definitely in the "I hope that's not accurate" camp. Even if Microsoft tossed in 4GB of cheap low-end DDR4 for the OS, it would probably be enough to make the RAM a non-issue, since I doubt the OS benefits much from being in GDDR6.
So we'll wait and see what the real specs are (I have a feeling they'll wait for the PS5 to announce first), but it's fun to think about.
 
The specs are fine. That's the problem. They're an iterative improvement rather than the kind of generational leap that needs to stay relevant for 5+ years.
They're what's practical, and a huge leap over the original XBox One. So, if that's your baseline, then it's a proper improvement.

If you're comparing with the XBox One X, then they might've done well to wait another year, if there were further process gains to be had by doing so. It certainly won't be the kind of leap over the One X that it was over the baseline or S model. Not on the GPU side of things, at least.

even doubling from 8GB to 16GB isn't on par with the improvements we've seen in most other console generations.
I think 16 is the minimum, but it could be 24. 12 would be pretty sad, though. It's not going to be 32 - you might as well come to terms with that, right now.

I think putting 8 in the One was a bold move. There's no way they can sustain such huge leaps in memory size, especially when we're now talking about much more expensive GDDR6.
 
Also, keep in mind that 8x Zen2 cores are going to add substantial area and cost. The Jaguar cores they're replacing are tiny, by comparison. And MS still needs to add ray-tracing hardware, as well. So, I think it's unlikely they'll go much above 40 CUs, and some of those are going to be disabled, for better yield.
An 8-core Zen2 chiplet is quite small itself at only around 75mm2, not counting the separate IO chip. A 5700XT is around 250mm2, so add them together and that's only about 325mm2, plus whatever they do for IO. The original Xbox One and One X APUs are around 360mm2 for comparison. That kind of makes me wonder if the new console will be using a monolithic processor, or if they will combine a standard Zen2 chiplet with a separate graphics chip and IO on an interposer.
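Rough math on those area figures (all approximate, and ignoring whatever the I/O blocks add):

```python
# Ballpark die-area comparison from the figures quoted above (all approximate).
zen2_chiplet_mm2 = 75    # 8-core Zen 2 CCD, not counting the separate I/O die
navi10_mm2 = 250         # RX 5700 XT class GPU die
xbox_one_apu_mm2 = 360   # original Xbox One / One X APUs, for reference

combined_mm2 = zen2_chiplet_mm2 + navi10_mm2
print(f"CPU chiplet + GPU die: ~{combined_mm2} mm^2, plus whatever I/O adds")
print(f"Previous-gen APUs:     ~{xbox_one_apu_mm2} mm^2")
```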

Remember how the XBox One X launched nearly 6 months after Vega, but still used Polaris?
Mid-range Vega cards never did launch though. I get the impression Vega might not have been all that well suited to that performance range, and they weren't going to put expensive HBM memory in a console. At the very least we know the new console is going to have some form of hardware raytracing support, which AMD hasn't incorporated into their existing desktop parts yet.

The specs are fine. That's the problem. They're an iterative improvement rather than the kind of generational leap that needs to stay relevant for 5+ years. It's fine for the iPhone model of annualized releases... but as a consumer looking for a console, I don't want to be pressured into buying a new Xbox every 1-2 years. That's not how the console market works. There's a word for people who want to constantly upgrade their hardware to get the newest minor improvements and prettiest graphics: PC gamers. So I don't think the console market can or will sustain that business model. If the PC market's stagnant hardware means Microsoft couldn't make the price/performance math work on a more significant upgrade, then it probably isn't actually a good time for a "next gen" console. At least the crypto craze is over and memory prices have dropped a lot since last year. Hopefully that means they'll have a chance to re-examine that spec - and after all, it's just a rumor.
Based on the numbers going around, it sounds like the graphics performance (of at least the leading model, if there are more than one) should be upward of 8x the performance of the original Xbox One, the CPU and memory performance should be substantially improved, and there should be a massive improvement to storage performance. By the time the new console comes out, it will have been 7 years since the Xbox One launched, so I wouldn't say they are rushing a new generation of consoles every 1-2 years. The Xbox One X was a mid-generation improvement to the original's hardware, but aside from running games at higher resolutions and adding some graphical embellishments here and there, developers are still limited by the original's hardware.

Part of my point in pointing out the leap from 360 to (launch) Xbox One is that even doubling from 8GB to 16GB isn't on par with the improvements we've seen in most other console generations. Personally, I believe the reason 8GB has lasted so long in gaming PCs is because that is what is available to consoles, and games are primarily targeted to consoles. That, and also because they are often gaining 8GB or more graphics memory when slotting in a mid-to-high-range GPU. I think most midrange gaming PCs right now (even targeting 1440p) are going to fall into the 20-24GB range for total system memory, which I think would be a better target for a console. A high-end PC built to target 4k is going to have at least 27GB (16GB of system RAM plus the 11GB on a 1080 Ti / 2080 Ti), but more likely 48GB, because an enthusiast spending that much on a GPU is probably going all in on all the specs.
I kind of doubt demanding games will be targeting native 4K though. Consoles usually tend to render at lower resolutions and upscale, and seeing as improvements have been appearing for upscaling tech on the PC side of things recently, we'll undoubtedly see that being heavily used on the new consoles as well, particularly since raytracing would likely be too demanding for native 4K.

And again, I think a big thing you are not accounting for is that games won't have to keep as much data in memory if they know they can quickly load it off an SSD on an as-needed basis, something current console and PC games don't really take advantage of. In fact, I believe the Xbox One X only has a 5400RPM 2.5" laptop drive with a maximum throughput of just 60MB/s, while the original Xbox One tops out around 40MB/s, so the improvements to storage performance should provide a huge uplift to the speed at which data can be loaded. The NVMe drive in the new console might offer around 40x the maximum performance of the prior generation's storage, if you're interested in big numbers for generational gains. And the significant improvements to per-core CPU performance and the higher thread count should also help maintain smooth background loading. This also means developers may make games that make more efficient use of SSD storage on PCs as well, though some games may still stick with methods that provide reasonable performance on hard drives, since SSD storage still isn't guaranteed on PCs.
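As a rough illustration of that ~40x figure, assuming a hypothetical ~2.4 GB/s NVMe drive (the actual drive is unknown):

```python
# Rough storage uplift implied by the numbers above. The NVMe figure is an
# assumed ~2.4 GB/s drive, picked only to illustrate the "~40x" claim.
original_xbox_one_mb_s = 40   # original Xbox One HDD, per the post
one_x_mb_s = 60               # Xbox One X 5400 RPM laptop drive, per the post
assumed_nvme_mb_s = 2400      # hypothetical mid-range NVMe SSD

print(f"vs. One X HDD:         ~{assumed_nvme_mb_s / one_x_mb_s:.0f}x")
print(f"vs. original Xbox One: ~{assumed_nvme_mb_s / original_xbox_one_mb_s:.0f}x")
```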
 
An 8-core Zen2 chiplet is quite small itself at only around 75mm2, not counting the separate IO chip. A 5700XT is around 250mm2, so add them together and that's only about 325mm2, plus whatever they do for IO.
That I/O die is not small. Even accounting for the node difference (which is really an area delta of about 2x, not the 4x suggested by the node names), you have to account for L3 cache, USB, and a few PCIe lanes for SSD and network.

The original Xbox One and One X APUs are around 360mm2 for comparison.
I'm pretty sure 7 nm wafers cost more than 28 nm did, even after adjusting for inflation. I think you can't assume it's going to be the same area. Certainly, I doubt it's going to be much larger.

That kind of makes me wonder if the new console will be using a monolithic processor, or if they will combine a standard Zen2 chiplet with a separate graphics chip and IO on an interposer.
While going multi-die could improve yield, it's going to add costs elsewhere.

Mid-range Vega cards never did launch though. I get the impression Vega might not have been all that well suited to that performance range,
The fact that it went into APUs shows it didn't depend on HBM2, in order to outperform Polaris.

The lack of a Vega mid-range card doesn't mean that Vega wasn't faster than Polaris, in that range. It just means the performance delta wasn't enough to justify another generation of GPUs. However, given that MS was going to update their GPU, you'd think they would just go with the best available. In fact, the official word from MS/AMD was that Vega wouldn't have been ready, in time. Maybe they're lying, but it makes a lot of sense that AMD would want to get their GPU tech fully debugged, before making it available to partners. So, you're inevitably going to have some lag.

At the very least we know the new console is going to have some form of hardware raytracing support, which AMD hasn't incorporated into their existing desktop parts yet.
That could be a half-generation feature, or perhaps a custom design that MS and Sony each individually funded.
 
Sure, why not just stuff underclocked Radeon VII dies in there? That's basically what you're saying. Jack up the price a couple hundred more $, just so you can save a few watts. Good one!
Yeah, I don't think they'll use anywhere NEAR that number of CUs, but the idea is to run enough CUs to hit your target performance, but do so at a relatively high per-CU efficiency, rather than run fewer CUs at aggressive clocks/voltage. It depends on the efficiency curve of the design and process in question. There's a lot to be said for reducing voltage for a given die to create "mobile" or "low power" variants. AMD often tunes their desktop parts (especially the performance variants) aggressively, so they tend to be pushed over the efficiency sweet spot. For the GPU and CPU cores alike, MS could very well trim clocks slightly and cut voltage and make some substantial net gains. As you said, cost is a big factor too, so it's all a balancing act.

I don't think they'll actually need 40 CUs to hit their goal anyway. With RDNA they could use fewer and still hit their 2X GPU performance goal. Personally, though, I still think they'll be in the 250-300W range, since they are also throwing in a RT hardware block. Hopefully more capable than current-gen Nvidia hardware, but I have my doubts... gets expensive fast!
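To illustrate the efficiency-curve argument, here's the usual first-order approximation: dynamic power scales roughly with frequency times voltage squared, so a modest clock/voltage cut buys a disproportionate power saving. The numbers below are purely illustrative:

```python
# First-order approximation: dynamic power ~ C * V^2 * f. Numbers are illustrative only.
def relative_power(freq_scale: float, volt_scale: float) -> float:
    return freq_scale * volt_scale ** 2

# e.g. giving up ~8% clock in exchange for ~10% less voltage:
print(f"Relative power: {relative_power(0.92, 0.90):.2f}")  # ~0.75, i.e. roughly 25% less power
# Spread the same work across a few more CUs and net performance can hold
# steady while total power drops -- the balancing act described above.
```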
 
MS could very well trim clocks slightly and cut voltage and make some substantial net gains. As you said cost is a big factor too, so it's all a balancing act.
There are a lot of things they could do, but my point was that their incentive is to optimize perf/$, and nothing else. So, we should see clocks that are above the efficiency sweet spot, just as you said AMD usually tunes its desktop products. Yields, reliability, cooling requirements, plus added PSU and VRM costs are the only things limiting how high they can go with clocks.

I don't think they'll actually need 40 CUs to hit their goal anyway. With RDNA they could use less and still hit their 2X GPU performance goal.
Surprisingly, the XBox One X used 36 CUs out of 40 on die (4 disabled, for yield). My guess is that they'll stay in that ballpark.

Personally though I still think they'll be in the 250-300W range
Yeah, I'm saying at least 250, but I think probably closer to 300.

Still, that case looks designed to move some heat! I'm just imagining all the memes about people using it as a hand warmer and drying their clothes above it. You could put some aluminum foil on a wire rack above it, and use it to warm up left-overs from the freezer.

Anyway, between die area and TDP, I think we can say with a fair degree of certainty that it won't be much faster than a RX 5700 XT, if at all.
 
That could be a half-generation feature, or perhaps a custom design that MS and Sony each individually funded.
I suspect AMD was likely already working on raytracing hardware. Nvidia first announced RTX the better part of 2 years ago, and AMD likely had information about what they were planning beforehand, so they have likely been working on their own raytracing tech for a while. There are rumors that a high-end Navi card is coming next year that may support the feature as well. Perhaps we'll hear more about that at CES in a few weeks.

I'm pretty sure 7 nm wafers cost more than 28 nm did, even after adjusting for inflation. I think you can't assume it's going to be the same area. Certainly, I doubt it's going to be much larger.
Sure, but if they are running standard Zen2 chiplets at lower clocks, they might be able to use chips that wouldn't have been suitable for their existing processors. Plus, they might be able to use another process for things like IO, like they do with their current desktop and server parts, if they utilize a similar multi-chip approach.

In any case, Microsoft themselves said the console would offer "over 8 times" the GPU power of an Xbox One, and "two times" that of an Xbox One X, so that would be right about 5700 XT territory.
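Putting numbers on that, using the published TFLOPS figures for the existing consoles and the 5700 XT's boost-clock figure:

```python
# Microsoft's "over 8x the Xbox One" and "2x the One X" claims, against published FP32 figures.
xbox_one_tflops = 1.31
xbox_one_x_tflops = 6.0
rx_5700_xt_tflops = 9.75   # boost-clock figure

print(f"8x Xbox One:   {8 * xbox_one_tflops:.1f} TFLOPS")    # ~10.5
print(f"2x Xbox One X: {2 * xbox_one_x_tflops:.1f} TFLOPS")   # 12.0
print(f"RX 5700 XT:    {rx_5700_xt_tflops:.2f} TFLOPS")
# Both targets land in roughly 5700 XT territory, as noted above.
```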
 
I suspect AMD was likely already working on raytracing hardware. Nvidia first announced RTX the better part of 2 years ago, and AMD likely had information about what they were planning beforehand, so they have likely been working on their own raytracing tech for a while.
I actually have some inside info that AMD was blindsided by the development. They found out when MS contacted them about the ray tracing extensions in D3D, but after they were already baked. So, that probably gave AMD no more than 6 months notice, prior to the launch of the RTX line, whereas an evolutionary new generation of GPUs probably takes at least 2 years until graphics cards start shipping.

And these companies can't just turn on a dime - everyone in the design teams is already working to meet existing commitments. The only way you could quickly free up resources to work on it would be to cancel a planned product that's already in progress - a project with likely customer commitments and for which the RoI case has already been made. So, it would be a costly decision, and not just considering the sunk costs.

Sure, but if they are running standard Zen2 chiplets at lower clocks, they might be able to use chips that wouldn't have been suitable for their existing processors.
I think 7 nm yields can't be that bad. Plus, it adds cost to go multi-chip. So, I think the upside would have to be really big, to justify it.

I do expect their Zen2 cores to run at slightly lower clocks than we see on the desktop. Both for yield reasons and perhaps to save some power budget for the GPU.

they might be able to use another process for things like IO, like they do with their current desktop and server parts, if they utilize a similar multi-chip approach.
Its I/O die doesn't have to do nearly as much as in a PC, though. Much reduced PCIe lane count, probably one SATA port for (optional?) optical drive, some USB. The memory controller might need to be 7 nm, due to 256-bit+ GDDR6, so that's another function it won't be doing.

Once you cut it down like that, the benefits of putting I/O on a separate, cheaper node dwindle, yet you still have to foot the bill for the interposer and multi-chip assembly.