Reports are surfacing about Lockhart, a new budget, next-gen Xbox that Microsoft's supposedly crafting.
Report: New, Cheaper 1440p Next-Gen Xbox in the Works: Read more
It strikes me as odd that they'd make it less powerful than the Xbox One X, in any respect. And from the sound of it, the GPU performance is indeed worse.
IMO, the smart thing to do would just be to deliver a cost-reduced version of the One X, maybe with a few updates (like newer CPU cores & improved GPU features). Then again, they could simply target the One X and know that anything which performs well on it @ 1440p would also work well on Lockhart.
> It strikes me as odd that they'd make it less powerful than the Xbox One X, in any respect. And from the sound of it, the GPU performance is indeed worse.

Teraflops are not necessarily directly comparable between different architectures, though, and they measure compute performance, not gaming graphics performance. An RX 580 is roughly a 6-teraflop card, and its GCN architecture is similar to what an Xbox One X uses. Yet the newer RX 5500 XT is typically a little faster despite being only a 5-teraflop card as far as compute performance is concerned, because its RDNA architecture offers more graphics performance relative to compute performance.

It's possible we could see things shift a bit further in that direction with RDNA2. The updated architecture will supposedly improve efficiency further, and part of that might involve trading away some more compute performance for a given level of graphics performance. New features like variable rate shading could also allow for some additional performance in games that support them. And this is all assuming that the rumored "about 4 teraflops" number is even accurate.
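Here's the arithmetic behind those teraflop figures, if anyone wants to sanity-check them (shader counts are the published specs; the clocks are approximate boost/game clocks, so treat the outputs as ballpark numbers):

```python
# FP32 compute: TFLOPS = 2 ops per clock (FMA) x shader count x clock (GHz) / 1000
def tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000

print(f"Xbox One X GPU: {tflops(2560, 1.172):.1f} TFLOPS")  # ~6.0
print(f"RX 580:         {tflops(2304, 1.340):.1f} TFLOPS")  # ~6.2
print(f"RX 5500 XT:     {tflops(1408, 1.845):.1f} TFLOPS")  # ~5.2
```

The point is that the same math spits out a lower number for the RDNA card even though, in actual games, it tends to deliver more performance per teraflop than the GCN parts.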
> not without games shipping with an alternate set of textures and detail settings to lower the load level...

Yes. PC games do it all the time. The devs will dial things in to run well out of the box.
> As a reminder to others getting confused (or deliberately spreading lies) by implying Teraflop and 'overall' performance are directly linked,

I don't think anyone in this thread is deliberately spreading lies, but the spec differences seemed large enough that I didn't expect RDNA's efficiency gains would necessarily cover them. However, I probably didn't account for the additional gains of RDNA2.
> please watch Mark Cerney's PS5 "Road to PS5" video...

Thanks for posting, and I have a lot of respect for Mr. Cerny, but 52:44 (+ ads) is a lot to ask of someone.
> A console aiming for 1440p (with possibly some heavy graphical settings turned down) would be a huge smart play by Microsoft. Besides, it's going to look great on a 1080p television anyways....

How many people do console gaming on a PC monitor, though? That's one thing that really jumped out at me about 1440p. I suspect game developers are going to do all of their testing at 4K and 1080p, with maybe a token run or two at 1440p and 720p to make sure they're not completely broken.
> How many people do console gaming on a PC monitor, though? That's one thing that really jumped out at me about 1440p. I suspect game developers are going to do all of their testing at 4K and 1080p, with maybe a token run or two at 1440p and 720p to make sure they're not completely broken.

I would bet many devs will offer options (like how some games let you favor fidelity vs. framerate) and/or automatically tweak things depending on output resolution. If you're hooked up to a 1440p or 4K panel, increasingly popular dynamic resolution may offer benefits vs. a 1080p panel. I don't want to overstate the potential benefits, but it may not be like times of yore, where a console game targeted a single resolution and set of quality settings and simply scaled the output resolution. But again, it will vary substantially by game and developer.

As for monitors, the main reason to use a PC-centric panel is response time. I've considered using a low-latency PC display combined with an OSSC for my classic consoles, but the cost always makes me reconsider. Maybe some day!
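To be clear about what I mean by dynamic resolution: it's basically a feedback loop on frame time. A rough sketch, where the budget, step size, and bounds are made-up illustration values rather than anything from a real engine:

```python
# Rough sketch of a dynamic-resolution controller: compare last frame's GPU time
# against the target budget and nudge the internal render scale up or down.
TARGET_MS = 16.7               # ~60 fps budget (illustrative)
MIN_SCALE, MAX_SCALE = 0.7, 1.0
STEP = 0.05

def update_render_scale(scale, last_frame_ms):
    if last_frame_ms > TARGET_MS:           # over budget -> render fewer pixels
        return max(MIN_SCALE, scale - STEP)
    if last_frame_ms < 0.9 * TARGET_MS:     # comfortably under -> claw quality back
        return min(MAX_SCALE, scale + STEP)
    return scale

# Targeting a 1440p output: at scale 0.9 the game renders ~2304x1296 internally,
# then upscales to 2560x1440 for the display.
scale = update_render_scale(1.0, 19.5)      # slow frame -> scale drops to 0.95
print(scale, int(2560 * scale), int(1440 * scale))
```

On a 1440p or 4K panel that feedback loop has more room to work with than on a 1080p panel, which is all I meant by "benefits."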
> In particular, I think the decreased RAM size is notable. From the sound of it, total physical RAM decreased from 12 GB (One X) to 8 GB (Lockhart). Though my original post didn't mention it, that was one of the factors motivating it.

Well, if they're using an SSD like the Series X, I don't feel the reduced RAM will be a substantial problem, especially with developers getting used to heavily leaning on the SSD on both Series X and PS5. So yeah, I still feel they'll be in the One X ballpark in terms of visuals, while enabling a better overall experience out of a smaller APU. The power and thermal benefits could be quite substantial too, so I expect this to be a simpler, smaller package.

My initial reaction matched others': I was like, why not just die-shrink the One X chip? But really, I think this is a better use of their resources, especially if it was being developed in tandem with the larger APU going in the Series X. RDNA (2?) and Zen (2?) will be wildly more efficient than the cat cores and GCN, and I bet the higher IPC (regardless of final core count) will chafe devs less when scaling down from the top model.
> I need to build a new PC (as the current one is going on 10 years), that Series "X" could satisfy me for 3 years. While I wait for new PC parts advancements and improvements in the coming years.

Why do you need to wait an additional 3 years to upgrade a PC that's already 10 years old? You can find plenty of benchmarks comparing multiple generations, and it seems well worthwhile to upgrade even a Sandy Bridge already (though I suspect you're on Nehalem; Sandy Bridge will be 10 years old in 2021).
That said, if you're only considering an Intel CPU, I'd have to agree that Comet Lake's high power consumption & corresponding cooling requirements make it rather unappealing. Still, you probably have only about 15 months until Intel launches its first 10 nm desktop CPU.
> I've considered using a low-latency PC display combined with an OSSC for my classic consoles,

IMO, the best scenario (though not very practical) would be to hack the console and replace its video signal generator with a digital output. I'm not sure how hard that would be on different consoles. I think it'd be easiest on those with a conventional framebuffer + discrete RAMDAC.
> if they're using an SSD like the Series X, I don't feel the reduced RAM will be a substantial problem.

Depending on how the RAM is used, it could be problematic to fall back on virtual memory. You're not going to get away with swapping out stuff that's needed from one frame to the next.
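Some quick napkin math on why per-frame swapping is a non-starter (the 2 GB per frame figure is just an example I picked; the 2.4 GB/s is the raw SSD throughput Microsoft has quoted for the Series X):

```python
# If even a modest slice of the working set had to be paged in every frame,
# the required bandwidth dwarfs what any console SSD can deliver.
frame_rate = 60                 # frames per second
paged_per_frame_gb = 2.0        # assumed example: 2 GB of assets swapped per frame
ssd_raw_gbps = 2.4              # Series X raw SSD throughput (GB/s)

required_gbps = frame_rate * paged_per_frame_gb
print(f"Required: {required_gbps:.0f} GB/s vs. available: {ssd_raw_gbps} GB/s")
# Required: 120 GB/s vs. available: 2.4 GB/s
```

Streaming works because you load things seconds ahead of when they're needed, not within the same frame.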
> My initial reaction matched others': I was like, why not just die-shrink the One X chip?

A big problem with this approach is memory. The One X had a 384-bit GDDR5 setup, which adds cost in both the memory and the motherboard. So, I get why they wanted to use a cheaper 256-bit interface, but a sad consequence is RAM size. I wonder if they really couldn't have contracted some 12-gigabit GDDR6 dies.
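The arithmetic that backs this up, for anyone curious (GDDR5/GDDR6 chips each present a 32-bit interface, so the bus width fixes the chip count; the 12 Gb die option is just the hypothetical I mentioned):

```python
# Bus width -> chip count -> capacity (ignoring clamshell mode, which doubles
# the chip count per channel without adding bus width).
def capacity_gb(bus_width_bits, die_gbit):
    chips = bus_width_bits // 32
    return chips * die_gbit / 8          # gigabits -> gigabytes

print(capacity_gb(384, 8))    # One X: 12 chips x 8 Gb  = 12 GB
print(capacity_gb(256, 8))    # Lockhart (rumored): 8 chips x 8 Gb = 8 GB
print(capacity_gb(256, 12))   # hypothetical 12 Gb dies: 8 chips x 12 Gb = 12 GB
```

So on a 256-bit bus with standard 8 Gb dies, 8 GB is what you get, and going bigger means either clamshell or denser dies.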
> Yes, I'm waiting to build a new PC with better parts than are currently out. Improvements in ray tracing with better 4K FPS without shelling out nearly $1,000 for NVIDIA RTX cards. Some refined Navi cards in the next line of AMD cards should do the trick. Faster SSDs with more TB for under $200. A newer ultra-wide 3440x1440 144 Hz monitor with OLED or (MicroLED? pls?).

That's all stuff you can add incrementally, if you just chose to upgrade your CPU, motherboard, and memory today.
> I don't need, I want a new PC.

Yeah, a Sandy Bridge i7 is still meeting my needs. I can't really justify upgrading it this year.
> PlayStation doesn't offer this, nor does PlayStation offer support for older games.

Didn't they say the PS5 would support all PS4 games?
> IMO, the best scenario (though not very practical) would be to hack the console and replace its video signal generator with a digital output. I'm not sure how hard that would be on different consoles.

I'm thinking mainly of 8- to 32-bit era consoles, in which case you might just take the raw RGB signal and bypass the video encoder chip. There are sometimes mods that do just that, resulting in an improved video output (to various degrees), but it's still analog and needs further processing for/by a modern display. Even those aren't generally worth it, IMO - well, unless you had a particular model with a dodgy video output chip and/or shoddy board layout to start with, as they often used various components and board revisions over a console's lifetime. In many cases the stock internals are good enough, and/or the system just needs fresh capacitors.
> Depending on how the RAM is used, it could be problematic to fall back on virtual memory. You're not going to get away with swapping out stuff that's needed from one frame to the next.

Ah, I should have been clearer. I was referring to games built from the ground up with the SSD in mind. For example, they develop a title which utilizes the SSD for streaming on both the Series X and Series S (scaled down for the S as appropriate), and then also build versions for older Xbox models that can't rapidly stream and decode compressed assets. There's no way they'll use the SSD as virtual memory and attempt to straight-up run One X games on it. They will have to run original Xbox One binaries and assets for older titles, for backwards compat.
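To illustrate the distinction, here's a purely conceptual sketch of what I mean by explicit streaming, as opposed to letting the OS page memory in on demand (none of these names correspond to a real engine or console API):

```python
import collections

# Conceptual sketch: with explicit streaming, the game decides ahead of time what
# to pull off the SSD, instead of the OS paging memory in the moment it's touched.
class AssetStreamer:
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.resident = collections.OrderedDict()   # asset_id -> size, in LRU order

    def prefetch(self, asset_id, size, load_fn):
        """Load an asset the game predicts it will need soon (e.g. the next zone)."""
        if asset_id in self.resident:
            self.resident.move_to_end(asset_id)      # already loaded; mark as fresh
            return
        # Evict least-recently-used assets until the new one fits the RAM budget.
        while self.resident and sum(self.resident.values()) + size > self.budget:
            self.resident.popitem(last=False)
        load_fn(asset_id)            # SSD read happens well before a frame depends on it
        self.resident[asset_id] = size

streamer = AssetStreamer(budget_bytes=6 * 2**30)     # pretend 6 GB is free for assets
streamer.prefetch("forest_zone_textures", 800 * 2**20, lambda a: print("loading", a))
```

The game controls what is resident and when reads happen, which is exactly why it scales down cleanly to a smaller RAM pool, and why old One X titles built without that assumption just run with their original assets instead.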
> I'm thinking mainly of 8- to 32-bit era consoles,

By the time of the 32-bit era, I think consoles were framebuffer-based. So, it'd be cool if you could just swap out the RAMDAC for an HDMI output section. Of course, I'm assuming they had a separate RAMDAC. Probably by the next generation, it was already integrated into the GPU.
> In many cases the stock internals are good enough, and/or the system just needs fresh capacitors.

Do the capacitors fail only with use, or is sitting on the shelf for long enough all that's required for them to degrade?