> That CPU can't run a decent GPU... or a decent PC. An i3 would be better.

Harsh.
Exactly. "Competent" and "capable" are synonyms; that's my point. Nowhere does the word "capable" suggest that it will provide significantly more than the minimum required to deliver acceptable gaming performance. That's why we used the word. Sorry to pull out the dictionary, but "competent" and "reasonable" are perfect ways to describe an iGPU that can't do 60 fps at anything but minimum detail levels (if then), while costing more than a setup with even an entry-level dGPU that delivers far more performance (as pointed out at the end of the article, and in most others as well), and with a total system cost that exceeds a console's.
> Those LGA-1700 & AM4 recommendations might be a bit faster or cheaper right now, but they're completely dead-end systems on dead-end platforms, and that's a SIGNIFICANT buying consideration for a new PC builder/owner!

This has been talked about so many times: they're only dead-end platforms if you intend to upgrade within 2-3 years and aren't willing to stay on what will then be an older platform.
> AMD's chiplet-based processors not only have one large central die on a slightly less efficient node, but they also have to spend quite a bit of their power budget shuffling data between the dies. Moving data across the Infinity Fabric consumes about 1.5 picojoules per bit (pJ/b), whereas on-die data transfers require roughly 0.1 pJ/b. That extra power consumption adds up for the chiplet-based designs.

Such numerical claims tend to attract scrutiny, since they're easy to sanity-check, and doing so can sometimes turn up results that defy such intuitive explanations.
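As a quick sanity check of that kind, here's the arithmetic in Python; the 64 GB/s of cross-die traffic is an assumed workload chosen purely for illustration:

```python
# Power cost of cross-die vs. on-die data movement, using the per-bit
# energies quoted above. The 64 GB/s traffic level is an assumption.
fabric_pj_per_bit = 1.5   # Infinity Fabric (off-die), pJ/bit
on_die_pj_per_bit = 0.1   # on-die transfer, pJ/bit

traffic_bits_per_s = 64e9 * 8   # assumed 64 GB/s of cross-die traffic

fabric_w = traffic_bits_per_s * fabric_pj_per_bit * 1e-12
on_die_w = traffic_bits_per_s * on_die_pj_per_bit * 1e-12
print(f"off-die: {fabric_w:.2f} W, on-die: {on_die_w:.2f} W, "
      f"penalty: {fabric_w - on_die_w:.2f} W")
# off-die: 0.77 W, on-die: 0.05 W, penalty: 0.72 W per 64 GB/s of traffic
```

At that assumed traffic level the fabric overhead is under a watt, so the claim only "adds up" once aggregate cross-die bandwidth reaches hundreds of GB/s, which is exactly the kind of result a sanity check can surface.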
> If you are going to buy a ~~console~~ laptop CPU, why not just buy a ~~console~~ laptop?

Fixed that for you!
> If you make the right selection of CPU now, you can keep the whole system, except for the GPU, for at least 5-6 years.

What about someone who doesn't have money for a faster CPU or a nice dGPU right now? Getting one of these APUs gets their foot in the door, but leaves an upgrade path where they could later add a dGPU and eventually swap in a faster CPU.
> 16 TOPS is weak. It has an efficiency advantage in laptops when it can be used. I don't think there's any chance that it can make upscaling or raytracing better.

I mostly agree, except about upscaling. Consider that you could run DLSS on a lowly RTX 2060, which had a nominal performance of about 42 TOPS (base freq.). However, if you look at the way Tensor "cores" were implemented by Turing, they're just tensor product pipelines fed by the same SIMD registers used to feed the CUDA cores and controlled by instructions from the same warps that drive the CUDA cores.
> Fixed that for you!

Gaming is not an integral part of a laptop; you can do laptopping with an iGPU that barely provides a display. But it is integral to consoles and gaming handhelds.
> What about someone who doesn't have money for a faster CPU or a nice dGPU right now? Getting one of these APUs gets their foot in the door, but leaves an upgrade path where they could later add a dGPU and eventually swap in a faster CPU.

Yes, people that don't have money will waste a lot of money on something they will not be using as soon as they upgrade (the iGPU).
> While I don't want to minimize the difficulty of upscaling, I'd point out that it only needs to beat the quality of conventional scaling algorithms, and I think that's an achievable target for such an amount of compute.

Well, this story wasn't out when I wrote that comment:
> I mostly agree, except about upscaling. Consider that you could run DLSS on a lowly RTX 2060, which had a nominal performance of about 42 TOPS (base freq.). However, if you look at the way Tensor "cores" were implemented by Turing, they're just tensor product pipelines fed by the same SIMD registers used to feed the CUDA cores and controlled by instructions from the same warps that drive the CUDA cores.

But this is why DLSS doesn't offer anything close to linear scaling in performance. Realistically, on something like an RTX 2060, the shader cores and execution units are likely at closer to 99% utilization for most workloads. Cut the resolution in half, then dedicate a chunk of the processing to DLSS, and you still come out ahead.
Therefore, if 85% of the GPU's time is spent on primary rendering, that leaves only about 15% for DLSS. 15% of 42 TOPS is just 6.3 TOPS, or less than half of what the XDNA engine in the 8700G packs. The key difference is that the XDNA engine probably isn't doing anything else, so you can devote most or all of its 16 TOPS to upscaling.
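Spelled out as a quick Python check (the 85/15 split being the assumption above):

```python
# The budget math above, spelled out. The 85% rendering share is an
# assumption, not a profiled figure.
rtx2060_tops = 42.0      # nominal Turing tensor throughput at base clock
render_share = 0.85      # assumed fraction of GPU time spent rendering

dlss_budget = rtx2060_tops * (1 - render_share)   # TOPS left for DLSS
xdna_tops = 16.0         # XDNA NPU in the 8700G

print(f"DLSS budget on the RTX 2060: {dlss_budget:.1f} TOPS")    # 6.3
print(f"XDNA vs. that budget: {xdna_tops / dlss_budget:.1f}x")   # ~2.5x
```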
While I don't want to minimize the difficulty of upscaling, I'd point out that it only needs to beat the quality of conventional scaling algorithms, and I think that's an achievable target for such an amount of compute.
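For concreteness, the conventional baseline in question is something like a bicubic or Lanczos resize, e.g. with Pillow (the filename here is just a placeholder):

```python
# The quality bar an AI upscaler has to clear: a plain Lanczos resize
# from 720p to 1080p. "frame_720p.png" is a placeholder input file.
from PIL import Image

frame = Image.open("frame_720p.png")   # assumed 1280x720 source frame
upscaled = frame.resize((1920, 1080), resample=Image.Resampling.LANCZOS)
upscaled.save("frame_1080p_lanczos.png")
```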
> Cut the shader ops in half by dropping the resolution and you have more like 25 TFLOPS to do DLSS upscaling.

Good points, but at least it sounds like we're in agreement that you don't need all of a GPU's tensor performance to do AI upscaling. Perhaps 16 TOPS would be enough to upscale from 720p to 1080p? That sounds like a plausible setup for an iGPU-based laptop.
> Good points, but at least it sounds like we're in agreement that you don't need all of a GPU's tensor performance to do AI upscaling. Perhaps 16 TOPS would be enough to upscale from 720p to 1080p? That sounds like a plausible setup for an iGPU-based laptop.

I've asked Nvidia about this in various ways over the years, and it never really says precisely how much compute is needed. Which, on the one hand, is understandable: it uses as much as it needs, based on the framerate. But at some level, there's a maximum throughput in FPS from the GPU for a given resolution upscale. I don't know if it's different for various games, though.
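One way to frame that ceiling is as an ops-per-pixel budget. A rough sketch, where the 60 fps target and full NPU utilization are both simplifying assumptions:

```python
# Ops-per-pixel budget for a 16 TOPS NPU upscaling to 1080p output.
# 60 fps and 100% utilization are simplifying assumptions.
npu_ops_per_s = 16e12                 # 16 TOPS
out_pixels_per_s = 1920 * 1080 * 60   # 1080p at an assumed 60 fps

ops_per_pixel = npu_ops_per_s / out_pixels_per_s
print(f"{ops_per_pixel:,.0f} ops per output pixel")   # ~128,600
# On the order of 1e5 ops per pixel: enough headroom, on paper, for a
# small convolutional upscaler.
```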
And those savings come before we add the extra costs associated with the AM5 platform. As recently as this month, AMD has clearly stated that it will continue to bring new value processors to the AM4 platform due to the continued high pricing of DDR5, hinting that this condition will persist for some time. While DDR5 pricing has fallen from the stratospheric heights we saw at launch, it remains significantly more expensive than DDR4, and all signs point to it jumping in price again due to market conditions.
> I think they're likely to skip desktop until they can offer versions of Strix Point and Strix Halo. That rumored high-end part with 16 cores, 40 graphics CUs, and a 50 TOPS NPU -- that would be a huge jump in APU performance, especially with extra speed unleashed by desktop wattage.

You think that will fit in an AM5 package? I wouldn't assume so, due to the 256-bit on-package LPDDR5X memory. And without that, Strix Halo probably isn't such an interesting product.
> You think that will fit in an AM5 package? I wouldn't assume so, due to the 256-bit on-package LPDDR5X memory. And without that, Strix Halo probably isn't such an interesting product.

There is a lot of room in the AM5 package. AMD is putting multiple chiplets in there. I think they'll find a way if they think there is any demand for the product.
> There is a lot of room in the AM5 package. AMD is putting multiple chiplets in there. I think they'll find a way if they think there is any demand for the product.

The socket likely lacks the 128 extra pins needed to support the wider DRAM bus Strix Halo exploits for bandwidth...
> There is a lot of room in the AM5 package. AMD is putting multiple chiplets in there. I think they'll find a way if they think there is any demand for the product.

I don't. Their APU dies aren't small, and I think Strix Halo needs 4 stacks of DRAM.
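For what it's worth, the bandwidth at stake is easy to put numbers on; in this sketch, the LPDDR5X-8000 and DDR5-6000 speeds are assumed configurations, not confirmed specs:

```python
# Peak memory bandwidth: (bus width in bytes) x (transfer rate).
# Both speed grades below are assumptions for illustration.
def peak_gb_per_s(bus_bits: int, mt_per_s: int) -> float:
    return (bus_bits / 8) * mt_per_s / 1000

print(f"256-bit LPDDR5X-8000 (rumored Strix Halo): {peak_gb_per_s(256, 8000):.0f} GB/s")
print(f"128-bit DDR5-6000 (typical AM5 build):     {peak_gb_per_s(128, 6000):.0f} GB/s")
# 256 GB/s vs. 96 GB/s: without the wide on-package bus, the big iGPU
# loses most of the bandwidth that would make it interesting.
```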