When multiple clients are sharing the same memory, there's usually some
statistical multiplexing, which means that the total required bandwidth is much less than the sum of peak bandwidth needed by each client.
I have personally seen many times how SATA ports get bottlenecked by a slow DMI link. Even the latest version of DMI is several times slower than the combined bandwidth of the lanes that come directly off the processor.
The Zen4 HX series has 28 PCIe 5.0 lanes, 24 of them usable. Compare that with the width of the southbridge link...
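Rough per-direction numbers, assuming DMI 4.0 is electrically equivalent to a PCIe 4.0 x8 link (theoretical figures, not measurements):

```python
# Usable per-direction bandwidth of a PCIe-style link with 128b/130b encoding.
def link_gbs(lanes, gt_per_s):
    return lanes * gt_per_s * (128 / 130) / 8  # Gbit/s -> GB/s

dmi4 = link_gbs(8, 16)   # DMI 4.0 ~ PCIe 4.0 x8    -> ~15.8 GB/s
cpu = link_gbs(24, 32)   # 24 usable PCIe 5.0 lanes -> ~94.5 GB/s
print(f"DMI 4.0 ~{dmi4:.1f} GB/s vs CPU lanes ~{cpu:.1f} GB/s ({cpu / dmi4:.0f}x)")
```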
The funniest thing is that none of the laptop manufacturers have used all of these Zen4 HX lanes; they are literally left hanging in the air, doing nothing. The reason is simple: the Zen4 HX memory bus is extremely weak, only 60-65 GB/s, while the attached devices together need at least 2 times more, and with headroom for the system and software, 3 times more if not 4. And here we smoothly arrive at the 256-bit Zen5 Halo controller with probably 200 GB/s. Bingo! Eureka!
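A back-of-the-envelope sketch of the theoretical peaks behind those figures; I'm assuming DDR5-5600 on the 128-bit Zen4 HX controller and LPDDR5X-8000 on a hypothetical 256-bit part, and sustained bandwidth always lands well below peak:

```python
# Theoretical peak = bus width in bytes * transfer rate.
def peak_gbs(bus_bits, mt_per_s):
    return (bus_bits / 8) * mt_per_s / 1000  # MT/s -> GB/s

print(peak_gbs(128, 5600))   # 128-bit DDR5-5600    -> 89.6 GB/s peak (~60-65 sustained)
print(peak_gbs(256, 8000))   # 256-bit LPDDR5X-8000 -> 256 GB/s peak (~200 sustained)
```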
The question is: why did AMD put these 28 lanes into Zen4 HX at all? Obviously just to show what it can do, because in real designs nobody needed them, given the extremely slow memory bus and the lack of PCIe 5.0 devices in laptops. They simply created an artificial "coolness" effect for this series that nobody can use in practice. And, apparently, to one-up Intel and its Raptor Lake HX...
Here's how it performs, in Arrow Lake:
There is no point in me citing those graphs; they are obvious and trivial. My point was that, ideally, the memory bus would be as fast as the L1 cache by sitting right next to the processor, like soldered memory (remember the context of our conversation). Every cache is therefore a crutch, and today's hierarchy of crutches only proves how unbalanced the x86 architecture has become relative to the performance of the cores and the peripherals. It is obvious that Intel will also be forced to move to a 256-bit (or 512-bit) controller in the HX series for the HEDT market, a year later than AMD, and then it will spread to the regular series. For now, if Halo really delivers 200 GB/s+, it will take an absolute lead over the HX series in intensive processing of heavy data arrays in memory. And naturally that will show up in game performance, which is what such series usually target.
Can you cite any references supporting this claim?
It is a purely empirical assessment based on my understanding of the problems of the x86 architecture, especially around iGPU blocks and output to high-resolution, high-refresh-rate screens.
...So, it'd be quite natural to distribute the GPU cores across multiple fabric ports, and with little downside.
I can't add anything except to repeat: such a scheme deprives the architecture of a memory bus that is equally universal for all devices, and it creates bottlenecks for certain classes of computation; in this case, heavy computation on the general-purpose CPU cores, since GPU cores cannot execute arbitrary code efficiently. They are tuned first and foremost for vector operations and pay large penalties for complex instructions and branchy code.
I hope the 256-bit Zen5 Halo controller will still give the CPU cores at least 80%+ efficiency, while dynamically and efficiently distributing bandwidth among all devices according to their needs, unlike the limitations of the Apple architecture.
At 32 bits per pixel, an 8k frame is 132.7 MB. At 60 Hz, that's 7.96 GB/s. That's something a modern APU can certainly manage and not even 1% of the memory bandwidth in dGPUs like a RTX 4090 or RX 7900 XTX.
It is empirically clear that freezes begin once the devices scanning out a video signal occupy more than about 20% of the memory bandwidth. Intel's datasheets directly recommend dual-channel memory for video decoding and 4K output. Why, if 22+ GB/s (a single channel of DDR4-3200+) is more than enough even for a pair of 8K@60 monitors? Yet in reality their iGPUs start freezing the screen already with 4K monitors on single-channel memory; these are proven facts, especially with old DDR4-3200.
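For reference, the raw scanout numbers against DDR4-3200, assuming one read of an uncompressed 32-bit frame per refresh (real scanout paths use compression and prefetch, so treat this as an upper-bound sketch):

```python
# Scanout bandwidth for an uncompressed 32-bit frame buffer.
def scanout_gbs(w, h, hz, bytes_per_px=4):
    return w * h * bytes_per_px * hz / 1e9

ddr4_1ch, ddr4_2ch = 25.6, 51.2   # theoretical peak GB/s for DDR4-3200
for name, w, h in (("4k@60", 3840, 2160), ("8k@60", 7680, 4320)):
    bw = scanout_gbs(w, h, 60)
    print(f"{name}: {bw:.2f} GB/s = {100 * bw / ddr4_1ch:.0f}% of 1ch / "
          f"{100 * bw / ddr4_2ch:.0f}% of 2ch DDR4-3200")
# 4k@60: 1.99 GB/s = 8% / 4%;  8k@60: 7.96 GB/s = 31% / 16%
```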
It is extremely undesirable for display output to take more than 15% of the bus, for a number of reasons. And you forget that fast VRAM is useless when the data actually comes from the CPU. And it always does. Only in games is this less significant; in other scenarios the chain processor -> memory bus -> sluggish PCIe bus -> VRAM is a serious limitation.
It is much better if system memory = VRAM and the CPU cores access VRAM directly, without the restrictions of the PCIe bus.
And with soldered memory, nothing prevents exactly that kind of multiplexing: integrate a 1024-bit HBM3+ controller and 32-64 GB of RAM into the processor package. A super-chiplet. We get very fast cores with fast system memory and direct hand-off to the iGPU cores when needed. The frame buffer (which is small even with triple buffering) can live separately on the iGPU, so that it does not interfere with the shared memory bus and with the data being processed jointly by the CPU and GPU cores.
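For scale, the peak of a single 1024-bit HBM3 interface, assuming the baseline 6.4 Gb/s per pin (HBM3E pushes toward ~9.6):

```python
# Peak bandwidth of one 1024-bit HBM3 stack at the baseline pin rate.
pins, gbit_per_pin = 1024, 6.4
print(f"~{pins * gbit_per_pin / 8:.0f} GB/s")  # ~819 GB/s per stack
```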
High cost + low demand would be my guess as to why 8k hasn't gone mainstream. For most people, 4k is plenty. For gamers, their GPUs long struggled to reach decent framerates even at 4k, so trying to do 8k would sound insane to them.
Most buyers on the planet are ignorant and do not understand what 8K (or, more precisely, 280+ ppi) on a screen of up to 32" would mean for them. And yet they can easily compare the screen of a smartphone with the screen of their monitor. The most amazing thing is that even you (an extremely experienced member of this forum, with many years here) do not understand this, judging by your statements. To see that I'm right, you only need to compare the text on your smartphone screen with the text on your sub-150-ppi monitor. I previously suggested that you take a screenshot of this forum thread in Chrome and post it here; I will show you clearly what the problem is, if you really don't see it right in front of you. I'm waiting for your screenshot from Chrome.
I remember how the same kind of people on various forums literally laughed at me, claiming that FHD on a 24" screen was enough for them; imagine their shock when they got the chance to work on a 24" 4K screen. But even that does not fully solve the low-ppi problem.
The main problem for the eyes is that, looking at a low-ppi screen, they constantly refocus between the pixels and the objects themselves. This leads to problems with lens accommodation and increased fatigue.
Starting at about 400 ppi this effect disappears: the human eye no longer distinguishes individual pixels. The picture becomes like the real world, almost analog.
I have a smartphone with 400+ ppi, and from 25 cm I can still see the difference against a 300-ppi smartphone screen, but beyond that distance there is none. And a person can easily end up 35-40 cm from a screen, especially with a laptop. So the minimum should be 300 ppi, and 400+ is better; that is effectively the final ppi for the eyes, beyond which the ppi race becomes meaningless. Only in VR, where the picture from micro-panels is magnified by the lens assembly, do you need ppi in the thousands.
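Those distances line up reasonably well with the usual 1-arcminute acuity rule of thumb; a quick sketch (the 1-arcminute figure is a convention, not a hard physiological limit):

```python
import math

# Minimum ppi at which a viewer with ~1 arcminute acuity stops resolving
# individual pixels at a given viewing distance.
def ppi_threshold(distance_cm, acuity_arcmin=1.0):
    d_in = distance_cm / 2.54
    theta = math.radians(acuity_arcmin / 60.0)   # arcmin -> radians
    return 1.0 / (d_in * theta)

for d in (25, 35, 40, 60):
    print(f"{d} cm: ~{ppi_threshold(d):.0f} ppi")
# 25 cm: ~350 ppi, 35 cm: ~250 ppi, 40 cm: ~218 ppi, 60 cm: ~146 ppi
```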
So, I clearly don't agree that they're garbage. I've even watched some streaming content on them, and find that 4k content downsampled to 1440p looks great!
You are confirming what I said. Yes, 4K downscaled with a bicubic filter to 2.5K will naturally look more or less fine, because 4K is oversampled relative to 2.5K, just as it is relative to FHD. But not ideal, because it is not a division by an integer.
But you cannot watch either 4K or FHD content at ideal quality on your monitor. Only 2.5K content would be ideal, and that practically does not exist in nature.
Only 8K, 4K, and FHD are universal: each is obtained from the others either by multiplying a lower resolution by an integer or by dividing a higher one by an integer. By the way, it always surprised me that 4K monitor manufacturers use bilinear interpolation when displaying FHD, deliberately spoiling the picture, even though the integer conversion between 4K and FHD is primitive, literally a couple of lines. Apparently it was some kind of market collusion that I don't understand. Only in 2019 did first Intel, and then NVIDIA, with great fanfare, roll out integer scaling of FHD on 4K panels as some kind of "know-how". It was very funny, because this is not the GPU's job but the monitor controller's. What difference does it make to the controller whether it drives a 2x2 block with four distinct 4K pixels or with one FHD pixel, while also being able to quadruple the frame rate if the LCD crystals allow it...
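And the integer conversion really is a couple of lines; a minimal numpy sketch of both directions (box-average down for content, pixel-duplicate up for showing an FHD image on a 4K grid):

```python
import numpy as np

def downscale_2x(img):
    """Integer 2x downscale (e.g. 4k -> FHD): average each 2x2 block."""
    h, w, c = img.shape
    return np.rint(img.reshape(h // 2, 2, w // 2, 2, c)
                      .mean(axis=(1, 3))).astype(img.dtype)

def upscale_2x(img):
    """Integer 2x upscale (e.g. FHD on a 4k panel): duplicate each pixel into a 2x2 block."""
    return img.repeat(2, axis=0).repeat(2, axis=1)
```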
In addition, with the 8K-4K-FHD scheme we gain a clear advantage on 4K and FHD screens compared to the original (especially when making rips from a master): commercial video uses only 4:2:0 chroma subsampling (i.e. the color resolution is actually half), but when resizing 4K down to FHD, or 8K down to 4K, we get full color resolution for every pixel. That is why it is extremely profitable to shoot a film (or home footage) at twice the resolution, if the optics and the sensor allow it, and then convert it to 4:4:4 by halving the resolution.
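A sketch of that 4:2:0 trick, ignoring the chroma-siting details a real converter has to respect: the chroma planes are already at the target resolution, so only luma needs to be folded down, and every output pixel ends up with its own chroma sample:

```python
import numpy as np

def yuv420_to_half_res_444(y, u, v):
    """4:2:0 planes (u, v at half of y's resolution) -> half-resolution 4:4:4 frame."""
    h, w = y.shape
    y_half = y.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # fold luma down 2x
    return np.stack([y_half, u.astype(y_half.dtype), v.astype(y_half.dtype)], axis=-1)
```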
That's why I don't understand why you keep a thoroughly inferior 2.5K monitor at home. Only 8K, 4K, or FHD make sense. And high ppi is always a benefit for everyone without exception, if you set aside the problems with broken scaling code in a number of applications and OSes, which are the developers' fault.
Nobody complains about 400+ ppi on smartphones, right? Everyone is delighted as soon as they see the quality of the picture. But for some reason they don't want the same analog-like picture quality on their desk, apparently out of stupidity or ignorance, not understanding that low ppi worsens the problem of refocusing from pixels to objects (and the contours are less sharp besides) and leads to increased eye fatigue from the lens constantly re-accommodating back and forth. This is especially terrible on old FHD monitors and on 17.3" laptop screens, because people sit closer to them than to monitors. Yes, people use them, but why do they have no desire for something better, when everyone carries a direct point of comparison in their pocket? I don't understand this social (or mental) problem with the population. The majority of it, anyway...