As usual CoreTeks sounds very knowledgeable but he repeatedly gets things totally wrong. He even says in this video that everything he's talking about is a guess based on the fact the new cooler design has a fan on the back and the PCB looks fairly short.
It's a nice idea - it would be interesting if he is correct, but I'm not convinced. There are no leaks or anything pointing to the 3000 series including a second accelerator chip (which would be massive news) - and nVidia have bumped up the die size for this series quite a bit vs previous gens; I doubt they would need to do that with a big node shrink if they had moved most of the RTX hardware off onto a separate chip.
If this is true (that the new gen is a larger 7nm die + a second large co-processor) then you can expect the price on these new cards to go up significantly from the current gen. nVidia never give their silicon away for cheap.
I know m8, it's a fascinating idea. If they're not doing it, maybe they should be, and AMD may look at a similar solution down the road... the GPU chip would actually be a lot smaller without all those extra transistors, right, and faster and cooler... if it turns out to be true it could put AMD's console contracts with MS and Sony in jeopardy in the future... I wonder if this is why Lisa recently got in bed with Nvidia on that deal... can't seem to stop thinking about it...
A dedicated chip for a dedicated task is always better in my experience... and reducing the size of that previously "huge" RTX die is gonna make for some serious clock frequency increases as well... So it could have a GPU that runs a hell of a lot faster than the previous RTX gen... and then a small or similar-sized second chip tailored to light-maps & texture compression... connected with NVLink. And they do have previous experience in syncing chips working on the same workloads, both with PhysX and also with SLI, the tech they got from 3dfx way back when they put them out of business and bought their IP.
It also gives them the added benefit of reducing cost at TSMC on the cutting-edge node (with a smaller chip you get better yields and, most of all, less heat and faster clock speeds - see the rough yield sketch below) while at the same time fabbing the other chip on a cheaper process at Samsung... sounds genius.
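For what it's worth, here's a rough back-of-the-envelope sketch of the "smaller die = better yield" point, using the simple Poisson yield model. The defect density and die areas are made-up illustrative numbers, not actual TSMC or Nvidia figures:

```python
# Back-of-the-envelope sketch: why a smaller die yields better on a new node.
# Simple Poisson yield model: yield = exp(-die_area * defect_density).
# Defect density and die sizes are hypothetical, purely for illustration.
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of dies expected to come out defect-free."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

defect_density = 0.001  # defects per mm^2 (made-up figure for a young node)

for area in (750, 450):  # big monolithic die vs. a slimmed-down GPU-only die
    print(f"{area} mm^2 die -> ~{poisson_yield(area, defect_density):.0%} good dies")
```

With those made-up numbers the big die lands around 47% good dies and the smaller one around 64%, which is the whole argument for not fabbing one giant chip on the most expensive process.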
I tell you, if it's possible and they're not doing it, they should be... also AMD may look at a similar solution down the road in my opinion.
If this works for Nvidia they will have succeeded in preventing the GPU from being pulled into the SoC... and it keeps the need for very high-end discrete graphics cards... which remember was Jensen's goal when he increased the transistor count and went with ray tracing in the first place - as I say, so that the GPU couldn't be pulled into the SoC... but the RTX chip was way too big, hot and slow... this may theoretically solve all of those problems...
Remember we used to have a 2D card and up to two separate 3D cards all working together... and a separate discrete 3D positional audio card for sound... I don't think there will be any problem in splitting up the workloads between different hardware chips... there never has been in the past...
Well, we are living in the age of parallel and asynchronous compute, right, so why the hell not is my question. We're breaking up workloads and executing them simultaneously in real time all over the place - something like the toy sketch below - so why not do the same here? It does seem like the way forward, no?
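Just to make the "split it up, run it at the same time, merge the results" idea concrete, here's a toy sketch. The raster/RT split and the function names are hypothetical and say nothing about how Nvidia would actually schedule work between two chips:

```python
# Toy illustration of splitting one frame's work into two pieces that run
# concurrently and then get composited. Purely illustrative - the raster/RT
# split and all names here are hypothetical, not Nvidia's actual scheme.
from concurrent.futures import ThreadPoolExecutor

def rasterize(scene: str) -> str:
    return f"raster({scene})"   # stand-in for the main GPU's raster work

def trace_rays(scene: str) -> str:
    return f"rt({scene})"       # stand-in for a co-processor's RT work

def render_frame(scene: str) -> str:
    # Kick off both halves of the frame at the same time, then combine them.
    with ThreadPoolExecutor(max_workers=2) as pool:
        raster_future = pool.submit(rasterize, scene)
        rt_future = pool.submit(trace_rays, scene)
        return f"composite({raster_future.result()}, {rt_future.result()})"

print(render_frame("frame_0"))
```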
Edit: You know what, maybe they are just cooling one massive die, but what an idea. Can it be done? I have more of a feeling of "why not", or "why isn't it being done" - maybe that's actually the better question, right... the mind boggles. What an idea. I know he's guessing, but fair play to him for throwing that out there at the risk of ridicule etc... I don't know, I am not technical enough to know that it is possible (not even close), but I can say I'd kinda be more surprised if it was not possible... We're in the age of heterogeneous compute, right? Sounds like it may be the next logical step, this is what it's all about. AMD have been talking about SoCs with CPUs, GPUs and tailored ASICs for ages now... with workloads being thrown at the silicon engineered and best suited to execute them.