LOL..no.
Some 60MB of cache already causes so many problems and so much of a price increase; just imagine that for 8GB, let alone the die space required.
Look at an 8GB RAM stick: all of those chips would have to fit on a CPU die.
While 60MB of SRAM may be mildly problematic, 32GB of DRAM a few years from now should be trivial when HBM3 is aiming for up to 64GB at 819GB/s per stack right now. The main challenge is achieving DDR4-like economies of scale to amortize tooling costs. That should be easy enough once we reach a point where every new CPU, GPU, SoC, etc. under the sun needs at least one stack to meet RAM bandwidth, capacity and power-efficiency requirements. If IGPs keep scaling up opportunistically with node shrinks, you aren't going to meet 2028 IGPs' memory bandwidth requirements with DDR5, and I'm sure that having over 1TB/s of bandwidth to a 16+GB local memory pool for loading and flushing cache lines would let CPU engineers pull a few tricks too.
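For reference, that 819GB/s figure falls straight out of HBM3's 1024-bit-per-stack interface at 6.4Gb/s per pin; here's the back-of-the-envelope math (illustrative only, real sustained bandwidth is lower than peak):

```python
def peak_bw_gbps(gt_per_s: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s: pin rate (GT/s) * bus width (bits) / 8 bits per byte."""
    return gt_per_s * bus_bits / 8

# One HBM3 stack: 6.4Gb/s per pin across a 1024-bit interface
print(peak_bw_gbps(6.4, 1024))  # 819.2 GB/s per stack
```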
When mainstream DDR5 maxes out at 6.4-8GT/s a few years from now, you will be looking at 10+GT/s before the next generation has a chance of delivering a meaningful performance benefit. That sort of bandwidth across a 128-bit-wide memory interface, which has to cross a 4-5" total path through the CPU substrate, socket, motherboard, memory slot and DIMM PCB, will be extremely difficult to achieve within reasonable power, silicon and PCB-material budgets. It is already proving quite problematic with DDR5, especially when attempting to run two DIMMs per channel. GPUs can manage 18+GT/s across 256-bit and wider buses only because the traces are much shorter (~1.5") and the manufacturer has nearly absolute control over everything, since it is all soldered to the PCB. Also worth keeping in mind: GDDR6 needs roughly 10X as much power per chip as standard-voltage DDR4/5, and most of that goes into the extra signal processing needed to maintain signal integrity at both ends at such crazy speeds. HBM-like interfaces ditch most of that overhead by effectively eliminating traces and most of the power associated with driving or terminating them.
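Plugging the numbers above into the same peak-bandwidth formula shows the gap the soldered-memory tradeoff buys (a rough sketch; the DDR5 figure assumes a standard dual-channel 128-bit setup, and both are theoretical peaks):

```python
def peak_bw_gbps(gt_per_s: float, bus_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfer rate (GT/s) * bus width (bits) / 8 bits per byte."""
    return gt_per_s * bus_bits / 8

ddr5 = peak_bw_gbps(8.0, 128)    # dual-channel DDR5-8000 -> 128.0 GB/s
gddr6 = peak_bw_gbps(18.0, 256)  # 18GT/s GDDR6 on a 256-bit bus -> 576.0 GB/s
print(ddr5, gddr6)
```

Roughly a 4.5X peak-bandwidth advantage for the GPU, which is what the shorter traces and soldered-down control are paying for.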
My bet is that DDR5 will be the practical limit for high-speed socketed parallel memory interfaces. Whatever comes next will focus on supplementing on-chip memory and on power efficiency, density and cost instead of raw performance.