News: Lenovo unveils compact AI workstation equipped with Nvidia GB10 and 128 GB of system memory

If you're using a 4-bit LLM quantized with AWQ, it's actually very usable. I've even used a 4-bit Flux.1-dev model that was quantized with the new SVDQuant, and the output is pretty much indistinguishable from the FP8 or FP16 versions.
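For anyone who wants to try the AWQ route, here's a minimal sketch of loading a 4-bit AWQ checkpoint through Hugging Face transformers (assumes `transformers`, `autoawq`, and `accelerate` are installed and a CUDA GPU is available; the checkpoint name is just an example, swap in whichever AWQ model you're running):

```python
# Minimal sketch: run a 4-bit AWQ-quantized LLM via Hugging Face transformers.
# Assumes: pip install transformers autoawq accelerate, plus a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example AWQ checkpoint; substitute the model you actually want to run.
model_id = "TheBloke/Mistral-7B-Instruct-v0.2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place the 4-bit weights on the GPU automatically
    torch_dtype="auto",  # AWQ kernels keep activations in FP16
)

prompt = "Unified memory matters for local LLMs because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The appeal of 4-bit weights on a box like this is capacity: the weights take roughly a quarter of the FP16 footprint, so far larger models fit in the 128 GB of system memory.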
 
