This dual Maxsun card is not proven and is made by a Chinese company, so they must be cutting corners. If it is priced around 700 bucks, it could be a novelty purchase.
Not counting tariffs, $700 actually sounds like a small markup, and the only cut corner and risk is production scale. I really can't imagine them selling like hotcakes.
However, I’m going to move forward with a B60 24 GB card, maybe buy two to start things off.
If you have the space, that should be just as good (or bad), but offers the flexibility to repurpose the two separately.
The Nvidia Empire is collapsing, and Intel is quietly becoming the platform for creative work. Get in now while prices are good.
Collapsing is where I believe Intel is far in the lead, and that could become an issue in terms of support and long-term value.
After some initial evaluation, my B580 sits in storage, mostly as a spare. At €300 in January it was fair value for the price, but not a great GPU. I fear margins aren't where Intel needs them to carry on. It still retains some of the A770's issues around power consumption, and nearly no XeSS (and AI) support in games (apps).
If I were really hard pressed on money, I'd probably go for a 5060 instead, even at 8GB. Otherwise I'd aim for a 5060 Ti with 16GB or save for something bigger.
If I could swap the B580 for a 24GB variant at €50 extra, I'd probably do that, mostly for curiosity's sake. But I wouldn't buy a 24GB variant new: the value of the additional RAM may be much less than what you're hoping for.
I have a 24GB RTX 4090, and there aren't that many off-the-shelf, home-use models that fit that exact RAM size. Most of the 70B stuff really needs much more, and even 35B models struggle. Designing and training your own size-matched variant, even with an existing open-source dataset and model as a base, is way beyond what even Intel seems ready to invest: don't expect anyone else to do it, unless that were their only choice.
I can't see that B60 card fitting any workload I care about. The base B580 isn't great at 1440p gaming, and 24GB doesn't improve that. Even for AI/ML, 24GB isn't a very useful RAM size, while the lack of software support may mean it delivers much less than a CUDA-capable card with similar hardware specs would.
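To put some rough numbers on the "24GB isn't a very useful size" point: the back-of-the-envelope sketch below estimates the VRAM needed just to hold quantized LLM weights (it deliberately ignores KV cache and runtime overhead, which add several more GB in practice; the parameter counts and bit widths are illustrative assumptions, not benchmarks of any specific model).

```python
# Weights-only VRAM estimate for LLM inference. Real usage is higher:
# KV cache, activations, and framework overhead all add on top.

def weights_gib(params_billion: float, bits_per_weight: float) -> float:
    """GiB needed just to store the quantized weights."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for params in (7, 13, 35, 70):
    for bits in (16, 8, 4):
        print(f"{params:>3}B @ {bits:>2}-bit: "
              f"{weights_gib(params, bits):6.1f} GiB")
```

Even at aggressive 4-bit quantization, a 70B model's weights alone come out around 33 GiB, well past a 24GB card, while a 35B model at 4-bit lands around 16 GiB, leaving little headroom for context once cache and overhead are counted.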
Model design and training for denser models might be a niche, but even there you'd much rather use the CUDA ecosystem. For anything LLM, even huge piles of these won't do any good, even if you could afford to write your own software stack like DeepSeek did: not for design, not even for training, and not for inference.
I'd be happy to hear your experiences if you dare to try.
In cases like this I like to make sure I have a 30-day free-return window available.