News Nvidia RTX Pro 6000 Blackwell appears online with an eye-watering price tag of over $11,000

This card shows the real reason Nvidia doesn't care about gamers at all: it does everything it can to keep gaming cards from being used for professional AI tasks.

If an RTX Pro 6000 sells for $8,600 and a 5090 sells for $3,000, Nvidia is probably making 5-10 times more money on slightly better silicon (the extra VRAM only costs a couple hundred dollars).
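The margin claim can be sanity-checked with back-of-the-envelope arithmetic. All the build-cost figures below are illustrative assumptions, not Nvidia's actual BOM:

```python
# Rough per-card gross profit comparison. The BOM numbers are assumptions
# made up for illustration; only the sale prices come from the post.
def gross_profit(price, bom_cost):
    return price - bom_cost

# Assume both cards use a similar GB202-class die and board, with the Pro
# card adding roughly $300 of extra GDDR7, binning and QC.
gaming_bom = 2000            # assumed build cost of an RTX 5090
pro_bom = gaming_bom + 300   # assumed build cost of an RTX Pro 6000

p_gaming = gross_profit(3000, gaming_bom)   # 1000
p_pro = gross_profit(8600, pro_bom)         # 6300

print(p_pro / p_gaming)  # 6.3x the per-card profit under these assumptions
```

Under these assumed costs the pro card earns roughly 6x the profit per unit, which lands inside the 5-10x range claimed above.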

That is just the profit on a pro card - I can't even imagine the margin on silicon used for a datacenter product.

The pricing is much better than I expected, if you can really get it for the $8,600 US price mentioned in the article.
Assuming a 20 percent tariff, that is only about 5 percent more expensive than the previous pro model, and the card is significantly better for AI work.
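The "only 5 percent more" figure checks out arithmetically, assuming the previous pro model is the RTX 6000 Ada at its roughly $6,800 launch price (an assumption on my part):

```python
# Strip an assumed 20% tariff from the quoted price, then compare against
# the assumed previous-generation MSRP (~$6,800 for the RTX 6000 Ada).
list_price = 8600
pre_tariff = list_price / 1.20   # ~7167 before the assumed tariff
prev_msrp = 6800

increase = pre_tariff / prev_msrp - 1
print(f"{increase:.1%}")  # ~5.4% over the previous pro card
```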
 
I completely agree, and I also have my doubts the price will land at $8,600. It might for the Max-Q variant.

It looks like PNY will be the official board partner again this gen. I signed up on their site for availability notifications. We'll see where it lands soon.
 
Running Red Dead Redemption 2 at maximum Resolution Scale requires over 37 GB of VRAM, so there's at least one good reason to buy this card.
 
I think if the finished drivers show any kind of performance advantage over a 5090 in games, the rich kids will be pouncing on these for their "uber gaming rig".

In the past, IIRC, the pro-grade cards usually ended up doing worse in games, didn't they?
 
When I made my first money programming on the side while I was studying computer science, I had a hard choice to make:
In 1985 I could spend the equivalent of $30,000 in today's money on
  • a brand new compact car,
  • a used Porsche or
  • an IBM PC-AT clone with an 8 MHz 80286, 640 KB of RAM, a 20 MB hard disk, an EGA graphics card (640×350 pixels, 16 colors), a matching monitor, keyboard and mouse.
Only one of them was going to help me make more money, so it wasn't much of a contest.

You guys simply don't appreciate how cheap things have become.

Yes, the top range is still growing upward, and things there cost more than a few pennies, but at the lower end you can get an insane amount of computing power for little more than a few lunches.

That IBM-PC was more than 50% of my entire income for a year!

Just because there are now 4000hp Porsches out there doesn't mean everyone should be able to afford one on their hobby budget.
 
I totally agree computer hardware is insanely cheap. My only question is: if I work with machine learning, why would I buy that card instead of waiting until summer and spending $10K on the new Nvidia DGX Station, which comes with over 280 GB of GPU memory and 400+ GB of system memory, rather than paying $11K for one GPU?
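Put in cost-per-gigabyte terms, the comparison is stark. This sketch uses the prices and capacities from the post (plus the RTX Pro 6000's 96 GB of GDDR7); all figures are approximate:

```python
# Crude $-per-GB-of-accelerator-memory comparison. Prices and the DGX
# Station capacity are taken from the post and are approximate.
rtx_pro_6000 = {"price": 11000, "gpu_mem_gb": 96}    # 96 GB GDDR7
dgx_station = {"price": 10000, "gpu_mem_gb": 280}    # "over 280 GB" per the post

for name, cfg in [("RTX Pro 6000", rtx_pro_6000), ("DGX Station", dgx_station)]:
    print(name, round(cfg["price"] / cfg["gpu_mem_gb"], 1), "USD/GB")
```

By this crude metric the workstation comes out at roughly a third of the per-gigabyte cost, though it ignores memory bandwidth, compute throughput and resale value.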
 
I agree, that is a tough one.

I've never really played with the truly big online models, only done extensive tests with what I could run in the company lab (V100) and the home lab (up to RTX 4090).

But while vendors keep claiming that quality is not only improving but reaching AGI levels, my experiments have been very disappointing throughout.

I've tried everything I could somehow fit into my hardware, nearly every model family available on Hugging Face, and then gone through the various sizes and weight precisions to see how they'd influence speed and quality.

And in some cases that meant waiting a very long time for answers, even from 70B models, which clearly won't fit into an RTX 4090 even at the smallest quantizations, so a lot of layers wind up running on my 16-core CPU and its 128 GB of DRAM.
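The offloading is easy to see from the weight sizes alone. A rough footprint is just parameter count times bytes per weight, ignoring KV cache, activations and quantization overhead:

```python
# Approximate weight-only memory footprint of a model:
# parameters (billions) x bits per weight / 8 bits per byte.
def weights_gb(params_b, bits):
    return params_b * bits / 8

for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: {weights_gb(70, bits):.0f} GB")

# Even at 4-bit, 70B weights need ~35 GB, more than the 24 GB on an
# RTX 4090, so some layers must spill to system RAM.
```

Which is exactly why a 70B model forces layers onto the CPU on a 24 GB card, and why the 96 GB on the new pro card is the real selling point for this workload.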

My main takeaway: the garbage they produce is so bad that they are just not useful. And across 1-70B weights and FP16 to INT4 quantization, that changes remarkably little. Yes, they gain depth and seem to become much more knowledgeable, but even the reasoning models never know when they've gone off their knowledge cliff and slipped into hallucinations that defy very basic human reality.

I've never been interested in AGI; I'd have been perfectly happy for these LLMs to have as good an understanding of the world as any servant would, but they must be reliable with regard to any information for the domain/household they are working in. I'd have been happy with a peasant with manners who sticks to my orders: context and RAG data need to be interpreted with precision and strict obedience.

Alas, when these models are smart enough to know Marie Antoinette as the wife of Louis XVI, but claim that she died in obscurity ten years after being executed and didn't have a biological mother, you obviously can't trust them even to toggle a light switch, because they might as well electrocute you. Let alone put one in control of family logistics as a domestic with access to sharp blades or foodstuffs that could be turned into poison.

The IBM PC-AT represented a value return that was basically guaranteed for years. Today buying AI hardware is like crypto mining: hard to tell if you'll even break even.

For the RTX 4090 the decision was still easy: it sees dual use in after-hours gaming (far too little of it, actually). The hardware you mention has no meaningful alternate use that I can see. So even if I could afford it, I wouldn't jump, especially since I no longer have a career riding on it.

If you do get that DGX Station, I'd love to see your results!