News Jim Keller responds to Sam Altman's plan to raise $7 billion to make AI chips — I can do it cheaper!

Sure, I could see Altman asking for $8T, but actually expecting to achieve more like $2-3T.
$8T is the price of 20,000 "Aurora" supercomputers, even with a bulk discount for such a large deal. That's close to 500 gigawatts of power consumption at today's level of computing technology, which works out to something like a century's worth of global-warming emissions released in 5 years.
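As a rough sanity check on those numbers, here is the arithmetic sketched out. The per-system cost and power figures are my own illustrative assumptions (roughly in line with public reporting on Aurora-class machines), not official numbers:

```python
# Back-of-envelope check of the "$8T / ~500 GW" claim.
# Per-system figures below are illustrative assumptions, not official specs.
cost_per_system_usd = 400e6   # assume ~$400M per Aurora-class supercomputer
power_per_system_w = 25e6     # assume ~25 MW per system
n_systems = 20_000

total_cost = cost_per_system_usd * n_systems   # dollars
total_power = power_per_system_w * n_systems   # watts

print(f"Total cost: ${total_cost / 1e12:.1f}T")   # -> Total cost: $8.0T
print(f"Total power: {total_power / 1e9:.0f} GW") # -> Total power: 500 GW
```

Under those assumed per-system figures, the claimed totals do line up: $8 trillion and 500 GW.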
 
Look guys, I can do it for just $100 million. It also comes with a bridge I want to sell ya!

BTW this bridge has a new feature, it is elephant proof! No elephants will cross it, guaranteed!
 
I think the disconnect is happening at a couple points:
  1. It was previously stated that the investments will be spent over several years - possibly a decade, which brings it much more in line with (i.e. within an order of magnitude of) the roughly ~$100B/year of investments the semiconductor industry is currently making.
  2. You're thinking too narrowly about the applications of AI - they're clearly targeting things like robotics, self-driving cars, surveillance, and probably other sectors like telecoms and industrial automation.
  3. Probably not all of the chips will even be AI chips, but many will be DRAM, storage, and various infrastructure (CPUs, networking, etc.) ICs needed to support the actual AI processors. Just look at the various glue that Nvidia needs to tie together & support its huge networks of H100s.
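To put point 1 in perspective, the comparison is simple arithmetic. The ten-year spend horizon is the assumption stated above; the $100B/year industry figure is also taken from the post:

```python
# How an $8T raise spread over a decade compares with current industry capex.
total_raise_usd = 8e12           # the headline $8T figure
years = 10                       # assumed spend horizon (per the post above)
industry_capex_per_year = 100e9  # ~$100B/year current semiconductor investment

annual_spend = total_raise_usd / years          # dollars per year
ratio = annual_spend / industry_capex_per_year  # multiple of current capex

print(f"Annual spend: ${annual_spend / 1e9:.0f}B/year")  # -> $800B/year
print(f"That's {ratio:.0f}x current industry capex")     # -> 8x
```

So $800B/year is about 8x today's capex: a huge jump, but within an order of magnitude, as the post says.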


This is also a very good point. It's easier to see how he could find a return-on-investment, if he's thinking AI will replace millions of information workers with an average salary on the order of $100k or so.
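To make that ROI intuition concrete, here is the arithmetic with a purely illustrative headcount (the 10 million figure is my assumption; the $100k salary is from the post):

```python
# Rough value of AI replacing information workers, per the argument above.
avg_salary_usd = 100e3    # ~$100k average salary (from the post)
workers_replaced = 10e6   # 10 million workers, purely illustrative

annual_labor_savings = avg_salary_usd * workers_replaced  # dollars per year
print(f"Annual labor savings: ${annual_labor_savings / 1e12:.1f}T/year")  # -> $1.0T/year
```

At $1T/year in captured value, even a multi-trillion-dollar outlay could plausibly pay back within a decade, which is presumably the shape of the pitch.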


This is an investment, so they're going to spend it wherever they think they can get a good return on it.
You don’t seem to realize there’s enough fabs under construction now that we’ll be right back into oversupply in a few years as it is. Altman spending all that money on fabs will go bankrupt.
 
You don’t seem to realize there’s enough fabs under construction now that we’ll be right back into oversupply in a few years as it is. Altman spending all that money on fabs will go bankrupt.
Agreed. In addition, I highly doubt Sam is getting that money anyway. I'm sure he released that statement with the sole purpose of drumming up publicity, hoping for at least a few fish to bite. Investors looking to buy into AI are better off putting money into established, reputable players like Apple, Google (meh, they'll probably cancel it anyway), Nvidia, maybe Softbank.... or....anyone else that's not Sam Altman, and they know it. Rich people don't get rich pouring money into ventures without some kind of guaranteed return. There are zero guarantees with Altman. Just look at the mess OpenAI is in. The best OpenAI can hope for is getting bought out in the long run, which may or may not happen. I expect Microsoft at some point may bite the bullet, or pull out if the going gets rough. They are kinda Googly with these kinds of investments.
 
You don’t seem to realize there’s enough fabs under construction now that we’ll be right back into oversupply in a few years as it is. Altman spending all that money on fabs will go bankrupt.
I agree that's an obvious concern, and one that's come up in other recent threads on the subject.

I don't know what demand model Altman is using to reach the conclusion that so much fab capacity is needed, but surely the investors he's pitching to will be privy to his numbers and have the opportunity to do their own diligence. Even so, bad investments are made all the time, so there's no guarantee your fears won't play out.
 
I highly doubt Sam is getting that money anyways. I'm sure he released that statement with the sole purpose of drumming up publicity, hoping for at least a few fish to bite.
I believe his concern is that OpenAI's market opportunities will be constrained by there not being enough chips in the world capable of running the stuff they're developing. One way or another, his goal is probably to address that mismatch, so that OpenAI can grow to eventually join the $Trillion club.

Investors looking to buy into AI are better off putting money into established reputable and existing players like Apple, Google (meh, they'll probably cancel it anyways), Nvidia, maybe Softbank....
Oh geez... I mean, Nvidia's valuation has gotten so insane that even they seem like a risky bet, by this point. The others also might have difficulty justifying these valuations, in the long run.

If you want to invest in AI... I have no idea where's the best place to stash your cash, but I think the answers aren't necessarily obvious.
 
Jim Keller knows what he’s talking about so much more, it hurts.
Jim Keller is among the best computer chip engineers, so it makes sense.

Altman seems to be out of his mind. Add another to the list of human beings to be replaced.
He's a crypto-bro. That is to be expected...
Not even close, man. Most cryptocurrencies like Bitcoin and Ethereum are trying to improve by moving away from centralized systems, while the Worldcoin Altman was pushing (before it crashed spectacularly) wanted biometrics from every human being. That means complete loss of privacy, leading to a nightmarish world that makes 1984 look like paradise. The goals are completely opposite.

Comparing normal crypto to Worldcoin is like comparing rockets to bombs. Same technology, way different purpose, and one is nothing but bad.
 
Not from GPT; it's the standard playbook every MBA is taught. It guarantees accelerated industry growth, commoditization, and monopoly. The strategy is typically used when you don't have a clear use case (i.e. a vision) but see huge market upside.

Ford was successful at it with the Model T, Apple with the iPhone, but customers knew exactly what those products were for when they unboxed them. AI's not there yet.
I guess you're right: the Amazon, Spotify, Netflix, et al. strategy of becoming the first and biggest fish that no one can compete with. But this field is pretty uncertain, like the crypto rush on GPUs that suddenly tanked. Now GPUs are riding the AI bubble, and who knows whether the field will be defined by NPUs, GPGPU, ASICs, or software? I won't trust Altman with 7 trillion.
 
I'm not talking about what I want to happen, but we should consider what happened to factory workers as automation improved. Today, you need perhaps 1/10th as many factory workers for the same factory output as back in the 1970s, yet the quality of manufactured goods has markedly improved. By analogy, enhancing IT workers of the future should mean the industry needs fewer of them/us. Businesses will embrace AI in order to save money or increase revenues. I think those savings will come in the form of reduced labor costs more than increased revenues, especially if everyone is doing it (because the total addressable market of most industries probably won't get much bigger).

As for the ecological impact, the carbon footprint of someone roughly scales with their salary - particularly in the range between the poor & middle class. So, if large numbers of us drop out of the middle class, you'd expect to see us have a smaller ecological impact. I'm not saying that's a worthwhile tradeoff, but if you imagine how certain people could justify the carbon footprint of deploying AI at greater scales, perhaps it's something they're taking into account. Who knows?
With birthrates declining and resources dwindling, more efficient production is welcome. I also hope for a brighter future.
 
Altman seems to be out of his mind. Add another to the list of human beings to be replaced.
I think you're trying to be cute, but please don't talk this way.

I'm sure you're perfectly capable of articulating your philosophical disagreements with someone. Talk of humans being replaced seems like a thinly veiled allusion to violence, which has absolutely no place on these forums. Honestly, I wish people wouldn't talk like that in any context, because it's a slippery slope towards dehumanizing one's political opponents, which is an actual precursor to political violence - supported by numerous historical examples.
 
I think you're trying to be cute, but please don't talk this way.

I'm sure you're perfectly capable of articulating your philosophical disagreements with someone. Talk of humans being replaced seems like a thinly veiled allusion to violence, which has absolutely no place on these forums. Honestly, I wish people wouldn't talk like that in any context, because it's a slippery slope towards dehumanizing one's political opponents, which is an actual precursor to political violence - supported by numerous historical examples.

I don't know the poster, but I think one should never assume "cuteness".

I prefer that people who think this way also talk this way.

Don't train the Dangerous to hide better.
 
I prefer that people who think this way also talk this way.

Don't train the Dangerous to hide better.
The flip side of that is that you don't want to normalize talk of violence, because normalizing talk of it leads to normalizing the act itself. Rwanda provides a pretty good example of this. If you want something more recent, look at Myanmar about a decade ago.

Anyway, I think it's inconsistent with forum policies, but I will flag future such posts and let the mods decide.