AI industry needs to earn $600 billion per year to pay for massive hardware spend — fears of an AI bubble intensify in wake of Sequoia report

vijosef

Upstanding
His first achievement will be to invent Keynesian lies so obscure that it will take us centuries to unmask them.
 

bit_user

Titan
Ambassador
Applying a blanket software margin of 50% is pretty hilarious. I also find the 50% overhead of building & operating the datacenter to be somewhat suspect. I think he's just applying historical figures atop this new hardware, even though the hardware is disproportionately more expensive.

If the datacenter overhead ends up being more like 25%, then you get $187B. It's hard to know exactly what's meant by that 50% software margin, but software does have to be written and models obviously have to be trained. So, maybe costs go up to $200B. Yes, they'll want some profit on top of that, but they probably have some appetite for depressed margins, so long as they believe revenues will continue to grow and perhaps hardware costs will eventually start to decline.

I just think the approach of "Take hardware costs and double them. Then, double again." is laughably naive. And apart from a little money in some NASDAQ index fund, I say this with no substantial vested interest.
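
For what it's worth, here's that arithmetic as a quick Python sketch. The ~$150B hardware top line is just the $600B headline divided by the two doublings; the 25% overhead is the alternative I floated above, and the software-cost figure is a hypothetical that lands on the ~$200B I mentioned:

Python:
# Rough reconstruction of the report's math and my alternative figures.
# The only hard number is the $600B headline; everything else here is
# inferred or assumed as described above.

headline = 600e9                # report's required annual AI revenue
hardware = headline / (2 * 2)   # implied hardware top line: $150B

# Report's apparent model: double for datacenter costs, double again
# so that revenue is 2x cost (a 50% software margin).
report = hardware * 2 * 2       # back to $600B

# My alternative: 25% datacenter overhead plus a hypothetical software
# development / model training cost, with thin margins on top.
alt_infra = hardware * 1.25     # $187.5B
software = 12.5e9               # hypothetical figure that lands on ~$200B

print(f"Report-style: ${report / 1e9:.0f}B")
print(f"Alternative:  ${alt_infra / 1e9:.1f}B infra, "
      f"~${(alt_infra + software) / 1e9:.0f}B with software costs")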
 

bit_user

Titan
Ambassador
I want convincing evidence these startups are not a Ponzi scheme.
Startups? He's using Nvidia's revenue as the top line figure. I'm not sure how startups tie into this story, unless you're way off on a tangent.

As for your statement, they're not a Ponzi scheme if they execute to their business plan and don't engage in embezzlement. As long as investors have transparency into the revenue model of the company, they should be sophisticated enough to evaluate the risks for themselves and decide if it's a sound investment.
 

Notton

Commendable
Startups? He's using Nvidia's revenue as the top line figure. I'm not sure how startups tie into this story, unless you're way off on a tangent.

As for your statement, they're not a Ponzi scheme if they execute to their business plan and don't engage in embezzlement. As long as investors have transparency into the revenue model of the company, they should be sophisticated enough to evaluate the risks for themselves and decide if it's a sound investment.
I am talking about this line.
"Companies like AWS, Google, Meta, Microsoft, and many others invested heavily in their AI"
It's pretty clear that AWS, Google, Meta, and Microsoft have an abundance of cash, but the "many others"? Doubtful.
 

AI industry needs to learn how to earn


That's easier said than done. Most "AI" stuff has so little value to most people outside of FOMO.

People might spend early on some new AI fad, but long term most won't care to keep paying for it once the honeymoon phase ends.

Current "AI" just has no real "must-have purchase" factor.
 

NedSmelly

Prominent
This is worrisome, regardless of how the figures were extrapolated. The key point is that these companies need to monetise to get ROI on their massive AI expenditures. I don't think that's going to be achieved by nickel-and-diming end users; it's going to be service contracts with other corporations and enterprises seeking to cut their own costs through automation. This is setting the scene for major social repercussions, such as in the labour market.
 

bit_user

Titan
Ambassador
That's easier said than done. Most "AI" stuff has so little value to most people outside of FOMO.
Dunno about that. Some people would be pretty unhappy if you took away Siri and Alexa.

Also, depending on how broadly you define "AI", it's now a mainstay of many other industries. I can tell you that it basically killed off most classical computer vision approaches, and I'm talking about actual deployed products. In fact, Tesla's self-driving software is fundamentally AI-based.
 

A Stoner

Distinguished
Not sure if AI is ever going to meet the goals of those investing. You can feed it oodles of data points and it will never, ever, understand anything in an intelligent fashion. It seems to have some capability for piecing together various existing things and combining them differently to make unique results, kind of like my grandmother taking hundreds of pieces of old clothes and making a quilt out of them. She did not make the patch designs; they came from someone else. She just cut them up, organized them, and then stitched them all together.

AI kind of seems to work like that, but in a more mechanical than organic way.
 
Dunno about that. Some people would be pretty unhappy if you took away Siri and Alexa.

Also, depending on how broadly you define "AI", it's now a mainstay of many other industries. I can tell you that it basically killed off most classical computer vision approaches, and I'm talking about actual deployed products. In fact, Tesla's self-driving software is fundamentally AI-based.
Funny that you mention Alexa, since it famously lost Amazon billions. They could never figure out exactly how to make money from it. Siri is frankly the same, though Apple seems to treat it more like a loss leader to keep people in the iVerse.
 
And this all feeds back to what many people, including myself, are saying: people will use AI as long as it's free, or at least very cheaply priced and bundled into existing subscriptions such as Microsoft 365 or Google One plans. The vast majority are not willing to pay a large price for it, as we see when Google and Microsoft try to push their $20+ a month plans, and it's not a sustainable model at this scale. I WANT Copilot to be able to generate Word templates, review and suggest improvements to my work, and be the proper software assistant that Google Assistant (and all the others) can't be, but an extra $240 a year for that on top of a Microsoft 365 subscription isn't worth it.

AI for medical, legal, and scientific uses is one thing that IS sustainable, as those models are trained on specific data sets for specific purposes, such as detecting anomalies in medical scans or finding patterns in astronomical imagery and data.

In time, when ASICs, or at least far more efficient models, bring entry and operating costs far below those of today's rather brute-force approach built on the vast parallel processing of GPUs, then public-facing AI at large, like Copilot, Gemini, and ChatGPT, will be sustainable. As it stands, it's an unsustainable bubble that's going to burst sooner rather than later and trigger a market collapse not seen since the dot-com bubble imploded.
 

bit_user

Titan
Ambassador
I think I described what it can do... It can take a bunch of other pieces of something and put them together without understanding what it is doing.
I mean without anthropomorphizing. We use computers to do a lot of useful things for us today, but we don't object based on the fact that they don't actually understand those tasks on a conceptual level.

For instance, if I use an AI bot to find bugs and security flaws in my code, why would I care how deeply it understands what it's doing, so long as it's effective in performing the task? Same thing for chip layout, automated translation, or numerous other sorts of tasks people are doing with it.

The way I think about current AI tech is that it's already good enough to solve certain hard problems that have so far defied classical computer science techniques. That's not nothing. It doesn't need human-level intelligence and introspection to be a useful and valuable tool.
 

watzupken

Reputable
Generally, people are using AI because it's free and adds convenience to what they're doing. So at the point where companies need to charge for it, I feel demand will plummet, and the sunk cost of buying and maintaining AI hardware is unlikely to be recouped.
 

ttquantia

Prominent
There was a huge AI hype in the form of autonomous self-driving cars a couple of years ago. That hype is gone. Then came ChatGPT, and suddenly AI was again about to do everything, although the only thing it still does is generate interesting images and interesting text. What do people need interesting images and interesting text for? One could, for example, generate low-quality websites, fill them with text and images, and then sell advertising. That is a real application. Is there anything else? (OK, some programmer types are happy to have some of their JavaScript or Java code automatically generated, however imperfectly, instead of having that work outsourced to some low-cost country, but is that really so important?)
The issue with "interesting text" is that the text is often not even true. If you want your AI to do something really useful, you would at the very minimum need to guarantee that it gets things right, as in "not kill anybody", "not harm anybody", "not cause great financial losses". These three properties are completely missing in the current "interesting" AI.
 

bit_user

Titan
Ambassador
If you want your AI to do something really useful, you would at the very minimum need to guarantee that it gets things right, as in "not kill anybody", "not harm anybody", "not cause great financial losses". These three properties are completely missing in the current "interesting" AI.
Not to minimize these points, but human employees and contractors for companies don't meet these criteria, either. The standard AI needs to meet is simply not to be a greater liability than human employees.
 

Sleepy_Hollowed

Distinguished
Dunno about that. Some people would be pretty unhappy if you took away Siri and Alexa.

Also, depending on how broadly you define "AI", it's now a mainstay of many other industries. I can tell you that it basically killed off most classical computer vision approaches, and I'm talking about actual deployed products. In fact, Tesla's self-driving software is fundamentally AI-based.
"AI" here being generative AI, which is just Large Language Models, more accurately called stochastic parrots.

Siri, unless Apple is switching to large language models (which would make everyone stop using it), is a different type of machine learning.

Tesla's "AI" is absolute scum, absolutely years if not a decade behind Mercedez and BMW, and even Subaru.
 
Generally, people are using AI because it's free and adds convenience to what they're doing. So at the point where companies need to charge for it, I feel demand will plummet, and the sunk cost of buying and maintaining AI hardware is unlikely to be recouped.
As the models get better they will be able to offer more value to users.
With ChatGPT only released two years ago, it is safe to say that we aren't anywhere close to AI's final form.
Just like it's hard to say what the killer app for any particular platform will be, it's hard to say what killer feature will prompt, pun intended, people to pay for it en masse.
 
To me, right now, the big problem seems to be the sheer amount of money being pumped into it without a plan for what to use it for. That's not to say "AI" isn't useful; we'd had consistent advances in machine learning for years before the GPTs caught general attention. Of course, the more pared-down, specific use cases don't require hundreds of billions of dollars to keep advancing.

I think it's telling that the industries immediately under attack from this new rise of AI are the creative ones, and I think that's largely because they're low-hanging fruit and because people with money don't tend to value the human creativity aspect in the first place.

My main hope is that business isn't so aggressive with implementation that it causes brain drain throughout industries.
 
So folks gotta realize there are two sides to "AI": model generation (training) and model use (inference). The vast majority of the infrastructure cost is in generating the model; comparatively little is spent on the user side. Searching through a multidimensional mathematical construct, which is all that AI text generation is, is computationally cheap compared to doing the analysis and building that searchable model from raw datasets.

We've been able to build these models for a very long time; it was just so insanely expensive that nobody thought it would be feasible. Then someone figured out how to modify the algorithms to use CUDA/OpenCL cores in massive parallelism, and something that would have taken years, if not decades, on a supercomputer can now be done in weeks or months.

Stochastic parrot is the best term I've seen used for what AI is. The Google employee who co-wrote that paper was pressured to retract it, and when they didn't, Google fired them.
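
To put rough numbers on that training/inference asymmetry: a common back-of-the-envelope rule is that training a dense transformer costs about 6 FLOPs per parameter per token, while a forward pass costs about 2. A quick sketch (every concrete number here is hypothetical, chosen only to show the ratio):

Python:
# Back-of-the-envelope FLOP comparison of building vs. using a model.
# Common approximations: training ~ 6 * params * tokens FLOPs,
# one forward pass ~ 2 * params * tokens FLOPs. All concrete numbers
# below are hypothetical, chosen only to illustrate the ratio.

params = 70e9           # hypothetical 70B-parameter model
train_tokens = 2e12     # hypothetical 2T-token training run
query_tokens = 1_000    # hypothetical tokens touched per user query

train_flops = 6 * params * train_tokens   # one-time cost to build the model
query_flops = 2 * params * query_tokens   # recurring cost per query

print(f"Training:  {train_flops:.1e} FLOPs (one-time)")
print(f"One query: {query_flops:.1e} FLOPs")
print(f"One training run ~= {train_flops / query_flops:.1e} queries")

By that yardstick, one training run costs as much compute as billions of individual queries, which is the asymmetry being described.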
 

bit_user

Titan
Ambassador
"AI" here being generative AI, which is just Large Language Models, more accurately called stochastic parrots.
That's an oversimplification to the point of being misleading. I know it's popular to say this sort of thing, but if you actually tried using a stochastic model to do what LLMs do, you'd only get gibberish out of it.

Siri, unless Apple is switching to large language models (which would make everyone stop using it), is a different type of machine learning.
They just announced a huge partnership with OpenAI, where they're indeed swapping out the backend of Siri with OpenAI's models.

As the models get better they will be able to offer more value to users.

With ChatGPT only released two years ago, it is safe to say that we aren't anywhere close to AI's final form.
It has a final form??? I think you've been playing too many video games, bro.

Just like it's hard to say what the killer app for any particular platform will be, it's hard to say what killer feature will prompt, pun intended, people to pay for it en masse.
I think LLMs are a little bit like a parlor trick. They have applications, but to the extent people think that's what AI is really about, they're missing the point.

For instance, Google's DeepMind trained an AI to do protein folding, jumping medical science forward by decades, almost overnight!


Not to mention the various ways it's being used in drug discovery, material science, etc.
 

bit_user

Titan
Ambassador
Stochastic parrot is the best term I've seen used for what AI is.
It was chosen to grab attention, not really to be an accurate description of how LLMs actually work. A purely stochastic approach can't do many, if not most, of the tasks in Hugging Face's latest LLM Leaderboard:

These tasks include things like:
  • follow explicit instructions, with a focus on the model's adherence to formatting instructions rather than the content generated, allowing for the use of strict and rigorous metrics.
  • multistep arithmetic, algorithmic reasoning (e.g., boolean expressions, SVG shapes)
  • a compilation of high-school level competition problems
  • questions crafted by PhD-level domain experts in fields like biology, physics, and chemistry
  • algorithmically generated complex problems, each around 1,000 words in length, including murder mysteries, object placement questions, and team allocation optimizations. Solving these problems requires models to integrate reasoning with long-range context parsing.
  • refined multiple-choice knowledge assessment (now with 10 choices).

You can read the details here:

Follow the Leaderboard to see how the open-source models (including Meta's Llama 3) are progressing.

The Google employee who co-wrote that paper was pressured to retract it, and when they didn't, Google fired them.
That was published in early 2021 (i.e., pre-ChatGPT) and focused on LLMs simply learning whatever they were trained on, including any implicit biases. I think it's a mistake to read too much into the specific term "stochastic parrot". The authors were specifically concerned with issues of algorithmic bias (think DEI).
 

bit_user

Titan
Ambassador
BTW, it occurred to me that the author of the report assumed all of Nvidia's datacenter revenue was going to profit-making businesses. That ignores government and research applications of this tech. For instance, it was just reported that the 2024 revenue figure includes $12B worth of sanctions-compliant devices sold to China. While some of that is undoubtedly going to Chinese businesses like Weibo, Alibaba, and Tencent, no doubt a lot of it is going to research, and some will presumably end up being used for government purposes.

Again, the notion of just taking Nvidia's datacenter revenue projections and multiplying by 4 seems wildly simplistic. I think this firm decided to start shorting Nvidia or other AI companies and is just churning out junk analysis to engineer a market swing.
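
As a quick sensitivity check on that point: only the $600B headline, the implied 4x multiplier, and the reported $12B of China sales come from the discussion above; the fraction of those sales that's non-commercial is purely hypothetical.

Python:
# How much does carving out non-commercial buyers move the $600B headline?
# Only the $600B headline, the implied 4x multiplier, and the reported
# $12B of China sales come from the discussion above; the non-commercial
# fraction is purely hypothetical.

top_line = 600e9 / 4    # implied hardware revenue: $150B
china = 12e9            # reported sanctions-compliant sales to China

for frac in (0.25, 0.50, 1.00):   # hypothetical non-commercial share
    commercial = top_line - frac * china
    print(f"{frac:.0%} non-commercial -> required revenue "
          f"${commercial * 4 / 1e9:.0f}B")

Even writing off all of the China sales only trims the headline to ~$552B, so the bigger unknown is how much of the rest of the datacenter revenue is research and government spending that doesn't need a commercial return.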