News: Linus Torvalds reckons AI is ‘90% marketing and 10% reality’

I COMPLETELY agree with Linus. The market should first find the "must have" use, and that can drive market acceptance as well as the tech development. This does not mean making a cool picture or screwing up a term paper. It should matter to society on the same level as search engines or smartphones.

When I studied neural nets several decades ago, the common feeling in our class was that this was esoteric tech to be used by techies only, for things like adaptive control and fuzzy logic. Fast forward to now, and we finally have enough nodes to do some interesting things with images and sound. But the method we are using to do these things is bulky, not cost effective, and certainly not optimized.

Analog nodes use far less power and will probably be the right final solution for many uses. Yes, that is a real thing: analog neural nets (like the one inside your brain).
 
Yes, at first we were all impressed with ChatGPT, but then we found it was not very useful. The hype cycle is much larger than I expected. I think leading-edge semiconductor technology is a solution looking for a problem, and many of the large tech company [marketers] are easily fooled. Nvidia has done a good job of providing tools like CUDA, but the idea that AI can work without heuristics is nonsense.
 
Agree and disagree.

It probably is mostly 90/10, but it's so new that we're in a spot I like to describe as: "we don't know what we don't know." We need people to jump on that bandwagon, play with it, use it, break it, fix it, and figure it out. Those who do this early will waste a lot of time on the hype, but like any good pyramid scheme, those who get in early, as pointless as it may seem now, will have the most to gain when it becomes mainstream.
 
90% sounds extremely optimistic. I agree with Li: 99% of them are pure bubble and will fail.

But even that sounds optimistic. I've yet to see AI do something that actually turns a profit (the only part that actually matters) and isn't just a relabeled service previously done by "big data" or "the algorithm".
Yeah, I'm pretty much in line with this feeling. I thought Torvalds was being slightly on the optimistic side. But Torvalds and Li both have the right idea. Right now "AI" has become a marketing term, and they're applying the moniker to everything they can.

Suddenly parsing documents is "AI," and recording video or sound and picking out the words is "AI." Even the commercials for the new phones talk about how, now with AI, your phone can do something new... except that the examples they give seem to be things phones could already do.

Anyone remember how we were hearing the phrase "big data" all the time? Yeah, kinda feels like that.
 
This statement has been true for every bubble tech: the hardware bubble in the '80s, the .com bubble in the '90s, the cloud bubble in the '00s, the crypto bubble in the '10s, the AI bubble in the '20s. Thousands of companies make every promise in the world; 30 or so players, with 3 to 5 super-major players among them, will come out with real, usable products and survive when the bubble pops.
 
I would argue the ratio should be closer to 95% Marketing, 5% Reality.
5% is probably optimistic.

The big problem is that the <5% of non-nonsense applications are the ones that were already being actively used before the 'deep learning' boom, mainly machine vision and data filtering.
Large Language Models, where the vast majority of investment is currently going, have basically no utility beyond toys. Image generation is in its Photoshop era (or, if you're older, its digital drum-machine era) of artistic panic: the stage where everyone thinks it will take their jobs because anyone can push a button and receive art, and before the stage where everyone figures out that you still need to be an artist to get anything of actual value out of it, at which point it just becomes another digital tool that some artists will use productively and others will eschew for various reasons to various effect (e.g. the auteur directors who still demand celluloid film vs. the many directors who cannot afford its massive costs). But even then, image generation is a pretty niche use, and will be relegated to mundane non-consumer-facing applications like creating tiling, non-repeating rock textures from a small number of seed textures in game engines, or mapping user-generated-avatar expressions to face models without having to make sure the individual composited face elements can tolerate all possible poses.
 
Sounds about right to me!

In the next five years it should get 50%-80% cheaper to field a system at about the current level, it will be better understood by both vendors and customers, and it might even turn a small profit, LOL.
Probably more like ten years to see anything massively better than today, and longer than that (say 20 years) to reach something more or less human-level.
FWIW I have very little fear of any "super-intelligence", for reasons I can go on about at very great length.
 
As cynical as he may be, Linus is basically an optimistic guy.
I thought he was a "no-nonsense, 100% realistic" kind of guy.


4% of that 5% belongs to nVIDIA and how it's fleecing so many "hyperscalers" into buying its overpriced hardware.

As with the gold rush, it's the pickaxe sellers that are making the fortune.

Ergo nVIDIA is getting away with "Highway Robbery".
 
The hype is not for the infancy of AI; it is for its future.

He is missing the point. All robot automation will pass through AI, not to mention all the automation for smart cities.

There is a reason why the investments are only being made by the big tech companies.

It is literally the 4th industrial revolution.
 
Yeah, that 10 percent is mostly made up of porn and politically motivated bots. It is the future, but currently it's not that useful outside a handful of niche applications.
 
I'm not so sure "AI", as we know it today, is going to be around in 5 years. In limited fields like medicine, law, science/research, and the military, sure; but public ChatGPT and Gemini/Copilot quite likely not, if they continue to lose money at the pace they are.
 
There are things where AI is surprisingly useful (self-driving cars have 2x the accidents, but some of the problems are faulty sensors and human error in the use of the vehicle: https://www.nhtsa.gov/vehicle-safety/automated-vehicles-safety). For AI-generated art it works reasonably well; sometimes two heads or four arms makes for a great result, even if unwanted.

It's when you rely on it and don't check the result that there are problems.

Few people code (or write a book) and go straight to publication without checking it. Similarly, AI can help (like any tool), but currently we must use our own brains to supplement the results.
 
In the context of normal consumers and desktop/laptop users it's more like a 99/1 ratio, so he's being optimistic IMO. But in the world of research, engineering, medicine, etc., AI is actually doing some amazing things, and there I'd say the ratio is more like 5/95.

We aren't getting anything that is remotely AI on our PCs/phones/laptops at all. All we are getting is plagiarising engines that summarise existing data, and generative fill for editing photos.
 
I think ignoring marketing in general works best for both personal/consumer and business/enterprise use; it's great to be informed about a new product/solution/service, but research it and apply some critical thinking before going all-in.

Linus probably has a more old-school, "hard-programmed" way of thinking, but he understands that things need to be proven before being implemented. Otherwise, wow, the whole Linux world could come crashing down... imagine that!
 
There are definitely real use cases for AI, but I don't think he is too far off on percentages. The problems will be solved for some of its use cases eventually, and it's following the typical hype curve: it will continue to get better and its uses will become more mainstream, but yeah, it's very much at the tail end of the hype phase for many uses. I will just be happy when it stops labeling my cat or people as a package on my Blue Iris install... lol
 
This man might be my PC lord and savior, but boy does he make some dumb comments sometimes. The comment was 10% reality. Let's not get it twisted with your personal hang-ups.
 