News: Jensen Huang advises against learning to code - leave it up to AI

That's why we have laws...
Oh my sweet summer child...
... and would hopefully keep ethics in check...
Lawmakers can't keep up with the speed at which megacorps try to skirt the rules. Can't wait for the "AI" to be used for finding legal loopholes. Of course, it will have safeguards against you and me asking it how to reduce our taxes, but rich people will have their own models with no restrictions, because if we've learned anything, it's that there are two sets of laws -- one for the rich and one for everyone else.
... and by that time they will be printing and growing 3D organs, so they won't need to.
Why would they print and grow organs when they can harvest them practically for free?
And who's to say they can't take cells from your organs and grow new ones in the future? Then you won't need anti-rejection pills or suffer any side effects from them, because your body won't reject the organ.
It is also possible that you will be used as an organ-growing vat yourself, and if you are poor you might even have to "pay" for medical care by agreeing to grow more than one organ as compensation.
 
  • Like
Reactions: Order 66
Look at any art AI has generated...
Sorry bro, but that's not art.
...each piece is unique, something no human has made before.
It can't be unique because its model is based on what humans created so far.
I pulled up an AI img generator, typed in "draw me something" and it drew an interesting line drawing of a woman's face - something unique that has not been made by humans.
Line drawings have existed since the first humans lived in caves.
- Discover millions of new materials as it has far better pattern matching and prediction than human - https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
Discovering a million new materials isn't noteworthy at all. What would've been noteworthy is if it had found just one new material capable of superconductivity at room temperature.

Quantity != Quality.
 
Oh my sweet summer child...

Lawmakers can't keep up with the speed at which megacorps try to skirt the rules. Can't wait for the "AI" to be used for finding legal loopholes. Of course, it will have safeguards against you and me asking it how to reduce our taxes, but rich people will have their own models with no restrictions, because if we've learned anything, it's that there are two sets of laws -- one for the rich and one for everyone else.

Why would they print and grow organs when they can harvest them practically for free?

It is also possible that you will be used as an organ-growing vat yourself, and if you are poor you might even have to "pay" for medical care by agreeing to grow more than one organ as compensation.
Well, that's a dark, bleak outlook and pretty far out there, man, but here's to a brighter one.

There would be no point in harvesting organs when you can just have one made. What would be the point if everything you harvest gets rejected by whoever's body it's being transplanted into, when you can simply get a new one grown from your own cells? Human donors -- or, in your dark world, victims -- will be a thing of the olden days.
 
  • Like
Reactions: bit_user
I feel this valid discussion of AI merits and capabilities is distracting from the main point.

Huang is advising against studying programming, and against choosing it as a career path, because, in his view, AI is smart enough to make the next generation of programmers obsolete.

Yes, there's boilerplate code, and it is mostly dull; there's a lot of monkey work in programming. It's not always creative or clever. And yes, there are copy-paste programmers who don't actually know how to program.
So yes, AI can help filter out the bad solutions, but Huang said nothing of designers and did not mention managers at all. Code is kind of easy to write even for beginners IF they have an excellent design document to follow, and if they have good communication with the team when errors are spotted.
Does AI improve design? Does AI even get a say? Who decides? And did he talk about that?
And as someone above mentioned, there are customers, and they damn well never know what they actually need. It takes a lot more than just programmers.
But Huang only mentioned programmers!

If AI can make piecemeal improvements to existing processes, be they in programming, design, corporate structure, or anything else, then I think we should all welcome these changes, and also examine the results to see if they are good or need rolling back. And AI can assist with that as well.

Advising students not to take up one of the more useful skills of our times is borderline idiocy, and it's not just what he said but the way he said it. That's the main point: he thinks it's time to abandon programming because AI can take over.
One supposedly clever and certainly successful man thinks it's the perfect time for creative and talented people to NOT develop their talent, because the new tool they will use is better than any they've ever had in the past. Even though it would make them more productive than ever?

Is that a paradigm shift? Or did John Romero -- sorry, Jensen Huang -- just say Daikatana -- sorry, AI -- is going to make you his bi***?
 
  • Like
Reactions: CmdrShepard
Well, soon AI will take over GPU design and Nvidia will go bankrupt, right?

Might as well stop living; I am sure AI can take that over too and live better than we do, for cheaper and using less energy and space than we do.

There are simply a lot of things where we humans have to keep the leading role, and can use AI to help us improve, including checking code.
But we should not let AI take the leading role, and make us "their" helpers.
 
  • Like
Reactions: CmdrShepard

That is called the singularity, and it is the main worry of all those AI doomsayers. But a generalized AI that is as intelligent as or more intelligent than a human isn't necessarily where current AI ends up.

The fundamental differences between a large language model basically iterating to the best answer and a neural network with enough complexity to actually have thought/understanding are too great.

If we put all our effort into the short-term gains of LLMs, then we will end up with better tools, not AI smart enough to design itself or create true innovation.
 
  • Like
Reactions: mitch074
Sorry bro, but that's not art.

It can't be unique because its model is based on what humans created so far.

Line drawings have existed since the first humans lived in caves.
By your logic, the only original artist was the first caveman who drew a line drawing.

What people need to understand about deep learning is that it is very similar to how humans learn.
Likely the biggest difference between humans and DL at this point is that DL lacks the genes humans start out with as a basis for building an understanding of the world.
 
  • Like
Reactions: bit_user
I wonder when it was that we, as a society, stopped seeing knowing how to use a hammer properly as valuable.

There are plenty of things which have changed, some no longer even in modern history books, because of the advancement of technology in many fields. It used to be engineers calculating all aspects of buildings and structures (and of cars and other such machines), but now most of that is done in CAD or calculated via software, which is a primitive type of AI (keeping the term very generous, as they use it) if you want to think about it that way. Think about medicine and how computers are now assisting diagnosis and even issuing full diagnoses with just a human signature due to liability. Yes, some diagnoses are just signed by the doctor but basically "calculated" by a machine. Modern nurses need to know way, way less to be effective at their jobs as well (this is good, not bad). And a long, long "etc."

As I said before: Jensen's words are not without merit. It is the way things will go, whether we like it or not. You either adapt or die. That is nature, because mother nature is the most ruthless of mothers.

Regards.
 
I mean that's not hard and no rant required...

Look at any art AI has generated: each piece is unique, something no human has made before.
I pulled up an AI img generator, typed in "draw me something" and it drew an interesting line drawing of a woman's face - something unique that has not been made by humans.

Let's try to go deeper. Something AI can do that humans cannot?
- Discover millions of new materials as it has far better pattern matching and prediction than human - https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
- Understand protein folding patterns unlocking huge leaps in medicine - https://en.wikipedia.org/wiki/AlphaFold
- Discovering new ways to play Go - https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol
"Michael Redmond (9p) noted that AlphaGo's 19th stone (move 37) was "creative" and "unique". It was a move that no human would've ever made."

It's not unique though. If you did a fine-tune yourself with images, you would see that it is not unique; it is just taking parts of other images and mashing them together.

How's your 401k going to fare when the Govs give me some of it?
 
Well, that's a dark, bleak outlook and pretty far out there, man, but here's to a brighter one.

There would be no point in harvesting organs when you can just have one made. What would be the point if everything you harvest gets rejected by whoever's body it's being transplanted into, when you can simply get a new one grown from your own cells? Human donors -- or, in your dark world, victims -- will be a thing of the olden days.

But he is right: if you think that everyone will have equal AI, then you are wrong. Only if open source is as good or better might we be on an even playing field.

What is already happening today is that enterprise companies are getting cheaper deals with MS to create AI for their company, using the company's private data to help speed up daily tasks in each department.

At some point they won't need any grads or junior workers and will just hire people with 5-20+ years of experience. So all the kids today will find it difficult to land the jobs of tomorrow, unless schools and "scam" universities create courses related to AI, but even then those will become obsolete.

Governments won't make money, they will lose money: more unemployed people = less tax. What I can see happening is that they will tax the big companies even more and force them to disclose the AI that they are using -- call it an "AI tax".

This is already becoming law in the EU: if you plan to sell a product that uses AI, you have to go through the EU rules and regulations and apply.

AI won't make a utopia; it will make the rich richer and the poor poorer. You will have AGI that costs $50,000 a month and AGI that costs you $20 a month. Which one is better? I would say the one that costs $50,000 a month.

Currently AI is a helpful assistant; I use it every day just like most people. But when it gets to the point that you only need to speak a few words and it can do nearly everything, it will be a "speed race": how fast can your AI produce the final product? While the $50,000 one can do it in a day, the $20 one can do it in 6 months. GG, game over for the poor person who can't afford the 50k.

So many people think companies are there to help them... wrong. Every single company is there to make money. They don't give two craps about you or your dog.
 
  • Like
Reactions: CmdrShepard
I don't think everyone will be equal; I was just posing the question, because nobody knows what the end result will be until the dust has settled.

To me these are really exciting times in technology, but I can understand that the impact of AI can affect people differently.
 
  • Like
Reactions: bit_user
It will be like that movie In Time, where time is the only valuable commodity: you just work away and be happy so you can live. AI will take the best jobs while we lower-end humans work in some dirty mine or do whatever task the AI has not thought of yet.

Meanwhile Nvidia will be bankrupt and Jensen living off his own wealth, because the average Joe can't afford the $10 subscription to Netflix or Prime, and all the datacenters will be shut down, because in the end it's the average Joe who pays them the most.

Without consumers there is no product, there is no service, and most of all there are no sales.

These giant companies can't live by selling to each other, because at least one of them needs the average Joe. Who's going to buy all the iPhones, TVs, and cars and get into debt for that company? It won't be the rich.
 
Well, at least that is one of the happier hypotheses I have read about AI, so there is that to consider.
 

Yes, you will be happy with what we give you.

You do as you are told: you don't need to learn to code, you don't need to learn to drive, you don't need to learn to farm. It will all be done for you; all you need to do today is follow what Jensen or Musk or Zuck or Google tell you, OK?

And most of all, be happy without really being happy. Put on a face.
 
  • Like
Reactions: vijosef
How often do people around the world do as they are told? Or should I rephrase it: how often don't people around the world do as they are told?

I'm guessing every day, in the real world.

What makes you believe that will change? Humans will be humans and carry on doing great things, dumb things, disagreeing, agreeing, etc., like always. Nothing will or can change that, other than the mass extinction of the human race, I suppose.
 
It's not unique though. If you did a fine-tune yourself with images, you would see that it is not unique; it is just taking parts of other images and mashing them together.

How's your 401k going to fare when the Govs give me some of it?
By your logic only the first caveman to draw was original 😉

And my 401k is fine, though I have other investments. Only a fool keeps their savings solely in stocks and bonds.
 
  • Like
Reactions: bit_user
I suppose in a perfect world AI would do everything, making the rich even richer and everyone else financially equal?

Would you care if the richest 1% had 10,000 or more times the money they have now, if you didn't have to worry about your well-being, eating, heating, rent, travel, personal items, etc.?

Probably not.

Most people in the first world, and increasingly worldwide, have not had overwhelming, imminent worries for a long time. It hasn't perfected the world or made everyone content.

What people crave is significance and things worth striving for. Taking responsibility for serious things is a much better path to meaning than "being cared for".

The idea that everyone else will be "financially equal" sounds pretty sinister to me (and why are the rich excluded from the equalizing?). Who is doing the equalizing? "Who decides?" is usually the important question.
 
If you ask a GPT model "Who painted Mona Lisa?" it will correctly answer "Leonardo Da Vinci".

It is able to do so because the words "Mona Lisa" and the words "Leonardo Da Vinci", which are internally represented as strings of numeric tokens, are found right next to each other in the dataset used to train said model.

Since there's more than one occurrence of this in the whole dataset, it carries high significance, just like with humans (many people said the same thing, so it is taken as ground truth), and as such the model has memorized that the tokens 44, 6863, 29656 (which represent "Mona Lisa") are extremely likely to be followed by the tokens 73004, 21106, 14569, 97866 (which represent "Leonardo Da Vinci").
Um, no. You're not describing how GPT actually solves that problem, but rather how you imagine it's solved.
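For what it's worth, anyone can poke at the actual mechanism. Here's a minimal sketch, assuming the Hugging Face transformers package and the small public GPT-2 checkpoint (not whatever OpenAI actually runs), that prints the model's next-token distribution for a prompt and shows where "temperature" enters the picture:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The Mona Lisa was painted by"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # scores for the *next* token only

temperature = 0.7                              # <1 sharpens the distribution, >1 flattens it
probs = torch.softmax(logits / temperature, dim=-1)

top = torch.topk(probs, 5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(tok_id)])!r}: {p.item():.3f}")
```

The token IDs and probabilities won't match the numbers quoted above (every tokenizer is different), but the "pick a likely next token" part is real; what's debatable is whether that amounts to mere memorization.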

Now comes the fun part -- if you ask a question it has never seen before, it will turn it into tokens and try to answer based on next-token probability. If the temperature is high and you have a good set of guardrails injected into the context window, you can force the model to give a truthful "I don't know" answer. Otherwise, you will get what's called a "hallucination", which will most likely be factually incorrect, possibly harmful...
Cool story. Now tell us: how does it compose sonnets, haikus, or limericks about whatever subject you like? To perform that trick, it needs to separately understand the form in which it's being asked to answer as well as the subject matter. Styles, and their distinction from substance, are things it learns implicitly.

As an aside, here's a fun fact about parrots: they can reproduce things they've heard people say, but always in the voice of the original speaker. Because they don't understand the words and don't know which aspects of the speech are the words and which are the voice, they can't repeat one person's words in the voice of another. Here's where the "parrot" analogy people like to apply to LLMs really falls short.

How do you do that with code for say... Mars Rover?
Robotics is actually one of the more straightforward tasks to teach AI, since you can just use sophisticated simulators. Tesla's self-driving algorithm is one giant neural network, from what I've heard. I'm not actually a big fan of that approach, but I digress.

Yes, but those programmers can create novel output while the model can't.
I don't believe it. What really is creativity, at some level, other than filtered noise? I expect it should be possible to train a GAN to produce more novel output, with a suitably designed and trained adversarial network. It's a harder problem than simply imitating human works, but probably not insurmountable with today's technology.
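For anyone who hasn't seen one up close, "filtered noise" is almost literally what a GAN is: a generator maps random noise to samples, and an adversarial network filters out the unconvincing ones. A toy PyTorch sketch, with 2-D points standing in for artwork and nothing remotely production-scale:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator: noise in, "sample" out. Discriminator: sample in, real/fake score out.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for "human works": noisy points on a circle.
    angles = torch.rand(n, 1) * 6.2831853
    return torch.cat([angles.cos(), angles.sin()], dim=1) + 0.05 * torch.randn(n, 2)

for step in range(2000):
    # Discriminator step: push real samples toward 1, generated samples toward 0.
    real = real_batch()
    fake = G(torch.randn(64, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: produce samples the discriminator mistakes for real.
    fake = G(torch.randn(64, latent_dim))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The open question in the post above is how you'd design the adversary to reward novelty rather than pure imitation; this sketch only does the imitation part.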

Who is going to provide that data if in a couple of decades nobody knows how to write code anymore?
In some ways, code is an easier problem because it must adhere to well-defined rules and achieve specific objectives. These are all quantifiable things.

Furthermore, there are already billions of lines of code out there, much of it in high-quality, open source software. Much of it even has automated tests! That's a vast reserve of training data, right there.
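That quantifiability is also why crude generate-and-test loops work for code at all. A deliberately tiny sketch, with hard-coded candidate strings standing in for whatever a code model might emit:

```python
# Candidate implementations, as a code model might produce them.
candidates = [
    "def add(a, b): return a - b",   # wrong
    "def add(a, b): return a + b",   # right
    "def add(a, b): return 2 * a",   # wrong
]

def passes_tests(src: str) -> bool:
    ns = {}
    try:
        exec(src, ns)  # never exec untrusted model output outside a sandbox
        return ns["add"](2, 3) == 5 and ns["add"](-1, 1) == 0
    except Exception:
        return False

survivors = [src for src in candidates if passes_tests(src)]
print(survivors)  # only the correct implementation remains
```

A pass/fail signal from a test suite is a far crisper filter than anything you can compute for prose or art.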
 
I don't believe for one second that this is one of those "bubble burst" techs (as some seem to believe).
Most of the AI boom is being bankrolled by big corporations and venture capital firms. They're investing in AI tech, but will expect to start seeing a return on that investment within a timeframe of months to a couple of years. When they run out of patience, if the RoI isn't there, they'll pull the plug on those projects, and those initial investments in AI hardware and/or services won't be followed up.

I think that's the most likely scenario driving us towards a downturn, if not an outright bubble bursting. I'm not saying all or even most AI projects will fail to provide a RoI, but probably enough to make a serious dent in the market for AI hardware, software, and services. Not enough to shut down further developments in the technology, but perhaps enough to trigger waves of stock selloffs and downsizing.
 
  • Like
Reactions: The TrippyHippie
Well, soon AI will take over GPU design and Nvidia will go bankrupt, right?
I'm pretty sure that's what Sam Altman is hoping/planning. Otherwise, what chance could he possibly have of competing with Nvidia, who seems almost untouchable? It's hard enough for people who've had way more of a head start than he does - he'd be starting from zero!
 
50 years ago, to 'program', you needed to know assembly.
Then, C, FORTRAN, COBOL
Then, C++
and on and on.

Now, JavaScript, Python, Ruby...
Even lingering VB/VBA within Office.

You can grab prebuilt libraries to do a lot of the low-level API things.
A lot of things are abstracted for you.

But with all of those, over the years, you still need to know 'how to program'.
You still need a valid design to meet whatever functionality is needed.

This statement is purporting to say "Don't bother, just rely on the AI".

Yeah, right.
Note that if you really want the most efficient code possible, you'll either want to give hints to your compiler or go straight to assembly - you can see that in performance-critical pieces of code like video encoders, where saving 5 instructions in a 17-line piece of code means you reduce your encoding time by 70%. Same with AI (duh) and rendering - extremely repetitive tasks.
It would be interesting to see AI applied to the world of optimization, though - right there, an AI looking to reduce the number of steps to reach a known result could make for frighteningly competent compilers and transpilers.
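That already has a name -- superoptimization -- and it is a natural fit for learned search precisely because the goal is quantifiable: same observable result, fewer steps. A toy brute-force sketch (real tools search over machine instructions, not Python lambdas):

```python
from itertools import product

# Primitive "instructions" and the reference behavior we want to reproduce.
OPS = {
    "inc":    lambda x: x + 1,
    "dbl":    lambda x: x * 2,
    "square": lambda x: x * x,
}

def reference(x):
    return (x + 1) * 2

tests = range(-5, 6)

def shortest_program(max_len=3):
    # Try all op sequences, shortest first, and return the first one that
    # matches the reference function on every test input.
    for length in range(1, max_len + 1):
        for seq in product(OPS, repeat=length):
            def run(x, seq=seq):
                for name in seq:
                    x = OPS[name](x)
                return x
            if all(run(x) == reference(x) for x in tests):
                return seq
    return None

print(shortest_program())  # ('inc', 'dbl') -- two steps reproduce (x + 1) * 2
```

An ML-guided version would replace the exhaustive enumeration with a learned policy that proposes promising sequences, which is roughly where "frighteningly competent" could come from.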
 
That is called the singularity, and it is the main worry of all those AI doomsayers. But a generalized AI that is as intelligent as or more intelligent than a human isn't necessarily where current AI ends up.

The fundamental differences between a large language model basically iterating to the best answer and a neural network with enough complexity to actually have thought/understanding are too great.
Doom from AI doesn't depend on General AI. If you give a GPT-class AI the ability to use the web, it could cause a great deal of harm, either in the hands of a naive or especially a malicious user.

In fact, such an AI agent or concierge could be quite a powerful tool for someone too mentally unbalanced to do much harm by their own hand, or a minor tyrant incapable of attracting a very large following of very competent underlings.

There are plenty of other interesting scenarios of rogue AIs run amok. If you consider how much harm a virus can cause, in spite of being a tiny bundle of lipoproteins and nucleic acid, you can perhaps appreciate that an entity of relatively low complexity can wreak a disproportionate amount of havoc.

If we put all our effort into the short-term gains of LLMs, then we will end up with better tools, not AI smart enough to design itself or create true innovation.
Even if true (which I doubt), I think that's a false sense of security.
 
Such a colorful discussion.
Every coder: Jensen is evil.
Every MBA: Jensen's gonna make me money.
Every person that hates current iPhone apps: yes, please. In the end, coding is about math and language syntax. LLMs excel at those two topics: math based on data science, language based on labeling, and so forth.

Ever forked a GitHub repo? That's what Copilot AI is doing, to an extent.
 
How often do people around the world do as they are told? Or should I rephrase it: how often don't people around the world do as they are told?

I'm guessing every day, in the real world.

What makes you believe that will change? Humans will be humans and carry on doing great things, dumb things, disagreeing, agreeing, etc., like always. Nothing will or can change that, other than the mass extinction of the human race, I suppose.
The illusion of free will won't go away. The skilled manipulator knows how to plant ideas in people's heads or shift their opinions without them realizing they're being manipulated.

Then you try to tell them they're being manipulated and they don't believe you. Worse, they probably even feel insulted, because they believe the ideas they've had and the conclusions they've drawn were all their own!

With the powerful tools of manipulation AI can provide, this will only get way worse.
 
  • Like
Reactions: The TrippyHippie
Most of the AI boom is being bankrolled by big corporations and venture capital firms. They're investing in AI tech, but will expect to start seeing a return on that investment within a timeframe of months to a couple of years. When they run out of patience, if the RoI isn't there, they'll pull the plug on those projects, and those initial investments in AI hardware and/or services won't be followed up.

I think that's the most likely scenario driving us towards a downturn, if not an outright bubble bursting. I'm not saying all or even most AI projects will fail to provide a RoI, but probably enough to make a serious dent in the market for AI hardware, software, and services. Not enough to shut down further developments in the technology, but perhaps enough to trigger waves of stock selloffs and downsizing.
What do you mean by bubble burst?

I believe AI is here to stay. I can also believe some, if not many, startup bubbles will fail to live up to the hype, but in general I believe AI is here to stay and it's not going to be a flash-in-the-pan technology.

I am not saying I am 100% correct, but I do believe 100% that AI is here to stay and this is just the very start.
 
  • Like
Reactions: bit_user