News Jensen Huang advises against learning to code - leave it up to AI

[General rant]
I mean it just seems like you're ranting in general.

This is what _everyone_ needs to realize. There is going to be a _huge_ shift coming to what work _is_ in the next 10 years.

Sadly, no politicians understand this, and none are doing anything to help people understand what's coming.

Artists thought they were safe: nope
Musicians thought they were safe: nope
Drivers (truck/transit/taxi) think their jobs are safe: nope (it is coming 😉)
Engineers thought they were safe: nope

Finding what you like to do outside of work is my best advice.

For anyone wanting to learn more, watch Ray Kurzweil on YouTube for a basis from which to understand what's coming.
 
Question for the forum:

Should kids learn cursive writing in school? It has been at least 30 years since it was "necessary" to learn, and I bet at least half the posters here were forced to.

I think it's a very similar question.
 
DL AI has a doubling time of roughly 6 months.
I think it's a mistake to extrapolate future hardware performance from previous increases. We know process node improvements are slowing down and transistor costs have basically flatlined (if not even increased, in the latest nodes). The B100 might end up being only like 50% or 70% faster than the H100 that launched 2 years earlier. That's pretty far off a doubling time of 6 months.

Perhaps improvements in algorithms will provide further speedups, but I think we're still not looking at a doubling time of 6 months.
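To put rough numbers on it: a 6-month doubling sustained over the ~2 years between the H100 and B100 launches would imply 2^(24/6) = 16x the performance. A 50-70% gain works out to a doubling time closer to 2.5-3 years.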

For anyone wanting to learn more, watch Ray Kurzweil on YouTube for a basis from which to understand what's coming.
If the "singularity" happens just as incremental computational improvements are becoming exponentially more difficult and costly, the two could sort of cancel out and we're left with a more linear pace of change.
 
I mean it just seems like you're ranting in general.

This is what _everyone_ needs to realize. There is going to be a _huge_ shift coming to what work _is_ in the next 10 years.

Sadly, no politicians understand this, and none are doing anything to help people understand what's coming.

Artists thought they were safe: nope
Musicians thought they were safe: nope
Drivers (truck/transit/taxi) think their jobs are safe: nope (it is coming 😉)
Engineers thought they were safe: nope

Finding what you like to do outside of work is my best advice.

For anyone wanting to learn more, watch Ray Kurzweil on YouTube for a basis from which to understand what's coming.
None of your points change the fact that you must KNOW in order to understand. The shift is irrelevant. Also, some of your predictions about which jobs are insecure are wildly uninformed. Take this from someone in the know.
 
Should kids learn cursive writing in school? It has been at least 30 years since it was "necessary" to learn, and I bet at least half the posters here were forced to.
I think it's a little hard to answer, in the abstract. The main case against it should be the opportunity cost (i.e. what other skills might that same time be better spent teaching them?).

I'm no expert on education, but I think there are two main reasons to teach kids a subject. One is for cognitive development, while the other is to have a specific skill or knowledge we expect them to need. Some research has come out regarding the cognitive benefits of having fluency in at least two languages, and I wouldn't be terribly surprised if knowing two writing systems had similar (though probably far lesser) benefits.

In practical terms, cursive seems pretty useless beyond signing your name, if you're not reading old documents. Even then, I always had a bit of trouble reading older relatives' letters. I'm sure a decent OCR system would be way better at it than I ever was.

That's probably all I should say on the matter.
 
I think it's a little hard to answer, in the abstract. The main case against it should be the opportunity cost (i.e. what other skills might that same time be better spent teaching them?).

I'm no expert on education, but I think there are two main reasons to teach kids a subject. One is for cognitive development, while the other is to have a specific skill or knowledge we expect them to need. Some research has come out regarding the cognitive benefits of having fluency in at least two languages, and I wouldn't be terribly surprised if knowing two writing systems had similar (though probably far lesser) benefits.

In practical terms, cursive seems pretty useless beyond signing your name, if you're not reading old documents. Even then, I always had a bit of trouble reading older relatives' letters. I'm sure a decent OCR system would be way better at it than I ever was.

That's probably all I should say on the matter.

I think you covered it.

I have always been very skeptical of the often-emphasized idea that knowing two languages results in more cognitive development. There are many confounding variables that are probably difficult to unknot, it's a convenient result that has to be the preferred outcome for our academic class, and the fact is that most social science is garbage.

I do agree that most of the people I know who are actually fluent in two languages are pretty smart. OTOH, I can only really judge one side of that competence, and I find that most of the people who are really fully fluent in English are pretty smart.

The first sign of "we don't need people" will be bad...

Don't worry, the programmers will straighten that kink out of the AI... oh wait.
 
I have always been very skeptical of the often-emphasized idea that knowing two languages results in more cognitive development.
Perhaps you haven't seen the latest research on the subject? The latest thinking I've heard on it suggests it's due to increased frontal lobe development (specifically, better executive function from having to decide which language to use in each circumstance). I'm pretty sure there should be other ways to achieve that, but it seems childhood fluency in multiple languages is a reliable one.

There are many confounding variables that are probably difficult to unknot, it's a convenient result that has to be the preferred outcome for our academic class, and the fact is that most social science is garbage.
Kinda funny how you, yourself, don't employ science to argue that a theory is unscientific.
 
50 years ago, to 'program', you needed to know assembly.
Then, C, FORTRAN, COBOL
Then, C++
and on and on.

Now, JavaScript, Python, Ruby...
Even lingering VB/VBA within Office.

You can grab prebuilt libraries to do a lot of the low level API things.
A lot of things are abstracted for you.
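As a purely illustrative sketch (hypothetical file name, nothing from a real project), here's what that abstraction looks like in modern C++ - reading a file and sorting its lines in a handful of standard-library calls, where each of those lines once meant hand-written buffering, memory management, and a sort routine:

Code:
#include <algorithm>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::ifstream in("names.txt");             // buffered file I/O, handled for you
    std::vector<std::string> lines;
    for (std::string line; std::getline(in, line); )
        lines.push_back(line);                 // dynamic memory, handled for you
    std::sort(lines.begin(), lines.end());     // the sort you once wrote yourself
    for (const auto& l : lines)
        std::cout << l << '\n';                // formatted output, handled for you
}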

But with all of those over the years, you still need to know 'how to program'.
You still needed a valid design, to meet whatever functionality was needed.

This statement essentially says, "Don't bother, just rely on the AI."

Yeah, right.
 
Perhaps you haven't seen the latest research on the subject? The latest thinking I've heard on it suggests it's due to increased frontal lobe development (specifically, better executive function from having to decide which language to use in each circumstance). I'm pretty sure there should be other ways to achieve that, but it seems childhood fluency in multiple languages is a reliable one.


Kinda funny how you, yourself, don't employ science to argue that a theory is unscientific.

Absolutely true that I have not been keeping up to date on the latest research in this area! I can contribute, as a personally validated data point, my feeling that four years of high school German, which I took pretty seriously and definitely took a lot of time, seems like it was wasted effort with no return. Other courses I took at the same time (History, Literature, Engineering, Genetics, Civics) significantly changed the way I view the world and gave me knowledge and skills that I still reference and grew from. Of course, maybe the German enlarged my lobes without my being conscious of it.

Concerning the lack of a scientific basis for my snap opinion on the value of multiple languages:

1. Confounding variables are a scientific concern.

2. Researcher bias is also usually considered important enough that disclaimers or bias statements are sometimes required by journals. (I admit my anti-authority and anti-intellectual biases may influence my sloppy mudslinging.)

3. The disaster that is scientific repeatability in Social Science ought to be the first thing anyone thinks of when they read any research in Social Science. It is a disgrace of epic proportions and the subject of much less discussion than it deserves.

If social science were an airline, they would have to ground all the universities while the entire staff was subjected to shame, encouraged to engage in soul-searching, and forced into remedial training in the foundations of the Scientific Method, ethics, and statistics. Then they would be forced to create objective standards for future research, with accountability built in from the ground up.
 
I can contribute, as a personally validated data point, my feeling that four years of high school German, which I took pretty seriously and definitely took a lot of time, seems like it was wasted effort with no return.
Based on the explanation I've heard, I think the benefits occur either at the point of fluency or from immersion, where the brain has to decide which language to use in which context. If using a language involves a concerted mental effort, then perhaps you don't get that specific benefit. Just to be clear: this isn't making them "smarter", but better at things like impulse-control that correlate well with success in later life (see also: marshmallow experiment).

I've also heard that learning a second language builds up some form of cognitive reserve, which can later serve to help stave off the symptoms of Alzheimer's. I'm much more foggy on the specifics of this aspect.

Concerning the lack of a scientific basis for my snap opinion on the value of multiple languages:

1. Confounding variables are a scientific concern.
A study can be conducted in such a way as to control for these. If it shows no quantifiable benefit vs. control, then you have your answer.

3. The disaster that is scientific repeatability in Social Science
I thought we were talking about brain science.
 
but better at things like impulse-control that correlate well with success in later life (see also: marshmallow experiment)
I thought we were talking about brain science.

That famous experiment, which popularizes so well (it included cute kid videos), is one which doesn't hold up well.


I thought we were talking about brain science.

Some research indicates that Cognitive science does fare better than other Social Sciences in reproducibility.


Summary page from above

https://mfr.osf.io/render?url=https://osf.io/jq7v6/?action=download&mode=render
 
This is laughable.

1. If you don't know enough about "programming", you can't tell when the AI is wrong.
(this is already a problem, even without "AI")

2. Code syntax is only one part of delivering and supporting an application. Requirements, design, testing... all are equally or more important than the basic syntax.
Regardless, he is using his spiel to inflate the power of his AI; i.e., he is trying to find ways to replace humans with tech powered by Nvidia-driven AI.

I wonder if the billionaires still do not understand that fewer people with jobs means fewer people buying their luxury products.
 
I mean it just seems like you're ranting in general.

This is what _everyone_ needs to realize. There is going to be a _huge_ shift coming to what work _is_ in the next 10 years.

Sadly, no politicians understand this, and none are doing anything to help people understand what's coming.

Artists thought they were safe: nope
Musicians thought they were safe: nope
Drivers (truck/transit/taxi) think their jobs are safe: nope (it is coming 😉)
Engineers thought they were safe: nope

Finding what you like to do outside of work is my best advice.

For anyone wanting to learn more, watch Ray Kurzweil on YouTube for a basis from which to understand what's coming.
You're headed towards the same disappointment as the people who believed we'd live on Mars and own flying cars by the 1980s.

Any exponential growth curve you've ever seen, every one of them, has failed to materialize. Most recently the COVID infection rate, but you could also see it in computing, and to a lesser extent in overpopulation and global warming.

You are not the first or the last to worship an ideal. In order to do so you must sacrifice criticism and self-criticism, and with that, reason dies. Now you're stuck with your ideal, which now MUST come true. But you cannot will the future into being.
Take, for example, the music and musicians you mention.
If you are fully honest, you will acknowledge that "push-button" music-making software has existed for almost 20 years. You will also admit that it is mainly used by producers and musicians to quickly prototype templates or ideas. It can even throw up nearly complete songs - up to 90% complete, by my estimate. Anyone using this to cheaply produce music would likely be laughed out of the industry by others who have access to the same tools but also have talent AND knowledge - or at least a desire to learn how to use the tools.
This holds true even though the music industry is already full of such "cheap" music. It has been for at least as long as radio has been around, and it goes back further than that.

Now, are music industry jobs at risk because of AI? Yes, they are.

The jobs at risk are the ones not worth having, and the jobs done by people not worth paying. The creative, the talented, and the hard-working are at no risk. The ones who make money for the company/label are desirable, and AI will keep them desirable.

By avoiding learning to program, programmers would put themselves at risk of being pruned for not knowing how to program.

If you or anyone else still wants to believe that using auto-complete and automating trivial tasks is in ANY way going to produce fantastic results, then you have already followed Huang's advice and made yourselves ignorant and redundant.
AI can copy, mix, and mimic human results. For top-class AI output, it would need top-class input somewhere.

The exponential curve is an illusion. AI is a useful tool but this revolution nonsense is a dishonest dream.
 
Any exponential growth curve you've ever seen, every one of them, has failed to materialize.
This is some pretty bad misinformation you're peddling. Plenty of examples exist, including disease infection rates, but also Moore's Law. Yes, it's finally coming to the end of the road, but it had been going strong for many decades. Every trend will come to an end, eventually. That doesn't mean they were false - the definition of a trend doesn't stipulate that it continues without bound.

to a lesser extent overpopulation and global warming.
Population models, like just about any other real-world example you'd care to produce, don't grow without bounds. There are downward pressures, like resource scarcity, disease, or predators - if we're talking about animals. It's ignorant - if not downright disingenuous - to inject global warming into that claim. It does have positive feedbacks, but the main thing driving it is carbon emissions, which are not intrinsically exponential.
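For what it's worth, the standard textbook way to capture those downward pressures is the logistic model, dP/dt = r·P·(1 − P/K): growth looks exponential while the population P is small, then flattens out as it approaches the carrying capacity K.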

The exponential curve is an illusion.
It sounds like you never studied differential equations.

Mathematical models of trends have abundant examples validating their use, but that's not to say they can't be misused or over-extrapolated. Most examples I see of people foolishly extrapolating trends beyond reason come from businesses, and specifically from PR-oriented messaging - not science or engineering. Just because some fool misapplies a model doesn't mean the model is completely invalid. Every mathematical model has limitations, and you should be aware of them when attempting to apply it.
 
It sounds like you never studied differential equations.

Mathematical models of trends have abundant examples validating their use, but that's not to say they can't be misused or over-extrapolated. Most examples I see of people foolishly extrapolating trends beyond reason come from businesses, and specifically from PR-oriented messaging - not science or engineering. Just because some fool misapplies a model doesn't mean the model is completely invalid. Every mathematical model has limitations, and you should be aware of them when attempting to apply it.
Far from disagreeing with you, I fully agree. Huang is himself extrapolating a trend, but not foolishly. He is trying to raise billions, because that's what it will take to take the next step.
Also, I agree because I don't believe I am incapable of making foolish posts. Criticism is necessary.

But I wasn't directly responding to Huang's comments, but rather to the post flatly stating that the AI revolution is here and that too many of us are too blind to see we're becoming obsolete.

The model isn't invalid, but the misuse of the model is just as invalid here as it was with other predictions.
I specified the expansion of humanity onto other planets (by the 1980s, no less), and flying cars. There are other examples.

But one thing even honest models cannot do is account for unforeseeable errors or discoveries.

For example, the reason there are no permanent structures on the Moon is the fine and jagged, sharp dust particles that get everywhere and spoil equipment and space suits. Also, there is nothing at all there to justify the costs.
Before this was discovered, it could not be modelled.

This is more or less the reason Huang's original statement is so stupid. The claim that AI will make programming an obsolete skill not only goes against logic, it is based on a very selective "model" or interpretation. It lacks honesty, but Huang is a businessman, and a successful one, so that's a plus for him.
He's out raising money.

Other believers also use selective and incomplete reasoning. I have tried to demonstrate that with the music industry example.
 
None of your points change the fact that you must KNOW in order to understand. The shift is irrelevant. Also, some of your predictions about which jobs are insecure are wildly uninformed. Take this from someone in the know.
What must anyone "know"?

What Huang is getting at here is that AI will be superior to humans at writing code. I've met plenty of programmers who write worse code than current AIs. But this is beside the point.

When any layman can ask the computer "write me a program that does 'x'" and get it in minutes, if not seconds, it's easy to run it and see if it works. If it doesn't, tell the computer what's wrong, and in another few seconds the new/modified code is ready.

Even if it takes a few iterations, it's still far faster in the end than the vast majority of programmers.
 
The model isn't invalid, but the misuse of the model is just as invalid here as it was with other predictions.
I specified the expansion of humanity onto other planets (by the 1980s, no less), and flying cars. There are other examples.
It's very different to make those sorts of predictions than to analyze and model behaviors in an existing system. In order to get flying cars or interplanetary travel, many conditions need to be satisfied that span quite disparate domains: technology, economics, and society. Predictions about flying cars were especially laughable, since the people making them didn't even know what technologies or innovations would be necessary for flying cars to become practical and accepted, much less how much the final product would cost or what sorts of infrastructure requirements it would have. This really isn't extrapolation, but actually just sci-fi fantasy.

I also doubt anyone predicting interplanetary human space travel happening by the 1980s had sufficient expertise to make such predictions. First, the Moon is only about 239,000 miles away. At closest approach, Mars is about 50 million miles from Earth, and that happens only once every 2.1 years. Consider that the longest continuous human space flight has only been about 1.2 years, and that was within the safety of the Earth's magnetic field. Not only would you need to massively exceed that record, but everything the crew needs would have to travel with them. That, plus the rocket for their return trip, is an awful lot of stuff to ship there, which is why there's interest in building essentially a gas station on the Moon.

For example, the reason there are no permanent structures on the Moon is the fine and jagged, sharp dust particles that get everywhere and spoil equipment and space suits. Also, there is nothing at all there to justify the costs.
First, there are credible plans to 3D print structures from molten regolith. Second, the moon has water ice.

This is more or less the reason Huang's original statement is so stupid. The claim that AI will make programming an obsolete skill not only goes against logic, it is based on a very selective "model" or interpretation. It lacks honesty, but Huang is a businessman, and a successful one, so that's a plus for him.
I think most of us are dubious of his timescale, at least. I expect we're all aware of how self-serving his statements are.
 
they ask for code to be written and verify it's what they want
How can they verify it's what they want if they don't know what it's supposed to do, or how? Do you ask another AI? And how do you cross-check that?
LLMs are good at doing stuff. They simply suck at explaining why they do it.
Granted, 95% of coding is repetitive boilerplate crap. But how do you get to that last 5%? I've seen enough graduates who "know" how to program yet are unable to code a "while" loop. An AI may know about a "while" loop, or a "switch", but how many will be able to determine that a task is buggy because it's using an out-of-scope pointer that more often than not holds a proper value but sometimes holds the wrong one? Or why it works in debug mode but not in release mode, because one slightly faster thread triggers a race condition? Or why it works on one CPU and not another, for that very same reason?
If your user doesn't know about those, there's no way he'll fix it. And current AIs are UNABLE to solve that problem.
Yeah, YOU know about those. But once you've retired, who will know about them then?
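A contrived C++ sketch of the out-of-scope-pointer scenario described above (purely illustrative, not from any real codebase):

Code:
#include <iostream>
#include <vector>

int main() {
    std::vector<int> readings = {10, 20, 30};
    const int* first = &readings[0];   // pointer into the vector's current storage

    readings.push_back(40);            // may reallocate and move the elements,
                                       // leaving 'first' dangling

    std::cout << *first << '\n';       // undefined behavior: often still prints 10,
                                       // sometimes garbage, sometimes a crash
}

Because the freed memory is frequently not reused right away, this kind of bug often appears to work - exactly the "mostly has a proper value, sometimes has the wrong one" behavior that a user who can't read the code has no hope of diagnosing.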
 