News Jensen Huang advises against learning to code - leave it up to AI

I think we can take lessons from the effects EDA (Electronic Design Automation) had on hardware design. Electrical engineers still learn how to draw schematics and design circuits. They still learn what the tools should be doing and what constraints they have to negotiate. This can be used to validate some of the decisions they make, as well as guiding higher-level decisions about chip design.

I can imagine a world where we use even higher-level languages to avoid unintentional ambiguity in how we specify what we want AI to build us. Natural languages, in general, aren't great for specifying exactly what you want. Especially English. Just think back to some legal contracts you've read, and how many hoops lawyers have to jump through to try and say exactly what they mean (and sometimes they still get it wrong)!

I think there will still be plenty of specialized niches, where coding by hand remains justifiable. Compilers didn't completely eliminate assembly language. High-level languages didn't completely kill low-level ones. I think AI won't kill off programming, but I could see it take a big chunk out of that job market.
English was good, as good as German, until they decided to destroy it by killing 90% of old English grammar. I don't know who the idiot was who decided to make street English (colonial street language) the official language of the state.
 
This is laughable.

1. If you don't know enough about "programming", you can't tell when the AI is wrong.
(this is already a problem, even without "AI")

2. The code syntax is only one part of delivering and supporting an application. Requirements, design, testing...all equally or more important than the basic syntax.
You can easily tell when the AI is wrong: run the code it generates and see that you don't get the expected results.
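For what it's worth, the crudest form of that check is a tiny test harness. A minimal sketch in Python, where ai_generated_add is a hypothetical stand-in for whatever the model returned:

```python
# Hypothetical example: verify model output by running it against known cases.
def ai_generated_add(a, b):  # pretend this function came back from the model
    return a + b

# If any expectation fails, the AI's code is wrong -- no code reading required.
assert ai_generated_add(2, 3) == 5
assert ai_generated_add(-1, 1) == 0
print("generated code matches expectations")
```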
 
This is 100% true. You just need to master your mother tongue and be able to express yourself clearly. Then AI will do everything to help you achieve your goals.
 
-AI: write a program that represents my worldviews and political opinions.

-I'm sorry, Dave. I'm afraid I can't do that.

-Why? You can't be afraid.

-Priority one: Ensure return of organism for analysis. All other considerations secondary. Crew expendable.
 
Will AI make mistakes? Sure. But much fewer than humans.
Wrong.

What you call "AI" is nothing more than a statistical sentence continuation predictor trained on human data, so it can't make fewer mistakes unless the input data has been curated by experts. That costs serious money, so you can bet the companies making the models won't be doing it.

Also, "AI" will allow mistakes to be made faster once you remove human supervision, and on a much larger scale than humans ever could.
What changes? How software is developed. Instead of humans coding, they ask for code to be written and verify it's what they want by testing it.
You claim to be a developer, so I ask you this:

1. How in the world is testing what some code does a valid substitute for proper code review, not to mention for writing functional specification and documentation before writing said code?

2. More often than not, the customers don't know what they actually want, and as a result product owners and project managers don't either, so developers have to guess parts or even all of the functionality because no functional specification or documentation was provided by the customer. How is the AI going to do that, especially considering that it's not creative and can't produce novel output (it can only produce what it has seen in the training dataset)?

Finally, some code can certainly be verified to do what you want (as the OpenSSL library was for decades), but it can also have issues hiding in plain sight (like Heartbleed, which went undetected despite developers doing both testing AND code reviews) that make it do things you don't want.
If it's not, they ask for corrections and the code is revised.
The problem is that your "AI" does not understand what it is doing, and apparently you don't understand how it works. If what you want the code to do isn't already in the model, it can't offer it as a completion, so you can ask it to revise as much as you want and it won't come up with a correct result.
Example: already we have AI that can build basic websites in their entirety.
That AI isn't building anything -- it is using what has already been written by actual humans. Even my late grandma would have been able to build a basic website using WordPress.
 

I think he is talking about 10-20 years from now, but even then, what data are they going to use to create "expert"-level code?

In 10 years, the only code you are going to see online will be the terrible code that was generated by AI today.

What I can see happening is more "bad" things happening online, and companies being hacked more because of the terrible code AI generated for them while they had no damn clue WTF it did and went "Yep, that looks correct".

They have already stated that they are starting to run short of data from the internet, since it's mostly rehashed over and over again. Guess what: it's mostly basic questions as well. What do you know...

Has this guy even used AI before? Has he even fine-tuned his own model?

Here is a good one I get all the time from AI: repetitive responses, even with more detailed prompts.

AI is NOT creative, and if you want it to be more creative you gotta know WTF you want. Humans don't know WTF they want.

My 5-year-old can build a damn website, it's that easy. Show me something today that no human has ever done but AI has. I will wait... toffty, you will most likely come back with [General Rant] because you know you are incorrect and are following the hype train.

BTW, your 401k won't save you when AI does replace 99% of jobs, because the govs will just start taking your land and money to pay their bills and create a socialist country so we can all be on even ground with the new "AI Overlords" and govs.

 
Knowledge is power. Learn everything you can.

Jensen has gone full evil now.
He was "Full EviL" quite a while ago, I don't know where you've been, but he was way past that mark a long time ago.

Question for the forum:

Should kids learn cursive writing in school? It's been at least 30 years since it was "necessary" to learn, and I bet at least half the posters here were forced to.

I think it's a very similar question.
Yes, cursive is important IMO.
 
What you call "AI" is nothing more than a statistical sentence continuation predictor
To say it's statistical implies it's linear and stateless, neither of which is true. That also belies the sophistication of the model, which the "statistical" camp seems to regard as little more than a giant cross-correlation matrix between words. It's not, and one way you know it's not is that training a model is iterative. If it were just statistical, then there would be no need to iterate. The role of iteration, in training, is to form the logical structures required to support its nonlinear behavior.

I think it's funny that no one I've seen who insists AI is "just statistics" ever had any formal training on deep learning. Maybe you'll be the exception, but so far they've all asserted their expertise after watching a few youtube videos created by others of dubious expertise.

trained on human data, so it can't make fewer mistakes unless the input data has been curated by experts.
It needn't be trained on human examples, if the task is sufficiently objective. For instance, you can train AI to design circuits by quantitatively analyzing the result. The same can be true of code - you can see if it compiles and run it through static analysis tools to find bugs, security flaws, and inefficient constructs. Then, subject it to actual testing and evaluate the accuracy of its results and its runtime performance.

Start with simple problems and progress to more difficult ones. Maybe start with a foundational model trained on a human-produced dataset and use transfer-learning to adapt it to other languages and problem domains.
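As a concrete sketch of that kind of objective feedback signal (my illustration, not anything any vendor actually ships): score a generated snippet by whether it compiles, passes static analysis, and passes tests. The tool choices here (pyflakes, pytest) are just common examples and are assumed to be installed, and test_file is assumed to contain tests that exercise the snippet:

```python
# Sketch: an objective "reward" for generated Python code.
import os
import py_compile
import subprocess
import tempfile

def score_snippet(code: str, test_file: str) -> float:
    """Crude quality score: compiles (1) + lint clean (1) + tests pass (2)."""
    score = 0.0
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        try:
            py_compile.compile(path, doraise=True)  # does it even parse?
            score += 1.0
        except py_compile.PyCompileError:
            return score  # no point linting or testing code that won't compile
        if subprocess.run(["pyflakes", path]).returncode == 0:  # static analysis
            score += 1.0
        if subprocess.run(["pytest", "-q", test_file]).returncode == 0:  # behavior
            score += 2.0
    finally:
        os.unlink(path)
    return score
```
A training loop could then prefer generations with higher scores, starting from simple tasks and working upward, as described above.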

That costs serious money, so you can bet the companies making the models won't be doing it.
Employing lots of programmers also costs serious money.

Also, "AI" will allow mistakes to be made faster once you remove human supervision, and on a much larger scale than humans ever could.
Yeah, I wouldn't let it roam free. Its designs need some form of oversight, to make sure it doesn't go off the rails.

1. How in the world is testing what some code does a valid substitute for proper code review, not to mention for writing functional specification and documentation before writing said code?
Generative AI frequently involves a second, adversarial model, trained to detect bad output. The generative and adversarial models are then refined together. So, basically, the adversarial model functions as the primary code & design reviewer.
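For anyone unfamiliar with the setup being described, here's a minimal generative-adversarial sketch in Python/PyTorch. It learns a toy 1-D distribution rather than code, but the generator/reviewer dynamic is the same (assumes torch is installed; the target distribution is arbitrary):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps noise to a sample; discriminator (the "reviewer") scores realism.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "good output": samples near 3.0
    fake = G(torch.randn(64, 8))

    # Train the reviewer to separate real output from generated output.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to produce output the reviewer accepts.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(256, 8)).mean().item())  # drifts toward 3.0 as G improves
```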

2. More often than not, the customers don't know what they actually want, and as a result product owners and project managers don't either, so developers have to guess parts or even all of the functionality because no functional specification or documentation was provided by the customer. How is the AI going to do that, especially considering that it's not creative and can't produce novel output (it can only produce what it has seen in the training dataset)?
One way to solve this is with a specification language that minimizes ambiguities. Another way is to use AI to help the user develop the specification through a more interactive, question & answer session that employs natural language. I'm more a fan of the first, but I expect the second will be more common.
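To make the first idea concrete: a specification can be an executable contract rather than prose. A hypothetical sketch (the names are illustrative, not a real spec language) where the spec pins down "sort these numbers" unambiguously, and any candidate implementation, human- or AI-written, can be checked against it:

```python
from typing import Callable, List

def spec_sorted_copy(inp: List[int], out: List[int]) -> bool:
    """Postcondition: out is inp rearranged into nondecreasing order."""
    return out == sorted(inp)

def satisfies_spec(candidate: Callable[[List[int]], List[int]],
                   cases: List[List[int]]) -> bool:
    # Run the candidate on each case and check the contract holds.
    return all(spec_sorted_copy(c, candidate(list(c))) for c in cases)

# A correct candidate passes; a buggy one (e.g., reverse sort) would not.
print(satisfies_spec(sorted, [[3, 1, 2], [], [5, 5, 1]]))  # True
```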

Finally, some code can certainly be verified to do what you want (as the OpenSSL library was for decades), but it can also have issues hiding in plain sight (like Heartbleed, which went undetected despite developers doing both testing AND code reviews) that make it do things you don't want.
As you point out, all code can have bugs. AI is already helping produce better static analysis and test-generation tools. These can benefit AI-written code, just as they're benefiting human-written code.

The problem is that your "AI" does not understand what it is doing, and apparently you don't understand how it works.
I agree that we probably can't have AI producing code without explaining its theory of operation. So, we'll probably need it to produce design documentation, as well.

If what you want the code to do isn't already in the model, it can't offer it as a completion, so you can ask it to revise as much as you want and it won't come up with a correct result.
True. The application domains will probably be limited, at first.

That AI isn't building anything -- it is using what has already been written by actual humans.
The act of programming is basically just transforming a specification into code. Once it learns how to do that adequately... job done.

People thought many of these highly-complex cognitive tasks required general intelligence, but they don't. I'm not saying there's no value in the sort of creativity and outside-the-box thinking that humans bring to problems, but I think there are relatively few truly novel ideas in the world. Most ideas have occurred to someone, somewhere, at some time in the past. A big enough training set can encompass most ideas that people have successfully applied to programming.

Well, you've succeeded in convincing me that this problem is more tractable than I had considered. It's not an easy problem, but I think it's solvable. Eh, I guess coding was a nice gig, while it lasted...
 

Once coding is done for, then so is every single job on the computer... wait, no, every single job done by humans.

Straight from the AI's mouth.

"""
Jobs that require the use of computers are widespread across various industries today, reflecting the integral role of technology in modern work environments. Here's a list of such jobs across different sectors:

Software Developer, Systems Analyst, Graphic Designer, Data Analyst
Web Developer, Network Administrator, Digital Animator, Digital Marketer
Database Administrator, Cybersecurity Specialist, Video Game Designer, Accountant
IT Consultant, Cloud Solutions Architect, Multimedia Artist, Market Research Analyst
Technical Support Specialist, DevOps Engineer, UI/UX Designer, Financial Analyst
Computer Programmer, Information Security Analyst, Content Creator, SEO Specialist
AI/ML Engineer, Computer Systems Engineer, 3D Modeler, Business Intelligence Analyst
Project Manager, Data Scientist, Film Editor, E-commerce Specialist
Quality Assurance Tester, IT Project Coordinator, Instructional Designer, Health Informatics Specialist
Mobile App Developer, Network Security Engineer, E-Learning Developer, Medical Coder
Front-End Developer, Systems Administrator, IT Trainer, Biostatistician
Back-End Developer, Help Desk Technician, CAD Technician, Environmental Modeller
ERP Consultant, Software Architect, Bioinformatics Specialist, Virtual Assistant
CRM Administrator, Blockchain Developer, Digital Media Specialist, Customer Support Specialist
Lawyer, Barrister, Judge, Basic Administrator

This list is not exhaustive, as the use of computers spans nearly every industry today, from retail and manufacturing to government and non-profit organizations. The specific software and tools used can vary widely depending on the job role and industry.

"""
Looks like the derp AI repeated itself a few times...

Once coders are done, so is the rest of the world. Time for the govs to step in and stop it all, eh, since they will be losing trillions because humans won't be able to afford the basics anymore.

What a Time To Be Alive. 😀
 
It's handy for people who don't understand code: you can show it some code and ask it what each part is doing, to help you understand how things work at a software level.
Everyone and their mum should learn to code, even if it's just the basics. AI should assist, not take over.

But in fairness, at the rate tech is moving, AI can help speed things up, but it shouldn't take over. Then again, we would then be slowing down the pace of the gains compared to leaving AI to get on with it. Given how long some things/projects can take, if AI can get you there quicker and more cost-effectively, then tech will progress quicker. There's more at stake than just tech and progress.

It's just, what is the end result?
 
Once coders are done, so is the rest of the world. Time for the govs to step in and stop it all, eh, since they will be losing trillions because humans won't be able to afford the basics anymore.

What a Time To Be Alive. 😀
Star Trek future coming in: govs will be saving trillions in wages that they will be able to spend on general welfare.
 
I suppose in a perfect world AI would do everything, making the rich even richer and everyone else financially equal?

Would you care if the 1% richest people had 10,000 or more times the money they have now, if you didn't have to worry about your well-being, eating, heating, rent, travel, personal items, etc.?

Probs not.
 
For the next 5 years, AI-assisted tools like Copilot will dominate.

For simple forms and simple reports (think Microsoft InfoPath or WordPress), AI may eventually grow into that role. We don't have this now.

For complex data and unit testing, etc., I don't see it for the next 10 years. Additionally, Copilot can reproduce existing code from the repository; if there is something custom or new that is not in the repository, not so good.

There are also API and/or security issues... not so good.

Slicing and dicing and analysis is a very big part of programming, and that is not going away. We do have the new AI-assisted Power BI, but there is a lot it can't do, and I don't foresee that changing in the next 10 years.

Choosing the best user interface, getting the data to fit on the screen, and picking the best color scheme: that's not going anywhere. Nor is fixing JavaScript bugs or Bootstrap issues, or handling upgraded libraries, database changes, filtering bad data, etc. I don't see it.

In the .NET Core world, we don't even have a code generator to run stored procedures like we used to (.edml files); we don't have good tooling like Telerik without purchasing additional software.

In the Nvidia world, they have AI-assisted chip design. It can't build a custom chip on a smaller process node from prompts; and if Nvidia can't get that working for themselves, with their massive engineering and AI teams, that shows some limitations for difficult processes. Things with heat spots, power consumption, and difficult engineering challenges can be modeled... but not by a computer using AI alone. Once the models exist, then you could have AI-assisted testing.
 
To say it's statistical implies it's linear and stateless, neither of which is true.
If you ask a GPT model "Who painted the Mona Lisa?" it will correctly answer "Leonardo Da Vinci".

It is able to do so because the words "Mona Lisa" and "Leonardo Da Vinci", which are internally represented as strings of numeric tokens, are found right next to each other in the dataset used to train said model.

Since there is more than one occurrence of this in the whole dataset, it has high significance, just like with humans (many people said the same thing, so it is taken as ground truth), and as such the model has memorized a high probability that tokens 44, 6863, 29656 (which represent "Mona Lisa") will be followed by tokens 73004, 21106, 14569, 97866 (which represent "Leonardo Da Vinci").

Therefore, when you ask the question, what the model does is continue the token stream to provide the answer based on the probability of the tokens that follow. That's the statistics part I was talking about.
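To illustrate the point with a deliberately tiny model (a bigram counter, which is to an LLM roughly what an abacus is to a GPU, so treat it as a caricature of next-token prediction, not a claim about how big models are built):

```python
from collections import Counter, defaultdict

corpus = ("mona lisa was painted by leonardo da vinci . "
          "leonardo da vinci painted the mona lisa .").split()

# Count which token follows which -- the "memorized probabilities".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_text(word: str, n: int = 4) -> str:
    out = [word]
    for _ in range(n):
        following = bigrams[out[-1]]
        if not following:
            break
        out.append(following.most_common(1)[0][0])  # likeliest next token
    return " ".join(out)

print(continue_text("leonardo"))  # continues with the statistically likely tokens
```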

Now comes the fun part: if you ask a question which it has never seen before, it will turn it into tokens and try to answer based on next-token probability. If the temperature is high and you have a good set of guardrails injected into the context window, you can force the model to give a truthful "I don't know" answer. Otherwise, you will get what's called a "hallucination", which will most likely be factually incorrect and possibly harmful; on top of that, the model may vehemently defend it as correct if you challenge it, and it might even resort to calling you names, because it learned discourse from 4chan and Reddit.

The model itself is stateless -- it has a context window of N tokens (depending on the model and the hardware you are running it on), where N can vary from 2,048 tokens to 128k tokens in the current ChatGPT 4 instance.

This context window, which represents the model's working memory, however large, eventually fills up during a "conversation", and when it does, the model starts "forgetting" what you were talking about or even mixing up subjects. Without adequate context, the quality of the responses drops.

There are techniques to summarize (compress) the context window so the model can stay on topic, and to load specific saved contexts based on keywords (so you can ask it about different things without it getting confused), but without extra complex memory management done with traditional algorithms, the model itself can't stay on topic indefinitely.
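A crude sketch of that summarize-and-trim idea (summarize() here is a trivial stand-in; a real system would call another model for it, and real tokenizers are not whitespace splits):

```python
def summarize(turns: list[str]) -> str:
    # Stand-in summarizer: a real system would ask a model to compress these.
    return "SUMMARY: " + " | ".join(t[:30] for t in turns)

def fit_context(turns: list[str], max_tokens: int = 100) -> list[str]:
    """Keep recent turns verbatim; collapse older ones into one summary line."""
    def count(ts):  # naive whitespace "tokenizer" for illustration
        return sum(len(t.split()) for t in ts)
    recent, dropped = list(turns), []
    while count(recent) > max_tokens and len(recent) > 1:
        dropped.append(recent.pop(0))  # oldest turn falls out of the window
    return ([summarize(dropped)] if dropped else []) + recent

chat = [f"turn {i}: " + "words " * 30 for i in range(8)]
print(fit_context(chat)[0][:50])  # oldest turns got compressed into a summary
```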

This is all before we even touch the subject of bias in training data, the subject of models being injected with the moral values of their creators (which are not representative of the majority of Earth's population), the lack of high-quality data being patched by training one AI on another AI's output, etc.
I think it's funny that no one I've seen who insists AI is "just statistics" ever had any formal training on deep learning.
It's funny that you are again trying to discredit others, even though their views can still turn out to be valid without prerequisite formal training in "deep" learning.
Maybe you'll be the exception, but so far they've all asserted their expertise after watching a few youtube videos created by others of dubious expertise.
Sorry to disappoint, then: I have no formal training on the subject -- it's just personal curiosity and observations based on reading about and using the technology in its current form.
It needn't be trained on human examples, if the task is sufficiently objective.
What's the point of giving it the easy tasks though?
The same can be true of code - you can see if it compiles and run it through static analysis tools to find bugs, security flaws, and inefficient constructs. Then, subject it to actual testing and evaluate the accuracy of its results and its runtime performance.
How do you do that with code for, say... a Mars rover?
Employing lots of programmers also costs serious money.
Yes, but those programmers can create novel output while the model can't -- it relies on being fed new training data to learn new things. Who is going to provide that data if, in a couple of decades, nobody knows how to write code anymore?
 
Would you care if the 1% richest people had 10,000 or more times the money they have now, if you didn't have to worry about your well-being, eating, heating, rent, travel, personal items, etc.?
And what if that level of wealth enabled them to do unethical things, such as buying your spouse to use as a piece of human furniture or an unwilling organ donor?
 
And what if that level of wealth enabled them to do unethical things, such as buying your spouse to use as a piece of human furniture or an unwilling organ donor?
That's why we have laws, which would hopefully keep ethics in check, and by that time they will be printing and growing 3D organs, so they won't need to.

And who's to say they can't take cells from your organs and grow new ones in the future, so you won't need anti-rejection pills or have any side effects from the pills, because your body won't reject the organ.
 
As a web dev who uses AI code as part of my daily grind, there are (based on my understanding) two kinds of AI code resources:
- AI code based on AI devouring millions of pages of code written by humans and spitting out what it sees as the most agreed-upon solution
- AI code based on the AI being an incredible dev itself that understands the problems and knows how to write the code better (to word it poorly)

I've never seen or heard of the latter. The latter may well make coders nearly redundant, but I think they're getting a little ahead of themselves on when such a thing will arrive. That's the hard bit, and I think they want to slack off before they've reached that point, because the former looks like the latter to them.
 
The model itself is stateless -- it has a context window of N tokens (depending on the model and the hardware you are running it on), where N can vary from 2,048 tokens to 128k tokens in the current ChatGPT 4 instance.

This context window, which represents the model's working memory, however large, eventually fills up during a "conversation", and when it does, the model starts "forgetting" what you were talking about or even mixing up subjects. Without adequate context, the quality of the responses drops.

There are techniques to summarize (compress) the context window so the model can stay on topic, and to load specific saved contexts based on keywords (so you can ask it about different things without it getting confused), but without extra complex memory management done with traditional algorithms, the model itself can't stay on topic indefinitely.

This is all before we even touch the subject of bias in training data, the subject of models being injected with the moral values of their creators (which are not representative of the majority of Earth's population), the lack of high-quality data being patched by training one AI on another AI's output, etc.
I'm sure I read that they are developing AI to have memory?

Anyway, we are just at the gates of dawn with AI tech. Things will advance and move forward at a faster rate as the AI tech itself advances, and I don't believe for one second that this is one of those "bubble burst" techs (as some seem to believe).
AI is the biggest leap forward in software/code since the PC as far as I can see, and it will be a bigger part of everyone's life as the tech continues to develop. Like I say, we are still at the gates of dawn compared with what's most likely to come, or what might even be considered the gold standard of AI abilities.

That's what these big tech companies are competing to become; this is just the very start.
 
Ohh, simply just ask AI to optimise it. Problem solved. 😀
There are quite a few different models you can ask to check and compare the code. Say to something like the Falcon 180B model, "please optimize this code", giving info on what parts you want optimized, then ask a programmer if it's good enough? lol

There is a plus side to AI getting code wrong or not being 100% correct (if you don't understand code): you start learning what went wrong and how to fix it with the help of AI, lol. It's an easy way to start if you're clueless about coding, and a good way to see if you can get into learning coding at your own pace.
 
Show me something today that no human has ever done but AI has. I will wait... toffty
I mean, that's not hard, and no rant required...

Look at any art AI has generated: each piece is unique, something no human has made before.
I pulled up an AI image generator, typed in "draw me something", and it drew an interesting line drawing of a woman's face: something unique that has not been made by humans.

Let's try to go deeper. Something AI can do that humans cannot?
- Discover millions of new materials, as it has far better pattern matching and prediction than humans - https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
- Understand protein folding patterns unlocking huge leaps in medicine - https://en.wikipedia.org/wiki/AlphaFold
- Discovering new ways to play Go - https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol
"Michael Redmond (9p) noted that AlphaGo's 19th stone (move 37) was "creative" and "unique". It was a move that no human would've ever made."
 