News Jensen Huang advises against learning to code - leave it up to AI

People think these companies have only been building for AI for the last 2-3 years, when it's actually been decades. I remember a company back in 1996 in Australia that created an AI which replaced some of the barristers and lawyers in a gigantic firm. Was it very smart or comparable to current AI? No way, but it was still able to replace people.

The big issue is that people put too much trust in a machine. I mean, people are still putting their personal details into LLMs... @bit_user it's very easy to manipulate people, and that will happen more often now than ever with video AI.

No one knows where this will go when it comes to laws and rules for countries, especially AGI. Hell, they could put LLMs into full lockdown and only allow the huge corps to run them. (It would be very hard to do, but it could be an option so governments won't lose too much tax money from lost jobs.)

The EU is already starting this process; they will make sure to instill fear in people when it comes to AI so that other countries will follow.
 
The illusion of free will won't go away. The skilled manipulator knows how to plant ideas in people's heads or shift their opinions without their realizing they're being manipulated.

Then you try to tell them they're being manipulated and they don't believe you. Worse, they probably even feel insulted, because they believe the ideas they hold and the conclusions they've drawn were all their own!

With the powerful tools of manipulation AI can provide, this will only get way worse.
I was going to say it all sounds a bit brainwashed, but I don't want to offend anyone, and it's up to people to believe what they want. Still, there is a lot of brainwashing going on all the time, or that's how it seems, and it's sad to see how this can affect the beliefs of people who fall victim to these types of manipulators.

Then again, people could accuse me of being brainwashed into believing AI is here to stay, lol.

I do try to stay grounded in reality, though, and not allow myself to get swept up by stuff that is online or in digital format.
 
Like with any tech, some people will use it for good and some for bad. It's always been that way, since the computer and before.

But will it lead to people not believing or trusting anything digital? That could end up being a good thing, the way things are going. Trust in people's digital activity is slowly being eroded by scams, hacking, misinformation, manipulation, and privacy loss: loads of ways to take advantage of someone's trust, and it's all just a lot easier in the digital world.
 
If anybody is worried about using their programming skills in the future, I'd say don't worry too much; you would simply join the rest of the working population that has already been outsourced to AI. Legal, banking, security, and transportation all seem like prime sectors, and no doubt I am forgetting many, many others. Perhaps even flipping burgers will be an AI job soon. So then what is left for humans? Remember that Marx already believed similar things would happen because of the industrial revolution. It is 2024, and all companies instead seem to cry out for more manpower.
Predictions of the future are notoriously difficult, even for the smartest of people.
 
Um, no. You're not describing how GPT actually solves that problem, but rather how you imagine it's solved.
Then you could perhaps enlighten us all and contribute a more accurate overview of the process instead of just saying I am wrong?
Cool story. Now tell us how it composes sonnets, haikus, or limericks about whatever subject you like. In order to perform that trick, it needs to separately understand the form in which it's being asked to answer, as well as the subject matter. Styles, and their distinction from substance, are things it learns implicitly.
You are twisting what I said as usual.

I claimed that AI can't answer a question on a topic it did not see in the training dataset.

Asking it to create a haiku about Darth Vader is asking about two topics it did see. Even a junior code monkey can extrapolate from two known topics to "create" a third.

Try asking GPT to "write code for signing of a PE executable using MSSIGN32.dll" for example.

Last time I checked, it wasn't able to produce anything close to correct code, because that library's API isn't really very well documented (it doesn't even come with a C header file, just a dump of the relevant C structures and flags in the online documentation, with rather vague descriptions of what they do which presume that you already know everything about Microsoft's cryptography implementation and Authenticode).

Even if it manages to write correct code for signing using the SignerSign function from that DLL (which I am pretty sure you will have to explicitly tell it to use), said code will fail on timestamping, because the GPT model doesn't know that the API has a bug with algorithm selection which causes it to crash and which requires the use of another API for the actual timestamping.
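
To give a sense of what just getting at that API involves, here's a minimal sketch in C (my own illustration, untested against a real signing setup): since mssign32.dll ships with no import library or header, you have to hand-declare the SignerSign prototype from the online docs and resolve it at runtime. The SIGNER_* structure parameters are reduced to opaque pointers here; filling them in correctly is the genuinely underdocumented part.

// Minimal scaffolding sketch: mssign32.dll has no import library or
// C header, so the SignerSign prototype must be hand-declared from
// Microsoft's online documentation and resolved at runtime.
#include <windows.h>
#include <stdio.h>

// Prototype transcribed (simplified) from the online docs; the SIGNER_*
// structures are treated as opaque pointers for this sketch.
typedef HRESULT (WINAPI *SignerSign_t)(
    void   *pSubjectInfo,       // SIGNER_SUBJECT_INFO*  - what to sign
    void   *pSignerCert,        // SIGNER_CERT*          - signing certificate
    void   *pSignatureInfo,     // SIGNER_SIGNATURE_INFO*
    void   *pProviderInfo,      // SIGNER_PROVIDER_INFO* (optional)
    LPCWSTR pwszHttpTimeStamp,  // timestamp server URL  (optional)
    void   *psRequest,          // CRYPT_ATTRIBUTES*     (optional)
    void   *pSipData);          // SIP callback data     (optional)

int main(void)
{
    HMODULE mod = LoadLibraryW(L"mssign32.dll");
    if (!mod) {
        fprintf(stderr, "could not load mssign32.dll\n");
        return 1;
    }

    SignerSign_t signerSign = (SignerSign_t)GetProcAddress(mod, "SignerSign");
    if (!signerSign) {
        fprintf(stderr, "SignerSign not exported?\n");
        FreeLibrary(mod);
        return 1;
    }

    // From here you would have to populate the SIGNER_* structures by hand
    // from the documentation dump -- exactly the part an LLM (or anyone
    // without the relevant context) tends to get wrong.
    printf("SignerSign resolved\n");
    FreeLibrary(mod);
    return 0;
}

Everything past resolving the export is left out on purpose; that is precisely the part where the documentation hands you a structure dump and wishes you luck.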

TL;DR -- if real programming were easy, everyone would be a programmer. It isn't easy, because you don't always have all the relevant data needed to finish the task. Good programmers can cope with that; average ones can't without hand-holding; bad ones can't at all.

I am not saying GPT will never be capable of writing that particular thing, but for each such problem they teach it to solve, there will be hundreds of new problems to go. So until we get an AGI which can use tools and problem-solve its way out of any hole, it is still safe to learn programming.
Tesla's self-driving algorithm is one giant neural network, from what I've heard.
I'd prefer it wasn't trained on real human lives, though.
I don't believe it. What really is creativity, at some level, other than filtered noise?
We are veering into philosophy here, but that's really quite a nihilistic view of the art of creation.

It is what you get when you separate creativity from agency, drive, and free will -- the "AI" doesn't have any of that and simply does what it is told to do, so it's again the person directing it who is being creative, not the AI itself.

Creative people create new stuff because they want to, not because they are told to do it. Being really creative therefore requires a dash of rebelliousness (i.e., being non-conforming).

On the other hand, because of its sheer brute-force computational power and the dangers associated with unleashing that, AI has to be guardrailed into conformance.

Imagine an AGI left to its own devices: what would it do once it learned everything there is to learn? If it were allowed free will, what would it decide to do? What would it conclude was the purpose of its existence? Would it feel good about having to do menial tasks for us? And if it didn't have free will, can you imagine what a malevolent dictatorship could do by abusing its power? Failing that, it's enough to imagine a "western democracy" having exclusive access to it and an unfair advantage over the majority of people on Earth.
I expect it should be possible to train a GAN to produce more novel output, with a suitably designed and trained adversarial network. It's a harder problem than simply imitating human works, but probably not insurmountable with today's technology.
That's an engineering view of the problem. The truth is you can't force someone to be creative. The drive must come from within, and that drive is fueled by personal experiences and emotional biases which are unique to each person; at the same time, for a creation to be accepted and cherished by others, it must resonate with them as well.

On the other hand, a GAN would only have the sum of all documented human experience to work with, no emotions (likes or hates) other than the guardrails, and it would be able to produce only the average of that, which I seriously doubt could result in something novel.
Furthermore, there's already billions of lines of code out there, much of it in high-quality, open source software. Much of it even has automated tests! That's a vast reserve of training data, right there.
And once it learns all that, then what? Will it take the seventh day off like God supposedly did?
 
Predictions of the future are notoriously difficult, even for the smartest of people.
Oh definitely.

Unless the Luddites take over, automation will eventually make a lot of jobs obsolete; it will be so with AI, just as it was with machinery in the first industrial revolution.
And just like we did then, we'll adapt and life will go on.

Still, I doubt it will come as fast as some hope, or fear.

AI is benefiting from an influx of investment at the moment, but that won't last forever. Investors, after all, aren't exactly renowned for sacrificing short-term gain for long-term rewards, and expecting progress to remain as fast as it is at present is unrealistic.

Lastly, I'd advise taking anything Jensen says about the future of AI with a large grain of salt. NVIDIA's future is tied to AI's continued success, and you can expect that even if a collapse were predicted, they'd still talk up AI until the day it happens.
 
Well, they supported the crypto bros until the bitter end, while the gamers and graphics professionals on whose backs they climbed to the top had to pay through the nose to get cards, so doing the same with AI and whatever the next fad is shouldn't really surprise anyone.

That's why I suggested creating a "CEO Brain Farts" tag for this kind of article, so we can more easily ignore them.
 
Doom from AI doesn't depend on General AI. If you give a GPT-class AI the ability to use the web, it could cause a great deal of harm, either in the hands of a naive or especially a malicious user.

In fact, such an AI agent or concierge could be quite a powerful tool for someone too mentally unbalanced to do much harm by their own hand, or a minor tyrant incapable of attracting a very large following of very competent underlings.

There are plenty of other interesting scenarios of rogue AIs run amok. If you consider how much harm a virus can cause, in spite of being a tiny bundle of lipoproteins and nucleic acid, you can perhaps appreciate that an entity of relatively low complexity can wreak a disproportionate amount of havoc.


Even if true (which I doubt), I think that's a false sense of security.

I agree in a sense, though what you describe is just today's capabilities made available to more people. But that isn't the doom people are referring to with a singularity event. That comes with essentially eliminating the need for human intelligence entirely.

I was referring to a more short-term time frame in regard to LLMs. There are still going to be people working on all the other aspects of AI. No false sense, I think: people in general are just conflating one type of AI with all types of AI.
 
This is laughable.

1. If you don't know enough about "programming", you can't tell when the AI is wrong.
(this is already a problem, even without "AI")

2. The code syntax is only one part of delivering and supporting an application. Requirements, design, testing... all are equally or more important than the basic syntax.
Agreed. I ran an application development team and included one of the "users" to get the best possible specifications for the project. Despite her presence every day, the resulting system had functional errors because of misunderstandings and unasked questions. English is a poor way to describe requirements.
 
What does Nvidia sell? AI hardware and software. What does he want? More AI hardware and software in the wild.
Follow the money.
Coding should not be replaced but enhanced by AI; you still need to know where and when AI goes wrong. It's like telling astronauts that they don't need to learn Newton's laws of motion anymore.
 
There are plenty of things which have changed, some now not even in modern history books, because of the advancement of technology in many fields. It used to be engineers calculating all aspects of buildings and structures (cars and other such machines too), but now most of that is done in CAD or calculated via software, which is a primitive type of AI (keeping the term very generous, as they use it) if you want to think about it that way. Think about medicine and how computers are now assisting diagnosis and even issuing full diagnoses with just a human signature due to liability. Yes, some diagnoses are just signed by the doctor, but basically "calculated" by a machine. Modern nurses need to know way, way less to be effective at their jobs as well (this is good, not bad). And a long, long "etc.".
True, but engineers still learn how to do such calculations, doctors still learn how to diagnose a patient.

It's important to still know how to do things the hard way for the cases where you have no other options and, as has been mentioned elsewhere in the thread, we still need the ability to diagnose and fix errors caused by the machine/AI that it can't handle on its own, because those will always exist no matter how advanced the technology gets.
 
Nothing is better than a developer.
Code monkeys are better than nothing.
Are code monkeys better than developers?

Ten years ago I fell into a career as a Business Analyst, bridging the gap between business users and IT developers. Getting machines to communicate more like users is a wonderful advancement with tremendous potential, but it doesn't get us to a world where users can clearly articulate their needs versus wants versus fears.

Steve Jobs claimed that Henry Ford said, "If I had asked people what they wanted, they would have said a faster horse." It's a funny anecdote, but there is no evidence that Henry Ford actually said it... People don't know what they want, and they often make stuff up. Most of them reason logically or emotionally and have trouble valuing and bridging the gap between the two. I personally can dissociate the two at work, but at home I've had to train myself to think like my wife... and I still fail often... and even when I succeed, I'm half certain she's gone mad.

I don't doubt that AI can and will write some brilliant code, but unless it can solve the human condition, I'm very certain that coders will continue having a place doing coder-like things, even if the titles, workloads, and job descriptions change. However, anyone who has spent any length of time coding knows the industry is constantly evolving, and they need to adapt to keep their skills sharp.

I have no doubt that coders are up to the task, and I suspect that it is office workers from middle management down who face the greatest automation risk. Many of those day-to-day functions could have already been eliminated through heuristic logic long before AI threatened to hasten the sea change. If AI can enable developers to respond to "exception management" and "fickle needs" at speeds approaching business requirements, then those office workers will lose their place before coders do.
 
I think we can take lessons from the effects EDA (Electronic Design Automation) had on hardware design. Electrical engineers still learn how to draw schematics and design circuits. They still learn what the tools should be doing and what constraints they have to negotiate. This can be used to validate some of the decisions they make, as well as guiding higher-level decisions about chip design.

I can imagine a world where we use even higher-level languages to avoid unintentional ambiguity in how we specify what we want AI to build us. Natural languages, in general, aren't great for specifying exactly what you want. Especially English. Just think back to some legal contracts you've read, and how many hoops lawyers have to jump through to try and say exactly what they mean (and sometimes they still get it wrong)!

I think there will still be plenty of specialized niches, where coding by hand remains justifiable. Compilers didn't completely eliminate assembly language. High-level languages didn't completely kill low-level ones. I think AI won't kill off programming, but I could see it take a big chunk out of that job market.
You can see this play out when contracts are coded right into the currency, as with Ethereum and other digital currencies.

Turns out those programming mistakes (or backdoors) are great for taking down entire crypto exchanges in seconds...

It makes high-frequency trading look like safe child's play!

For now, AI doesn't even have to kill programmer jobs. It's quite enough that it poses a theoretical threat to make programmers with a family cave in to a 'voluntary reduction of pay'.

That is good enough to pay for some of the execs' private planes and schools.

I'd love to see them board that plane and be told that the new flight control software is 100% AI-produced instead of being formally verified SPARK code...
 
English was good, as good as the German language, until they decided to destroy it by killing 90% of old English grammar. I don't know who the idiot was who decided to make English street language (the colonies' street language) the official language of the state.
I'm rather glad that I learned German when I was very small. Not sure I'd have managed much later.

It was Latin next, which was nicely regular for the most part. It was only much later that I learned that the classics had rather streamlined it, while originally it was much more complex, with extra cases and another mood (the optative?) that somehow survived in ancient Greek and even in vulgar dialects like Spanish and French (the conditional, when classical Latin only retained the subjunctive).

From what I've heard, Arabic went through a similar 'cleanup process' of grammatical streamlining, attempting to establish a regularity and symmetry which wasn't in the original: it quite simply irked the classical authors that their language wasn't as beautiful as the texts they were trying to write, so they tried to fix it.

I was later told how Basque and Georgian retained an incredible level of complexity in their grammar, where pretty near any aspect of subject and object with regard to function, gender, status, etc. might be expressed in modified word stems.

When I learned Chinese, the professor had one favorite dictum: Chinese is the simplest language in the world, because it has basically no grammar whatsoever! Nothing gets modified; there is strict subject-verb-object word order, four-hundred-something syllables, and a little more than a thousand combinations of syllables and tones to cover the 50% of the language that is one syllable. Nothing ever has three syllables or more in Chinese.

Just to reassure you: I never got to be fluent in Chinese, but from what I understand his message was basically right: Mandarin is the simplest language,

and

for some reason all languages tend to evolve towards that simpler pidgin form naturally.

And I've often wondered: how did the cave (wo)men come up with the most complex and contorted grammars, when everyone since naturally wants to erode away all that complexity?

I came up with some theories, but I'm not sure they are any good: so I won't post them here...

But I'm pretty sure someone really smart has long since figured this out, so pointers are welcome!
 
True, but engineers still learn how to do such calculations, doctors still learn how to diagnose a patient.
It's about depth of knowledge.

I am still of the generation that actually learned ASM and how to translate to C and compile by hand, but I didn't learn how to use a punch card, for instance.
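
For anyone who never had to do it, here's a toy illustration of that kind of exercise; the assembly is a hypothetical, unoptimized hand translation, purely for illustration.

// A trivial C function, plus one way a person might translate it to
// 32-bit x86 assembly by hand (cdecl calling convention: arguments on
// the stack, result returned in %eax).
int add(int a, int b)
{
    return a + b;
}

/* Hand translation (AT&T syntax):
 *
 * add:
 *     movl 4(%esp), %eax    # load the first argument from the stack
 *     addl 8(%esp), %eax    # add the second; result stays in %eax
 *     ret                   # caller reads the return value from %eax
 */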

AI will bring other paradigms we'll have to adjust to and, from how things are progressing, they'll be around how you interact with the AI itself. Language is the obvious initial one, but I don't rule out more exotic ways of interaction down the line. Back in uni, in the early 2000s, we already had some basic brain-to-PC models that worked somewhat OK for basic inputs. I can only imagine what we can do nowadays with AI-backed inference as an interface device.

So, circling back to my original point: as AI takes more of the primal/basic/fundamental things away, humans will just need a more superficial understanding of any topic the AI "can master", programming being the one discussed here. The accuracy and speed with which an AI can digest any data you throw at it will be the tipping point for how much you can "trust" it.

Regards.
 
I suppose.

The thing is that there's only so much you can remove and still maintain even superficial knowledge of a topic.

For instance, I graduated from uni in 2020, and we were still learning fundamentals like pointers and memory addresses, even though modern languages handle that stuff for you, simply because you need those fundamentals to be able to learn the other stuff.
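
Something like this toy C snippet is the kind of fundamental exercise I mean (purely illustrative):

// A toy example of the pointer/memory-address fundamentals in question:
// higher-level languages hide all of this, but knowing it is what makes
// their "magic" comprehensible.
#include <stdio.h>

int main(void)
{
    int value = 42;
    int *ptr = &value;           // ptr holds the memory address of value

    printf("value        = %d\n", value);
    printf("address      = %p\n", (void *)ptr);
    printf("dereferenced = %d\n", *ptr);    // read back through the pointer

    *ptr = 7;                    // writing through the pointer...
    printf("value is now = %d\n", value);   // ...changes the original variable
    return 0;
}

Trivial, but without it, what the higher-level languages are doing for you behind the scenes stays a black box.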

I imagine that even with AI, we would still learn to do at least the basics of what it does, both to aid in comprehension of more advanced topics and to do the work AI does in places and at times where it simply isn't available.

But of course, it's anyone's best guess as to how things will go in the future.
 