News ChatGPT Will Command More Than 30,000 Nvidia GPUs: Report

Status
Not open for further replies.

bit_user

Titan
Ambassador
My point is that ChatGPT has revolutionized the coding world to make it so we do not have to hire coders to do menial tasks anymore,
I'd also love to know just what kind of coding tasks it's good at.

IMO, if you're using a good language and set of libraries, a lot of the repetitive grunt work is taken care of for you, and you can focus just on what's different and unique about whatever it is you're trying to do. I'd imagine I'd spend as much time or more trying to coax an AI into writing what I wanted and checking the output as I would just writing it myself. Heck, even telling another programmer what I want is more than enough work in itself.
 

Rogue Leader

It's a trap!
Moderator
I can see why someone who spent thousands of hours coding from scratch would hate it: jealousy of a tool that makes their job 1000% easier.

I don't spend thousands of hours coding, because I know what I am doing and how to get what I work with to do what I want efficiently and effectively. It doesn't make my job any easier when I have to go back and rewrite 3/4 of what it did because it's coded like crap. Better to do it right the first time.

Exactly which is why I hate when people say it will replace real tech engineers. It certainly will not.

But it is an incredible search tool when you need a specific answer, especially to something tech related. ChatGPT isn't a real threat to actual workers. It's a threat to search engines, wikis, and other digital tools that people use to look up information!

Except if you play around with it enough, it's very easy to manipulate it into giving you whatever answer you want it to give you. In that respect, what's the point of searching it if you can't trust what it's spitting back at you?

The thing about tech is that it has a remarkable tendency to improve. In the long run, it's usually a pretty foolish position to bet against tech.

I have no doubt that it is promising tech, for sure. But behaving as if it's ready for primetime right now is a fool's errand.
 
  • Like
Reactions: bit_user
"...Thankfully, gamers have nothing to be concerned about, as ChatGPT won't touch the best graphics cards for gaming but rather tap into Nvidia's compute accelerators, such as the A100."

This is not a complaint, just a sad fact. I think many gamers around the world, me included, are more worried about having food on the table and clothes to wear than getting any of the "best graphics cards".
Sadly, that's the cruel reality these days for many of us.
 

DavidLejdar

Respectable
Sep 11, 2022
286
179
1,860
... It's made me and my company lots of money man. Sure, you may not like it, but it's the future and it is a viable and extremely effective business model. ...
Such a future also includes the possibility that others are already offering lower prices. Which sure has its upsides, such as that not much expertise is needed for products that are becoming a dime a dozen. And it's nothing new as such: musicians who provided live music at the movies, for example, ended up without a job once movie audio was introduced, and so on.
 

randyh121

Prominent
Jan 3, 2023
257
53
770
a lot of the repetitive grunt work is taken care of for you and you can focus just on what's different and unique about whatever it is you're trying to do.
That's exactly what it does and it's very good at it.

I'd imagine I'd spend as much time or more trying to coax an AI into writing what I wanted and checking the output as I would just writing it myself.
Not at all. It writes 500 lines of code in a matter of seconds for us, and it only takes us a few minutes to review it and push it.
It is the future of software development and has been very successful at making money for us.
 

bit_user

Titan
Ambassador
Not at all. It writes 500 lines of code in a matter of seconds for us, and it only takes us a few minutes to review it and push it.
I'm imagining it churning out reams of copy-and-paste-style code.

Most programmers I've known who churn out high volumes of code don't write very good code. If you're doing consulting-type work, that's fine for you, as long as your stuff basically works and you get paid. Any bugs that need to be fixed, or refactors needed to add more features, you can just bill for those too. As long as you can just rewrite stuff, maybe it doesn't matter how clean, well-structured, or well-architected it is.
 

slate33

Honorable
Feb 4, 2017
4
3
10,515
As someone who actually codes, I say you're either lying or you have extremely low standards.

ChatGPT writes poor code. Not only that, half the time it doesn't even work. If you have people using it to write code and passing it off as a usable finished product, you are running a poor business. Anyone who does use ChatGPT to write their code, and actually considers it acceptable, should quit their job and go work packing boxes in a factory.

This, exactly this. For fun, I asked it to implement a few numerical calculations (e.g. double exp(double) and the like), as well as some simple LRU caches and other very basic algorithms. All of them were riddled with errors. They are also so simplistic that they don't even begin to consider real-world cases: large x, small x, infinity, or NaN, none of it is even considered. Lots of correctness and performance issues, as well as just not really upholding the required contract for these functions.
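To illustrate the kind of edge-case handling being described, here is a minimal Python sketch (the name `safe_exp` is made up for this example) of the contract a real exp() implementation has to honor for NaN, infinity, and large finite inputs, assuming C-library-style saturation semantics:

```python
import math

def safe_exp(x: float) -> float:
    """exp(x) with the IEEE-754-style edge cases spelled out explicitly."""
    if math.isnan(x):
        return math.nan                     # NaN in, NaN out
    if math.isinf(x):
        return math.inf if x > 0 else 0.0   # exp(+inf) = inf, exp(-inf) = 0
    try:
        return math.exp(x)
    except OverflowError:
        return math.inf                     # large finite x: saturate, like C's exp()
```

Python's math.exp raises OverflowError for large inputs rather than returning infinity, so even this trivial wrapper has to know about the edge cases; a naive one-liner silently gets them wrong, which is exactly the kind of gap being complained about.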

ChatGPT is good at writing code that looks like it's actual code. But once you start reading it, you realize it's not actually working code, just a super simplistic set of lines that looks superficially like it could work if you don't read it or test it.

The funny thing is that this is something an AI should actually be at least slightly better at. It has compilers, so it can easily check that the code compiles. It also shouldn't be too hard to make a few well-known test cases and see if the code handles them correctly. An LRU cache, for example, could easily have a bunch of random test cases generated, with anything that doesn't pass the tests rejected.
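As a sketch of that testing idea (all class and function names here are invented for illustration), a randomized differential test compares a candidate LRU cache against a slow but obviously correct reference model, rejecting any candidate that ever diverges:

```python
import random
from collections import OrderedDict

class LRUCache:
    """Candidate under test: OrderedDict-backed LRU cache."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()
    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)            # mark as most recently used
        return self.data[key]
    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)     # evict least recently used

class NaiveLRU:
    """Reference model: slow but obviously correct (list tracks recency)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []                       # (key, value), most recent last
    def get(self, key):
        for i, (k, v) in enumerate(self.items):
            if k == key:
                self.items.append(self.items.pop(i))
                return v
        return None
    def put(self, key, value):
        self.items = [(k, v) for k, v in self.items if k != key]
        self.items.append((key, value))
        if len(self.items) > self.capacity:
            self.items.pop(0)

def differential_test(seed=0, ops=2000, capacity=4, keyspace=8):
    """Drive both caches with the same random op stream; flag any divergence."""
    rng = random.Random(seed)
    cand, ref = LRUCache(capacity), NaiveLRU(capacity)
    for _ in range(ops):
        key = rng.randrange(keyspace)
        if rng.random() < 0.5:
            val = rng.randrange(100)
            cand.put(key, val)
            ref.put(key, val)
        else:
            assert cand.get(key) == ref.get(key), f"divergence on key {key}"
    return True
```

A small key space relative to the capacity keeps evictions frequent, so the recency-ordering part of the contract actually gets exercised rather than just the happy path.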

AI will actually get there eventually. We are starting to have the sorts of tools that can provide the baseline capabilities on which AI might one day be built. But viable general-purpose AI isn't just a larger large language model, so there's quite a way yet to go. ChatGPT is basically a calculator for English and other language-based fields, and it's about as effective as calculators are in math: good at reducing drudge work, but not really solving the real problems that people are trying to solve with real math much of the time.
 
  • Like
Reactions: Rogue Leader

slate33

Honorable
Feb 4, 2017
4
3
10,515
In the early days of automobiles, I'm sure people said similar things about them: noisy, unreliable, slow, dependent on expensive fuel you can't easily get, etc.

The thing about tech is that it has a remarkable tendency to improve. In the long run, it's usually a pretty foolish position to bet against tech.

I don't think people disagree with that. However, there's a tendency in the tech news (and among some fanboy types) to think that they've almost finished climbing Mount Everest because they've finished lacing up their shoes. :) There is a long, long way to go. The issue with some of these tools isn't that they don't work, it's that they aren't doing what people claim they are doing, and this is likely to cause mishaps down the line.

The tools will improve, but a lot of people (myself very much included) think that the current developments are almost orthogonal to questions of general-purpose AI. I don't think strong AI is just a larger large language model. I think a large language model is a part of it, in the same way a wheel is a part of a car, but inventing the wheel doesn't mean you've invented the car. We are at the early chariot and wagon stage. People are saying it's "almost a car" because it has 4 wheels and moves heavy loads, but a car has 100,000 individual parts and a wagon has like 10. There is a huge difference between the two, and that difference becomes obvious when you start asking a wagon to turn on the air conditioning. :)
 

bit_user

Titan
Ambassador
For fun, I asked it to implement a few numerical calculations (i.e. double exp(double) and the like), as well as some simple LRU caches and other very basic algorithms.
I wouldn't expect it to do particularly well at specialized numerical algorithms. Even so, there are ways to improve the quality of the output you get from it, and I have to wonder how much difference it'd have made if you guided it through the rough thought process a human would follow, in order to do such work. First, asking it what are popular approaches to implementing them. Then, asking about the considerations when selecting one approach vs. another. Next, asking about the pitfalls and performance implications. At this point, it might be ready to produce some pseudocode. Finally, ask for C code or whatever. I'm not saying it'd be fit for use, but I'd expect to see a marked improvement in its output.

ChatGPT is good at writing code that looks like it's actual code. But once you start reading it, you realize it's not actually working code,
I wonder if it works better for things like slapping web GUIs on server apps, which is the sort of thing I'd guess our new friend @randyh121 is doing. Essentially, high-volume low-complexity code. Also, I'd suspect it helps improve output if you provide a more detailed functional specification of what you need.

The funny thing is that this is something that an AI should actually be at least slightly better at. It has compilers, so it can easily check that the code compiles.
Yes, but also not in the way you're probably thinking. GAN-style training is a common technique in generative AI. You could literally include a compiler, and the number of warnings and errors it produces, in the loss function used for training. I highly doubt they did this for ChatGPT, since I believe it wasn't designed specifically to write code, but it's absolutely what you'd want if that were your goal.

Furthermore, you could run the programs and measure their correctness, how fast they ran, whether they segfaulted, how much memory they used, and how many memory errors (e.g. leaks, bounds overruns, etc.) occurred. Include all of those metrics in the loss function, also.
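As a toy sketch of that idea (the function name `code_quality_penalty` is made up, and Python's built-in compile() and exec() stand in for a real compiler and runtime; this is not how ChatGPT was actually trained), compile-and-run results can be collapsed into a single scalar that a training loss could consume:

```python
def code_quality_penalty(source: str, test_cases) -> float:
    """Score a generated code snippet: 0.0 = compiles and passes every test,
    higher = worse. test_cases is a list of (function_name, args, expected)."""
    try:
        compiled = compile(source, "<generated>", "exec")
    except SyntaxError:
        return 1.0                        # doesn't even compile: maximum penalty
    namespace = {}
    try:
        exec(compiled, namespace)         # define the snippet's functions
    except Exception:
        return 0.9                        # compiles, but crashes on load
    failures = 0
    for fn_name, args, expected in test_cases:
        try:
            if namespace[fn_name](*args) != expected:
                failures += 1
        except Exception:
            failures += 1                 # wrong answers and crashes both count
    return 0.5 * failures / max(len(test_cases), 1)
```

In a real setup these runs would be sandboxed, and the other metrics mentioned above (warning counts, memory errors, timing, segfaults) would be folded in as additional penalty terms.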
 
Last edited:

bit_user

Titan
Ambassador
I don't think strong AI is just a larger large language model.
Yes, of course. However, a lot of people are very surprised by what a "mere" language model is capable of doing.

Also, people deriding its output often don't realize that giving it a single prompt is barely scratching the surface of its capabilities. It's not designed to work solely from a single prompt.
 

USAFRet

Titan
Moderator
Yes, of course. However, a lot of people are very surprised by what a "mere" language model is capable of doing.

Also, people deriding its output often don't realize that giving it a single prompt is barely scratching the surface of its capabilities. It's not designed to work solely from a single prompt.
One of the problems that I've seen so far is that the human must ask the right question.
These things are pretty good at answering what you ask.
But if you ask the wrong question, it will happily give you a correct answer to that question.
Which may not be what you actually need.

They are not yet good at asking questions to get to the meat of the problem.

From the TrueCases log:

User comes here, incensed.
Windows and/or Microsoft randomly deleted a Very Important Word doc he had been working on for several weeks.
A bit of Q&A, and it turns out the system rebooted overnight following a Windows Update, and his Word doc is now gone.

So, his proposed solution:
"To prevent this from ever happening again, how do I completely and permanently turn off Windows Updates?"

ChatGPT would have happily given him the steps to maybe turn off Windows 10 updates.

Wrong answer....

Further Q&A from the humans led to the actual cause and problem:
  1. He had never once Saved the Word doc. It was simply open on the desktop. Not even a "file" yet.
  2. Autosave in MS Word was turned OFF.
  3. No backup of the system, ever.
  4. He had ignored the orange icon on the Taskbar, telling him that the system needs to be rebooted. This icon appears for several days after the Win 10 Update finishes. Eventually, it gives up and does the reboot anyway.

The problem was not a malicious or faulty Windows Update, but rather the user's poor computing practices.

So, the real solution is not to turn off Windows Updates, but rather to be a little more careful with your data. Like hitting the Save button once in a while.
A single click would have saved his Very Important file.

Like ignoring the low oil pressure light for weeks, and being pissed at Toyota when your engine blows up.


We see this type of thing often.
Standard X/Y problem.
 

USAFRet

Titan
Moderator
How does someone end up with autosave disabled? Isn't it on, by default?

If someone disabled it and then got upset they lost their doc, that's on them.
Exactly.
It IS on by default.

But the 'doc' was gone (it never existed), and this particular user wanted to blame everyone else.
And then look for a solution to fix his self-induced problem.
 
  • Like
Reactions: bit_user

Ralston18

Titan
Moderator
Plus there are users who call in fake problems and then attempt to blame the support tech for the loss of user's "work".

We had three levels of backups: user, office, and department. Daily and weekly+ backups.

Very easy to prove that no such "work" or "doc" existed....

Still such users attempted to blame the "contractor".

One very important IT skill is CYA......
 
  • Like
Reactions: bit_user

USAFRet

Titan
Moderator
Plus there are users who call in fake problems and then attempt to blame the support tech for the loss of user's "work".

We had three levels of backups: user, office, and department. Daily and weekly+ backups.

Very easy to prove that no such "work" or "doc" existed....

Still such users attempted to blame the "contractor".

One very important IT skill is CYA......
Our main platform records ALL created/deleted/modified items.

User: "Help!! My file disappeared for no reason!"
Support office: "Dude....you personally deleted this last Tuesday at 2:43PM. It is now restored"
(if needed, here is a screencap of your deletion :eek:)
 

bit_user

Titan
Ambassador
Plus there are users who call in fake problems and then attempt to blame the support tech for the loss of user's "work".

We had three levels of backups: user, office, department. Daily and weekly+, backups.

Very easy to prove that no such "work" or "doc" existed....

Still such users attempted to blame the "contractor".

One very important IT skill is CYA......
Many years ago, I heard about an incident at my job, where an engineering middle manager blamed the IT department for something, and a low-level IT guy who was on the call refuted the allegation. Even though he was correct, she still somehow managed to get him fired for making her look bad in front of her boss. Corporate politics...
 