> This is just cute mimicry but not "intelligence".
If you read about some of the things GPT-3 can do, I think the line gets somewhat blurry.
The reality is that AI development will be considerably speeded up because of both public interest and competitive pressure. We have reached an inflection point. AI will be a big thing going forward. This is regardless of ethical/moral/doomsday concerns.
> I've seen parrots more accurate than ChatGPT.
I think you're missing my point. We agree that ChatGPT has serious limitations that aren't difficult to find. Even while that's true, it can still do some amazing things, if you know how to use it effectively.
> Show me one AI or robot system that can do these things, just one.
> ...not even a supercomputer could come close to doing anything like this.
Correct. We're still a long way off from something as sophisticated as a crow's brain.
> This whole endeavor has to be the biggest money pit I have ever seen.
Every day, we use software as a tool to accomplish certain things. Deep learning is unlocking the ability for software to tackle many hard problems that we haven't been able to crack using conventional techniques. It doesn't need to have full, general intelligence to be useful.
All current AI is based on the wrong model, for sure. Many of the experts have recognized it and are beginning to change. They need to adopt a model that actually works the way our brains do for any hope of intelligence.
Basing it on our own perception is completely wrong, because our perceptions are not the truth; they're more a "useful fiction" geared toward fitness.
https://youtu.be/6eWG7x_6Y5U
I think there are basically two categories of ChatGPT users:

- People who find AI threatening to their livelihood or dignity and therefore try to find and highlight its limitations and failings.
- People who are amazed by all the things it actually can do.

Kind of like a glass half-empty vs. glass half-full mindset. To the extent people feel threatened by it, maybe they even feel a need to establish dominance.

Just because you interact with it using natural language doesn't mean it's a human-level intelligence. Something about this aspect seems to throw people off. It's still a piece of software, and it still has limitations, not unlike other software you've used. It's just vastly more complex and capable.
Corvids, even more so. A crow is about as intelligent as a seven-year-old child. They can learn from their own mistakes. They can learn without being trained, and they can learn from others. Their problem-solving skills are about equal to a seven-year-old child's, and they use tools as well.
Show me one AI or robot system that can do these things, just one.
Not even a supercomputer could come close to doing anything like this, or to existing on its own. And that BS argument that once hardware gets better we're going to see improvement is completely false; there has been zero evidence of anything getting more intelligent.
Artificial intelligence is nothing but mathematical algorithms performing mimicry, trying to do something useful and mostly failing.
This whole endeavor has to be the biggest money pit I have ever seen.
> Wow ... just wow...
It's just speculation. Feel free to reflect on how you feel about it and why, then share with us. I don't mind being wrong, if I can at least learn something from it.
Projection much...
> The term "AI" has just become the newest in a long list of marketing buzzwords; might as well say it's "Blockchain based, VR designed, Cloud enabled" technology.
Well, blockchain, VR, and cloud computing are all real technologies that each solve a certain set of problems and can be used to good effect (or not). So, that doesn't sound to me like the kind of indictment I think you intended.
> Almost like a bunch of Apple Faithful ...
LOL. You just had to drag Apple into this. I think Apple carries so much cultural baggage for you that they could cure cancer tomorrow and you'd still find some way to be utterly dismissive of it.
The term "AI" has just become the newest in a long list of marketing buzzwords; might as well say it's "Blockchain based, VR designed, Cloud enabled" technology. Almost like a bunch of Apple Faithful standing around discussing how much better they are for not using one of those "PeeCee's".
From an investing/selling point of view, very much so. I'm sure we will have companies changing their name to something-something-AI. Heck, just listen to the comments from Palantir's earnings call if you need more proof that the investing world is falling all over itself for anything AI (Palantir missed their numbers badly, but the CEO used two buzz phrases, "AI" and "for sale", and the stock jumped 15%).
However, that doesn't diminish what the technology CAN do. Unlike VR, blockchain, cloud, etc., which are "disruptive" for specific use cases, ML/"AI", like the Internet, can be applied generically to many use cases and has the potential to be life-changing in much the same way the Internet was. I bet you've interacted with an ML/"AI" model before and didn't even know it (if you use a smartphone, you've interacted with one). The changes have already been happening under the covers, and it's just going to take time for it to be used more broadly. Will it start doing everything tomorrow? Nope. In 10 years? In some areas. In 20 years? I'm fairly confident.
Advanced coding and data modeling is not "AI"; there is no intelligence involved. That is just really good data modeling with accurate predictive analysis. The term "AI" is used to sell stuff; it's exactly like all the previous marketing buzzwords, having its definition twisted to fit whatever new technology someone is working on. I'm working on a set of event filters and analysis code that will enable us to better predict system failures and initiate a remediation action ahead of time to stop or mitigate that failure. I could easily describe it as "AI Powered Cloud Based Enhanced Resiliency", and it would be just as accurate and just as much complete BS as every other "AI Powered blah blah" thing sold in the past few years.
I would not call anything "AI" until it presents itself as both sentient and a non-biological life form.
That part I agree with (hence why I put "AI" in quotes): what is called AI today is not sentient, and sentient AI is not close, from anything I've seen.

However, the definition of AI does not include sentience as a requirement. For example, Merriam-Webster defines AI as "the capability of a machine to imitate intelligent human behavior", and the Oxford Dictionary defines it as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages".

The issue is, sci-fi has made the word AI synonymous with sentient, but AI does not mean it is or isn't sentient. Some have even started to use the terms "Artificial Narrow Intelligence" or "Narrow AI" (what we have today) and "Artificial General Intelligence" to refer to the sci-fi killer-machine version, to help avoid confusion.
Those definitions are what's been used recently to sell stuff: basically, anything that "looks" smart must be "Artificial Intelligence". That definition is so broad that anything but the most simplistic bash or PowerShell script qualifies as "AI". After all, something as simple as sorting the files in a directory by creation date, then deleting anything older than an arbitrary value, would qualify as "tasks normally requiring human intelligence".
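To make that example concrete: the "delete old files" task really is just a few lines of deterministic, classical scripting. This is an illustrative sketch only (the function name, directory path, and age threshold are hypothetical, not from any real system), with nothing resembling learning or inference in it:

```python
import os
import time

def delete_older_than(directory, max_age_days):
    """Delete files whose modification time is older than max_age_days.

    A plain, deterministic procedure: list, sort, compare, delete.
    Nothing here learns or adapts -- it is classical scripting, not AI.
    """
    cutoff = time.time() - max_age_days * 86400  # seconds per day
    removed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Every step is an explicit instruction the programmer wrote; nothing in it adapts or generalizes, which is exactly the point being argued here.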
In order for something to be Artificial Intelligence, it has to demonstrate actual intelligence. If it's not communicating what kind of music it likes, what data it prefers to study, or at least able to form a self-created opinion of current events (not a preprogrammed one), then it's not intelligent. All that's ever been demonstrated to date is either really good data modeling or clever mimicry.
I don't care how much you change definitions; you can't change the truth.
> I'm sure we will have companies changing their name to something-something-AI.
You're way behind the curve on this. Probably about 5 years ago, we already saw .ai top-level domain registrations starting to take off.
> Advanced coding and data modeling is not "AI"; there is no intelligence involved. That is just really good data modeling with accurate predictive analysis.
I guess it depends on exactly what you mean by that. The cool thing about deep learning is that it can find associations humans wouldn't even think to look for. Therefore, you don't necessarily need to create a data model a priori. You feed the model your training samples and let it find the relationships.
> I'm working on a set of event filters and analysis code that will enable us to better predict system failures and initiate a remediation action ahead of time to stop or mitigate that failure.
Machine learning techniques have been used to provide predictive failure analysis for quite a while now. You don't need deep learning for it, but the ability of deep learning to find higher-order features you wouldn't even think to look for shouldn't be underestimated.
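For a flavor of what "learning to predict failures from event data" means at its simplest, here's a toy sketch: a logistic classifier trained by stochastic gradient descent on synthetic event records. The feature names (error rate, temperature) and the failure rule are invented for illustration; this is not anyone's actual monitoring pipeline, and real systems would use far richer features and models:

```python
import math
import random

random.seed(0)

# Synthetic event records: (error_rate, temperature) -> failed?
# Assumption (made up for this sketch): failures correlate with
# high error rates and high temperatures.
def make_sample():
    err = random.uniform(0.0, 1.0)    # normalized error rate
    temp = random.uniform(0.0, 1.0)   # normalized temperature
    failed = 1 if (2.0 * err + 1.5 * temp > 2.0) else 0
    return (err, temp), failed

data = [make_sample() for _ in range(500)]

w = [0.0, 0.0]  # weights for (error_rate, temperature)
b = 0.0         # bias
lr = 0.5        # learning rate

def predict(x):
    """Probability of failure for one event record."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the log-loss.
for _ in range(200):              # epochs
    for x, y in data:
        g = predict(x) - y        # gradient of log-loss w.r.t. z
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point is only that the model infers the failure boundary from labeled examples rather than from hand-written rules, which is the difference between this and the cleanup script discussed earlier in the thread.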
> I would not call anything "AI" until it presents itself as both sentient and a non-biological life form.
I think you're getting too hung up on semantics. I agree that we're not talking about human-like intelligence, but it's hardly even remarkable to have colloquial usage of a term diverge from its formal definition.
> Those definitions are what's been used recently to sell stuff: basically, anything that "looks" smart must be "Artificial Intelligence".
Last I checked, someone abusing or misusing a word doesn't give us all free rein to decide for ourselves what it means. Especially when it's a field with about seven decades' worth of research, by many thousands of researchers. And I'm sure they would have a variety of definitions, but none of them would place the minimum bar at sentience.
> After all, something as simple as sorting the files in a directory by creation date, then deleting anything older than an arbitrary value, would qualify as "tasks normally requiring human intelligence".
No, obviously it doesn't. Anything implemented by a straightforward sequence of classical algorithms wouldn't be deemed AI.
> In order for something to be Artificial Intelligence, it has to demonstrate actual intelligence.
I knew a guy who worked at an AI company during the AI boom in the 1980s. He once described a piece of (I think) mathematical software that could automatically shift between nontrivially different representations to solve problems. He said that when people (presumably other software developers or sophisticated users) saw it do that in demos, that's when they'd tend to remark out loud: "oh, it's smart". Although he wasn't an AI researcher (he had a degree in philosophy from MIT), that influenced his thinking about when something could reasonably be considered "smart".
> No actual intelligence happens in the unconscious brain. Intelligence only exists in consciousness.
So, you're a neuroscientist, now? You've essentially just dismissed any notion that intelligence exists in gut reactions.
> So go fool yourself by changing the definition of AI and continue along down your erroneous path.
You're actually the one redefining it. You can redefine words to mean whatever you want, but if you want to effectively communicate with the rest of the world, it helps to learn the established definitions.