"ChatGPT=Google search"
https://www.engadget.com/google-chatgpt-ai-chatbot-search-170007802.html
I think search will increasingly leverage this technology.
"The author is nitpicking on what is a prototype. Even in its present form, the AI's guide is useful, as it forms a meaty framework that a human guide could flesh out with details, thereby shortening his workload."
Not only that, but also consider that it wasn't trained on any carefully-vetted material. If you made sure to feed it only forum posts marked as "best answer" and actual motherboard manuals and PC repair books, I'll bet it would give better advice than most of us here!
ChatGPT (and its inevitable successors) will only improve.
"IMO, step-by-step how-tos are not a high bar to clear. They don't require any creative thought or reasoning."
I think we're very likely to see it used in troubleshooting, diagnostic, and repair contexts. There has long been a field of AI known as Expert Systems, which seeks to fill this role. ChatGPT has demonstrated a lot of potential there.
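For anyone curious, the classic Expert Systems approach is basically hand-written if/then rules plus a small matching step. Here's a toy sketch in Python (the symptom names and advice strings are invented purely for illustration, not taken from any real system):

# Toy rule-based diagnostic in the spirit of classic expert systems.
# Each rule: if all of its conditions are observed, its advice applies.
RULES = [
    ({"no_power", "psu_fan_spins"}, "Reseat the 24-pin and 8-pin CPU power connectors."),
    ({"no_power"}, "Test with a known-good power supply."),
    ({"powers_on", "no_display"}, "Reseat the RAM and GPU; try one memory stick at a time."),
    ({"powers_on", "beep_codes"}, "Look up the beep pattern in the motherboard manual."),
]

def diagnose(observed):
    """Return the advice of every rule whose conditions are all observed."""
    return [advice for conditions, advice in RULES if conditions <= observed]

print(diagnose({"powers_on", "no_display"}))
# -> ['Reseat the RAM and GPU; try one memory stick at a time.']

The difference with something like ChatGPT is that nobody has to hand-write those rules; the trade-off is that you lose the guarantee of knowing exactly why it gave the advice it did.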
"When an algorithm contradicts itself by saying: 'France has not won the 2018 World Cup. The 2018 World Cup was won by France.' ...maybe it's time to admit ChatGPT lacks any intelligence whatsoever."
Maybe that just demonstrates its designers focused more on memory/recall than trying to ensure logical consistency of what it knows. Seems like a potentially fixable problem, to me.
"The bot doesn't know, or rather isn't aware, that it doesn't even have any butt to begin with."
To some degree, it does. The model isn't nearly big enough to simply store every single fact in its training data, so it builds higher-level structures which (ideally) provide representational efficiency and logical consistency. We just don't know how sophisticated its knowledge representations are. And, just like a human, if you drill incorrect information into it enough, it's likely to stick.
"But even children usually don't end up listing whatever as their accomplishment just because someone else claimed it to be an accomplishment."
It can take them a while to figure out what's what.
"Humans are different; we possess the ability to think critically beyond our 'programming'."
You misjudge what "thinking" really is. There are already AI techniques, like GANs, that replicate key aspects of thinking and contemplation.
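To be concrete about what a GAN actually does: a generator proposes outputs, a discriminator critiques them, and each trains against the other. A minimal sketch, assuming PyTorch is installed; the layer sizes and the toy 1-D target distribution N(4, 1.5) are arbitrary choices for illustration:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: noise -> candidate sample. Discriminator: sample -> probability it is "real".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # samples from the "real" distribution N(4, 1.5)
    fake = G(torch.randn(64, 8))             # generator's current proposals

    # Critic step: learn to tell real samples from generated ones.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: adjust proposals to get past the critic.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"generated mean: {G(torch.randn(1000, 8)).mean().item():.2f}  (target 4.0)")

Whether that propose-and-critique loop counts as "contemplation" is exactly what's being argued in this thread, but the mechanism itself isn't magic.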
"The problem is that tools like ChatGPT, and AI in general, are expected to be like humans, which they can't be (yet?)."
True. It's a bit like a sophisticated parrot. Worse, due to its black-box nature and complexity, it's hard to know when to trust it, or the exact extent of its capabilities and knowledge. This is probably one of the biggest dangers AIs pose.
Define "intelligence". If you try really hard, I think you'll probably conclude you're mystifying something that's not as magical as we like to believe.Just stop calling it AI. the name is meaningless. There is NO intelligence
"ChatGPT is completely unreliable. I asked it to name the best DAWs (digital audio workstations) for Windows, and ChatGPT came up with 'Logic Pro', which is only available for Mac OS and certainly not for Windows."
I'm actually amazed that it gave an answer that even sort of makes sense, given that it wasn't trained specifically to be an expert in that field.
"Define 'intelligence'. If you try really hard, I think you'll probably conclude you're mystifying something that's not as magical as we like to believe."
But that's what we're being led to believe.
"And lazy college kids are tripping over themselves to use this 'fancified' Google search to help them with assignments??"
I'm sure people said this when kids started using slide rules ...and then when they used calculators ...and then when they used graphing calculators ...and then when they could start using their laptops to take exams.
Yeah, "the future is so bright, I gotta wear shades. "
"But that's what we're being led to believe."
Who's saying it's magic? Not the designers or expert practitioners in the field.
Magic.
"Who's saying it's magic? Not the designers or expert practitioners in the field."
The hype, as you say.
Yeah, there's a lot of hype about it, because it represents a step-change relative to what came before. When you see a child start walking well enough to get around the house, you don't complain that it's not able to complete a marathon. Instead, you probably celebrate that you no longer have to haul the stroller with you, everywhere you go!
Some publishers think an AI can write articles, but results say otherwise.
I Asked ChatGPT How to Build a PC. It Said to Smush the CPU.
I've been a reader of Tom's since the early '90s and love it. Huge fan of all of the content.
However, I don't think this author read enough articles about ChatGPT before deciding to write an article about ChatGPT, and he sort of misses the point.
It is designed to be iterative. Sure, the first answer is high level, but you can follow that first question with another: "Give me more detail about inserting the CPU." Or "Add more about all of the parts I will need." Or "Is it important which PCIe slot to use?" The tech will either answer your follow-up or, if you desire, enhance the article with that content.
You could even say "rephrase from the perspective of a more experienced and empathetic person" or "explain as if you were Mr Spock". It's not perfect, but if your mind is not blown by the results, you are not paying attention.
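If anyone wants to try that iterative flow programmatically, here's a rough sketch using OpenAI's Python SDK. My assumptions: the openai package (v1.x) is installed, OPENAI_API_KEY is set in the environment, and the model name is just a placeholder; the point is only that each follow-up question is sent together with the whole conversation so far:

from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment
history = []        # the running conversation: every question and every answer

def ask(question):
    """Send the question with the full conversation so far, then keep the reply as context."""
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Write a step-by-step guide to building a PC."))
print(ask("Give me more detail about inserting the CPU."))
print(ask("Add more about all of the parts I will need."))
print(ask("Is it important which PCIe slot to use?"))

The model only "remembers" earlier answers because they are re-sent on every call, which is also why a single one-shot prompt tends to produce the thin, high-level guide the article got.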
"Define 'intelligence'. If you try really hard, I think you'll probably conclude you're mystifying something that's not as magical as we like to believe."
I believe you need to do far more research into the subject, and you'll come to the opposite conclusion. This is a huge thing with the big thinkers. It is not known how it works. So for you to say that is completely off base.
It's a very impressive proof-of-concept, but anyone who has really tested it out knows that it can't be trusted like a human can be (at least not at present).
...we're seeing real businesses use AIs to write content that real humans are relying on. So I think it's worth interrogating that notion.
I have been thinking a lot about self-driving cars and where those stand after years of hype.
"I believe you need to do far more research into the subject and you'll come to the opposite conclusion. This is a huge thing with the big thinkers. It is not known how it works. So for you to say that is completely off base."
Consciousness is not needed for intelligence - imagine an encyclopedia as big as a planet, created by an alien race, with all the answers to our questions. It would not be conscious, even if loaded into a computer. Creativity is also not required, although nice to have.
Without consciousness, there is no intelligence. Period.
Algorithms can't think or feel. They can't be conscious. Computers can't either, so there's no intelligence. I would love to see you prove me wrong, because then you can go up against all the panels of scientists who say you're wrong.
Prove me wrong and you'll win the Nobel Prize for sure.
https://www.wordnik.com/words/intelligence
Read the first line.
noun: The ability to acquire, understand, and use knowledge.
The ability to acquire, understand, and use knowledge. I assert that computers will never understand knowledge; therefore, there is no intelligence. They can be taught to use data, and maybe even have some sensors to acquire it, but they will never understand it. But data is not knowledge. The very definition of intelligence is violated.
Knowledge is defined as:
knowledge (nŏl′ĭj), noun
- The state or fact of knowing.
- Familiarity, awareness, or understanding gained through experience or study.
- The sum or range of what has been perceived, discovered, or learned.
Therefore, my conclusion is that there is no intelligence behind artificial intelligence whatsoever.
All AI can do is mimic. Maybe someday, when we have positronic brains like Data's in Star Trek, but not now.