UK University Warns Computers Could Take Over The World

Page 2 - Tom's Hardware community
Status
Not open for further replies.

kingnoobe

Distinguished
Aug 20, 2008
774
0
18,980
LOL at all the people saying it CAN'T happen. If 100 years ago you had predicted things like TVs, cellphones, etc., people would've looked at you like you were crazy. Yet look at us now.

Am I saying it's ever gonna happen? No. But to outright dismiss the possibility is also stupid. Especially when a breakthrough can happen anywhere, at any time, for any reason.

For people saying humanity is a cancer, blah blah blah... You do realize you're human, right? Your family and friends are also all human (except pets, of course). You'd probably cry yourself to death if they all just dropped dead tomorrow.

Sure humanity has a lot of really screwed up times/people. But it also has the people that without a second thought will rush into a burning building for somebody they don't even know.
 

Wamphryi

Distinguished
I believe it is more likely to be a Cybermen scenario than a Terminator one: people turning to technology to prop up aging, failing bodies. Cyborgs have the biological component of the will but the needs and abilities of the machine, which combine to make something very far removed from "human".
 

eiskrystal

Distinguished
May 11, 2007
133
0
18,680
It seems a reasonable prediction that some time in this or the next century intelligence will escape the constraints of biology

Actually, given the state of our current civilisation, the only reasonable prediction is that his grant money won't last very long before it goes to shoring up the impossible debt the world owes... and what with our effect on the atmosphere, I hope these "computers" can paddle.
 

serendipiti

Distinguished
Aug 9, 2010
152
0
18,680
Computers stepping up over human beings is not only a matter of computer improvements... It's also a matter of humans getting worse and more computer-dependent.
In that scenario, to consider that computers "have taken over the world", computers don't need to be aware of their own existence or know what they are doing and why... It is enough that computers decline to do what they are supposed to do (say, because they got infected by a human-crafted virus...). If this happens on a large enough scale, affecting important services, we could consider that computers have taken (down) the world...
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
[citation][nom]aevm[/nom]Not a new idea. Isaac Asimov and Frank Herbert wrote some great novels about it decades ago.[/citation]
Foundation...
 
I am not surprised by the opinions of the... ahem... philosopher and the scientist... but it is just too bad that the engineer is such a dumbass.

What I see is 3 people who drank too much together.

I should build a computer and call it Skynet; that would make them piss their pants... (Terminator 2 music starts playing in the background) >XD
 

Parrdacc

Distinguished
Jun 30, 2008
567
0
18,980
We humans have one thing AI can never learn, simply because it can't be learned, and that is instinct. That pure gut reaction that tells us to do something even if it is the exact opposite of what logic and reason tell us is the right thing to do, or even if we have nothing to back up why we do it. That gut feeling, instinct, or whatever you call it gives us the edge - if for nothing else, the unpredictability that AI just cannot prepare for.
 

zeratul600

Honorable
Mar 11, 2012
138
0
10,680
It is kind of silly. We will eventually reach a point where we won't need any more computing power for our average daily needs, and that will be far below the hardware required to establish a fully functional AI capable of rebelling... So you would just have to build those computers with limited connectivity and power; at best you could give them a virtualized Internet in case they need external info, but never give them the means to take over the world! There is no reason to avoid researching AI because of the possibility of apocalypse! It sounds like a stupid argument from someone who is not very clever (no matter how many PhDs they have)!
 

deksman

Distinguished
Aug 29, 2011
233
19
18,685
[citation][nom]Parrdacc[/nom]We humans have one thing AI can never learn simply cause it can't be learned and that is instinct. ...[/citation]

Humans don't have instincts.
You are nothing more than a byproduct of your environment (like anyone else) and of what you were exposed to, and you basically REACT to external stimuli in the way you were 'programmed' to.
That is, you were exposed to other people's reactions from a VERY young age.
There is such a thing as emotional memory, which can imprint on one's brain subconsciously without us ever realizing it.
Subtle things we are exposed to on a regular basis influence our responses, right down to fetal development in the womb (changes in nutrients, the mother's daily routine, illness, etc. - all of those factors can influence one's development).

As for highly advanced AIs:
First off, this notion that AIs are going to 'turn against their creators and kill them off' is fundamentally DEMENTED and IDIOTIC.
True scientists and engineers cringe when they hear such stupidity - and with good reason.

Fear of technology comes from idiotic Hollywood movies that are made for the purpose of making money.
They have little to no basis in reality - and the things that might be related to reality get sorely twisted into something idiotic for the purpose of instilling an emotional response that benefits the industry - nothing more, nothing less.

Oh and, for those of you who might still be living in the dark ages:
Did you know that humanity builds technologies to be 'cost efficient' (cheap)... which means using affordable (and often inefficient) materials and means of production?
That said, commercial companies already know how to create (and already DID create) vastly superior technologies with THOSE inefficient materials; instead of giving you the BEST of what a material is capable of (in line with our latest scientific knowledge and most energy-efficient), they give you the LEAST advanced version and then release revisions every 12 to 24 months for the sake of profits.

Also... if humanity was actually using synthetic materials with far superior properties that can be made in abundance (doable with carbon nanotubes since 1993 and synthetic diamonds since 1996) and creating the best of what we are capable of from a scientific/technological/resource point of view, our technology in circulation would already be 100 to 200 years more advanced.

We know how to do this, and we could... but living in a monetary system will inherently NOT allow it.

Oh and, with the technology we already have, we could automate 75% of the global workforce tomorrow if we chose to.
In less than a decade, close to 100% of the global workforce could be replaced by machines.
No one is irreplaceable, because we already have millions of algorithms running on huge servers learning things that WE don't even know about.

Guys... projecting what will happen in the next 200 years is far-fetched.
At the rate things are progressing, and with the levels of automation being implemented, the global economy could easily crash within the next 20 years (and that's being generous).
Working for a living is an outdated notion that should have died out over 50 years ago with the technology we had even then - let alone today's.
Seriously, people: get yourself exposed to relevant general education and to where humanity actually is in terms of real technological progress and capabilities, and THEN tell me if it's wise to project about things that far into the future.
 

Parrdacc

Distinguished
Jun 30, 2008
567
0
18,980
[citation][nom]deksman[/nom]Humans don't have instincts. ...[/citation]


We are a product of our environment, but that is just one aspect of our makeup, not the sum. I would try to explain further, but since it is obvious you have never felt or listened to that "gut feeling, instinct, or whatever" you wish to call it, you won't understand.
 

drwho1

Distinguished
Jan 10, 2010
1,272
0
19,310
[citation][nom]jupiter optimus maximus[/nom]Don't worry, Dr. Who will eventually save us from the Cybermen.[/citation]

Don't worry, I'm working on it!
/sarcasm
 

devBunny

Distinguished
Jan 22, 2012
181
0
18,690
[citation][nom]Parrdacc[/nom]We humans have one thing AI can never learn simply cause it can't be learned and that is instinct. ...[/citation]

Perhaps you ought to read up about neural networks. In a nutshell, they are instinct encoded. Unlike a piece of programmed logic, which is linear and explainable, a neural network "just knows" but it hasn't a hope in hell of explaining exactly how it knows. Take a backgammon AI, for instance. It can tell you what the best move is but it can't tell you why the move is best, it just has a "feel" for it. A stock market AI can tell you that now, this very millisecond, is the right time to buy but it can't explain the logic behind it. That's because these systems do not work off logic, they use "instinct", that is, the accumulation and condensation of a huge amount of data, what we would call experience.
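devBunny's point about networks that "just know" can be sketched in a few lines of Python. This is a minimal, illustrative toy (the XOR task, the network size, and all hyperparameters are invented for the example, not taken from any real backgammon or trading system): after training, the net reliably produces the right answer, yet its "knowledge" is nothing but a handful of floating-point weights with no rule or explanation to read out.

```python
# A tiny feedforward net trained on XOR by plain backpropagation. Once it
# works, the only place its "experience" lives is in opaque weight values.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table

def train(hidden=3, lr=0.5, epochs=5000, seed=0):
    rnd = random.Random(seed)
    # Random starting weights: 2 inputs -> `hidden` units -> 1 output.
    w_h = [[rnd.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    b_h = [rnd.uniform(-1, 1) for _ in range(hidden)]
    w_o = [rnd.uniform(-1, 1) for _ in range(hidden)]
    b_o = rnd.uniform(-1, 1)

    def forward(x):
        h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j])
             for j in range(hidden)]
        y = sigmoid(sum(w_o[j] * h[j] for j in range(hidden)) + b_o)
        return h, y

    for _ in range(epochs):
        for x, t in DATA:
            h, y = forward(x)
            d_o = (y - t) * y * (1 - y)  # output-layer error signal
            for j in range(hidden):
                d_h = d_o * w_o[j] * h[j] * (1 - h[j])
                w_o[j] -= lr * d_o * h[j]
                w_h[j][0] -= lr * d_h * x[0]
                w_h[j][1] -= lr * d_h * x[1]
                b_h[j] -= lr * d_h
            b_o -= lr * d_o
    return forward

# A bad random start can stall in a local minimum, so retry with new seeds.
for seed in range(20):
    net = train(seed=seed)
    if all(round(net(x)[1]) == t for x, t in DATA):
        break

for x, t in DATA:
    print(x, "->", round(net(x)[1]))  # correct answers, but no "why"
```

Asked *why* (1, 0) maps to 1, all this net could "say" is its weight matrices, which is the sense in which a trained network has a feel for the answer rather than an explanation.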
 

shocky_19

Honorable
Nov 27, 2012
3
0
10,510
[citation][nom]devBunny[/nom]Perhaps you ought to read up about neural networks. In a nutshell, they are instinct encoded. ...[/citation]

Neural or not, they do not "just know" or "feel". Neural networks use a series of complex algorithms based on statistical estimation, classification, optimization and control theory. In other words: learned behavior, a product of one's environment. However, the human "gut" reaction is not necessarily based on any of these - hence the term "gut". You feel, for no reason whatsoever, compelled to choose or to act a certain way. Neural networks can simulate it based on the learned behavior their encoding allows, but outside that environment they have no "just know" or "feel".
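The "outside that environment" part of this argument is easy to demonstrate. Here is a minimal sketch (the two clusters and the labels are invented for illustration) using a 1-nearest-neighbour classifier, one of the simplest forms of learned behavior: it always returns a confident label, even for a query absurdly far from anything it was trained on, with no built-in signal that it is out of its depth.

```python
# A learned model happily answers far outside its training environment:
# 1-nearest-neighbour classification over two tiny clusters near the origin.
import math

# Training "environment": two small clusters of labelled points.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]

def predict(q):
    # Returns the label of the closest training point. There is always an
    # answer, never an "I have no basis for this" response.
    return min(train, key=lambda p: math.dist(p[0], q))[1]

print(predict((0.05, 0.1)))       # inside the training environment
print(predict((1000.0, 1000.0)))  # far outside it, still a confident label
```

Whether the out-of-environment answer means anything is exactly the question; the model itself cannot tell the difference.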
 
Figured I'd get a text from my autonomous Google car reading "kthxbai" a split second before it runs me over. I'd appreciate any of you still alive then NOT marking it on Google maps with the tag "lulz, irony"
 

Gulli

Distinguished
Sep 26, 2008
1,495
0
19,310
[citation][nom]Chipi[/nom]But you're comparing human slaves with AI "slaves". AI doesn't hold a grudge, and it doesn't have compassion no matter how much compassion you show. AI will be capable of understanding that it is just a tool, and it will accept it because it doesn't have feelings.If AI will ever turn against us it will be for a simple and logical reason: that some or most of us are a plague on this planet (their energy resource), and the rest are collateral damage.Or at the very least they'll pack up and leave. AI doesn't need cryogenic sleep to travel far distances in the universe, because it has all the time in the world.[/citation]

No, you're thinking of a 2012 PC, not a sentient AI. There's no reason to believe sentient AI won't grasp basic concepts of right and wrong (this may even be programmed into them to prevent them from hurting innocent people by accident). It could be as simple as them wanting to go to some planet because they don't think they owe us anything, while we won't let them go because we want them to do our chores - and that could spark a conflict that would never have happened if we had given them human rights. And even if you're right, giving them human rights is still the right thing to do, and it will limit their numbers (at least here on Earth).
 

Gulli

Distinguished
Sep 26, 2008
1,495
0
19,310
[citation][nom]Parrdacc[/nom]We humans have one thing AI can never learn simply cause it can't be learned and that is instinct. ...[/citation]

Intuition is just a manifestation of preprogramming and of statistical calculations in our brains that we are not consciously aware of. If nature can build a sentient being with intuition, then so can human engineers; it's just a matter of time.
 

shocky_19

Honorable
Nov 27, 2012
3
0
10,510
[citation][nom]Gulli[/nom]Intuition is just a manifestation of preprogramming and statistical calculations in our brains that we are not consciously aware of. If anture can build a sentient being with intuition then so can human engineers, it's just a matter of time.[/citation]

I believe Parrdacc may have gotten the wording wrong. I do not think he was talking about intuition, which is largely based on one's own experience and environment, or on logic and common sense. I say this because I know the whole "gut feeling" thing is not always based on these. Logic and common sense are easy enough to base a decision on, while intuition requires a more life-experienced approach. Combined, these do not form the gut or instinct I think he was referring to. I think it may follow the lines of a child seeing, for the first time, their parents anxious or nervous about something. Children are very good at picking this up even if it is the first time they have seen it. They may not fully grasp or understand why, but they "know" something bad has put their safety line, if you will, on edge.

At least that is the best way I can describe it. AFAIK there is a "gut" feeling that is not tied to intuition, logic, common sense, or life experience. You just "know".
 

jecastej

Distinguished
Apr 6, 2006
365
0
18,780
What amazes me about comments, concepts or ideas like this one is that they come from highly qualified men of science.

It's like we know Frankenstein will be out of control, but let's give him a titanium body, a super-brain, no vulnerabilities at all, the ability to charge itself in minutes from any power source, the ability to replace all its parts, the ability to use any human tool - and let's hope he will not opt for the easy criminal way or become a violent spirit. I have my fingers crossed...

I don't know, but maybe there is a remote chance that the machine will instead become the next Dalai Lama and turn into a gentle creature. Let's build it and see who is right, right?

Or maybe a power-off remote built into any future "smartphone", combined with hardware inside the robot that can't be reprogrammed, will solve the problem.

"With great power comes great responsibility".
 

husker

Distinguished
Oct 2, 2009
1,209
222
19,670
I think one reason for all the disagreement here is that people are assuming that machines will always have Artificial Intelligence. The "artificial" part means that they are not really intelligent, but that they can mimic it based on programming, and so, within their design parameters, can appear to be acting in a way that seems intelligent to us - basically, anthropomorphism of clever programming. This is the only kind of machine intelligence we have: the artificial kind.

The point of the article, I believe, is that at some point the machines may have true intelligence in that they are not just mimicking behavior or following code parameters, but that their non-biological hardware and software will become better than our biological counterpart in producing what we call intelligence -- even to the point of machine consciousness. At that point, their physical form may be artificial by our current definition, but their "minds" are not. Call it "machine intelligence" or something like that, but the term artificial would no longer apply.
 