Guesstimate of computers in 20 years

What do you notice? I tell the machine what the input is. As long as I have to do that, even if it can predetermine what the cout should be, or what the process is, computers will never be sentient.

My surroundings supply my mind with input, does that mean I am not sentient?
 
Is,

As it turns out, AI and neural networks are having difficulty with the most basic concepts.

For instance:

Is a dolphin a fish or a mammal? This is a difficult thing for an AI to figure out. Fish = an animal that swims through the water, has fins, lives in the ocean, and is cold-blooded. Mammal = warm-blooded, breathes air, has hair...

But you have to tell it that some mammals have characteristics that also belong to fish or amphibians. Now it has difficulty deciding which is which once you start adding each characteristic... Let's not even get into the duck-billed platypus...
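To make the gray-area problem concrete, here is a toy rule-based classifier in the checklist style described above. The feature names and rules are invented for illustration, not taken from any real AI system:

```cpp
#include <cassert>
#include <set>
#include <string>

// Toy rule-based classifier: classify an animal by a fixed checklist of
// features. Feature names here are invented for illustration.
std::string naiveClassify(const std::set<std::string>& f) {
    // Rule derived from typical fish: swims and has fins.
    if (f.count("swims") && f.count("fins")) return "fish";
    // Rule derived from typical mammals: warm-blooded and breathes air.
    if (f.count("warm_blooded") && f.count("breathes_air")) return "mammal";
    return "unknown";  // the gray area: the rules don't cover every case
}
```

A dolphin ({swims, fins, warm_blooded, breathes_air}) matches the fish rule first and gets misclassified. Fixing that means reordering rules or adding an exception, and every borderline animal (the platypus, say) forces yet another special case.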

Also,

If man + woman = marriage = good, then man + woman (mistress) = bad somehow?...

These basic concepts have still given AI difficulties... There is a long way to go...
 
I didn't think of it that way. Good point. Now how do I rebut this?.....
I can't exactly put this into words, but I believe this article can do it better:
When IBM's Deep Blue computer defeated chess champ Garry Kasparov last week, the victory seemed to mark the coming of age of a new kind of computer -- one that can think well enough to outsmart one of the most brilliant minds of our era.
But that's not what happened. Cleverly written software beat a chess champion in a game that computers can play quite well. That's all.
The idea that computers can think is hogwash. Computers have a hard enough time doing what they do best -- calculating, adding and multiplying digits, pushing bits of information here and there, splashing pictures on your screen. They're not equipped to think.
Yet this idea that a really sophisticated software program can make a computer think persists in our popular culture. We seem ready to ignore everything we know about computers just to give them a touch of human personality.
A child of 3 years old thinks. A child that age adapts to the environment, learns new responses, discards unsuccessful techniques and knows how to tease.
Find a computer that can tease.
Want another example? Your Uncle Henry has more brains in his sleep than any computer does. That's because he dreams.
Find a computer that dreams.
What we know about thinking is almost nothing. But what we know about computers fills thousands of books. That's why we tend to be smug about a dangerous topic. If we know a lot about something, we feel close to it. If we can understand something, we can assign a rating to it. We can turn it into something familiar.
How does your computer know that you just misspelled a word? Well, it looks up the letters you typed in a hash table, computes the checksums, zeros out the rigmarole and diddles with the flangerang. Or something like that. Trust me, the technospeak is gruesome, but the process is not that hard to follow.
How do you feel about the Detroit Tigers? What do you think of sunsets over the ocean? Have you ever really fallen in love?
Beep! Beep!
That's what you'd get from Deep Blue. The rest of us, humans all, would be able to write essays or sing songs about sunsets and love -- and maybe even the Tigers.
Computers that think? Pshaw. Give me computers that know how to open up the same window in the same place on my screen twice in a row. Or that know how tired I am on Thursday mornings. Or that know how to laugh.
That last part -- the element of humor -- is the only area in which I'll give a computer an even break. I made up my mind the other day when I saw a cartoon someone passed along in the office mail.
It showed a message from Hal, Arthur C. Clarke's mastermind computer from "2001."
"Gentlemen," Hal says in the cartoon, "I have just decided to remove Windows 95 from my hard disk."
Now that, as my Macintosh friends would say, is a thinking computer.
Copyright © 1997, The Syracuse Newspapers

Interesting read, if I do say so myself. As to your original question of whether your surroundings count as input, I ask you this: What part of your surroundings? Will an abstract computer be able to pick out the specific portion of its environment that will act as an input? At any time there are millions, if not more, forms of input flowing to us. Can we develop a machine that can sort through this data in real time and still be able to produce both a cin and a cout?
 
The Deep Blue reference is a great start... The problem is that the application being run on the machine, which then plays the game, is still based on basic IPO...

Input, Process, Output... The issues start when gray areas are added to this process.

Chess is actually easy since it involves a rigid standard and set of rules. As it turns out, the reason they chose chess as the game in which to beat a human is that the environment/rules are very tightly BOUND.
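The cin/cout framing used in this thread can be sketched as a minimal program. This is only an illustration of the Input-Process-Output pattern being discussed, with a made-up stand-in rule:

```cpp
#include <iostream>

// Minimal Input-Process-Output sketch. The "process" step is a
// predetermined rule; like the rules of chess, it is tightly bound,
// and nothing outside the rule can influence the result.
int process(int input) {
    return input * 2;  // stand-in for any rigid, well-defined rule
}

void runIPO() {
    int x;
    std::cin >> x;                    // input: handed to the machine explicitly
    std::cout << process(x) << '\n';  // output: fully determined by the rule
}
```

Everything the program can ever do is fixed in advance by `process`; a bug landing on the chessboard simply has no way into this pipeline.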

When you add additional environmental changes (like giving the machine the capability to be distracted by a bug, for instance), this becomes a much more difficult task for the computer... And that is just a single type of distraction/environmental change. The human processes not only the game but also its surroundings/temperature/lighting/smoke/illness/hunger/thirst/...

That is where the differences come in...
 
This is starting to sound like Star Trek and Matrix fantasy. Cool. Whatever it is, I want one.
I don't care if they can't think or never will be able to think. That in no way negates the fact that when Core Pentium XXXV Octo comes out, I'll be the first in line to get it. 😀

@ches111
Frankly, all this topic will be for a while is an intellectual debate. There are still a lot of gray areas, and who knows, I may be wrong. At least I hope not. If computers become sentient, how long will it take them to see that we humans are a threat to our own existence and decide that the only way to preserve our survival is to enslave us or do something else to us à la The Matrix or I, Robot... :?
 
The ability of the human to fully adapt to its environment is what will be the defining standard.

Senses of touch, smell, sight, taste, and hearing, plus a suite of other environmental recognitions...

Computers can already do many of these things... Problem is they cannot process them all at the same time with accuracy and adapt to those inputs (see DARPA)..

Even some simple tasks like navigating through/around a mud hole can be difficult...
 
That article implies that there is something more to our brains than electrons flying back and forth. Most people share this opinion, and I can see why. Frankly, we (our brains) are the most complex system on this planet, maybe in this universe, so naturally we have a great amount of reverence for ourselves. It feels good to know that our minds are the utmost in evolutionary progress and that it's hard for modern science to even scratch the surface.

But ultimately our brain, like everything created by nature, is a rule-based, physically constructed, definable system of interactions between components. It can be understood, and it will be understood within some finite amount of time.

I'm now starting to touch on what my other post said. Basically, what happens when we create a perfect model of the human brain on a computer, complete with every neuron, synapse, etc.? I'll agree that's a mammoth task, but we already simulate lots of "real" things on computers every day to a precision that mimics reality. We don't have to program intelligence or any such thing; we simply need to create a model of the physical construct, and evidently (as evidenced by our species) intelligence will follow.
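For a flavor of the "model the physical construct" idea, here is the simplest possible artificial neuron: a weighted sum of inputs that "fires" past a threshold. A biological neuron is vastly more complex, and the weights and threshold below are arbitrary illustrative numbers:

```cpp
#include <cstddef>
#include <vector>

// A single artificial neuron: compute a weighted sum of the inputs and
// "fire" if it reaches the threshold. This only sketches the modeling
// idea; real neurons have far richer dynamics.
bool fires(const std::vector<double>& inputs,
           const std::vector<double>& weights,
           double threshold) {
    double sum = 0.0;
    for (std::size_t i = 0; i < inputs.size() && i < weights.size(); ++i)
        sum += inputs[i] * weights[i];
    return sum >= threshold;
}
```

The brain-simulation argument above is essentially: wire up enough of these units (with biologically faithful dynamics) in the right pattern, and intelligence should emerge from the model rather than needing to be programmed in directly.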
 
This is the reason you see (in movies) the typical four unalterable directives (prime directives).

The biggest of which is no machine will ever harm/or allow harm to a human...

See Isaac Asimov... (although he wrote it as science fiction, he is well known as the father/founder of robotic ideals/laws).

Isaac Asimov, who, in my opinion, is the greatest writer of all time, introduced his three laws in a short story called "Runaround," published by Street and Smith Publications, Inc. in 1942. The three laws were stated as follows:

* A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
* A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
* A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Several readers have asked about Asimov's Zeroth Law. Finally, I have found time to include it. The following paragraph is quoted from Roger Clarke's page on Asimov's laws:

Asimov detected as early as 1950, a need to extend the first law, which protected individual humans, so that it would protect humanity as a whole. Thus, his calculating machines "have the good of humanity at heart through the overwhelming force of the First Law of Robotics" (emphasis added). In 1985 he developed this idea further by postulating a "zeroth" law that placed humanity's interests above those of any individual while retaining a high value on individual human life.

Zeroth law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
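The four laws read like a priority-ordered constraint check, which can be sketched as code. The predicates below are invented stubs, and that is exactly the point this thread raises: someone has to define what "injure" means, and the sketch also simplifies away the conflict clauses between the real laws:

```cpp
#include <string>

// Hypothetical predicates, invented for illustration. The hard part is
// that a human must define these: what exactly counts as "injure"?
bool injuresHumanity(const std::string& a) { return a == "launch_nuke"; }
bool injuresHuman(const std::string& a)    { return a == "punch" || a == "launch_nuke"; }
bool disobeysOrder(const std::string& a)   { return a == "ignore_order"; }
bool endangersSelf(const std::string& a)   { return a == "self_destruct"; }

// Check an action against the laws in strict priority order:
// Zeroth, then First, Second, Third (conflict clauses omitted).
bool permitted(const std::string& action) {
    if (injuresHumanity(action)) return false;  // Zeroth Law
    if (injuresHuman(action))    return false;  // First Law
    if (disobeysOrder(action))   return false;  // Second Law
    if (endangersSelf(action))   return false;  // Third Law
    return true;
}
```

Any action the predicates fail to recognize sails straight through as permitted, which is why a single missed constraint, or a too-broad definition of "harm," matters so much.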

****Now the cool part****

We are already attempting/accomplishing the breaking of nearly all the laws. For now we have (manned) robots with the capability to find and destroy targets/enemies...

We are purposely breaking these rules because, for the most part, these devices/robots are still controlled manually (see the Predator UAV).

They are however working on autonomous vehicles which could carry out these tasks without the manned part.
 
I'll agree with you on this point. We can possibly mirror the workings of the human brain.
Do I believe that it'll happen? Obviously no. But could it be a possibility? From the information that you've shared, I'll change my position to a minuscule maybe... a very minuscule maybe....
Now as to whether we can duplicate intelligence, I'm not too sure about it. Can we make a computer develop wisdom? Hell no.
 
What about the impacts of something as simple as adrenaline?

You see, the brain is just that: a brain... Where would it be without the further inputs it receives that tell it to speed up/slow down the body, increase environmental awareness (when in danger), or decrease environmental awareness (when sleeping)?

Without the input, where are you?... Also, the brain has proven to be totally adaptive in and of itself... It can even reconfigure regions of itself to take on tasks normally performed by other parts of the brain (when a trauma happens).

Just a few questions...
 
* A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
* A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
* A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Zeroth law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Would any of the Prime directives prevent a Matrix like scenario?
 
This is a good chance to introduce a good read to everyone.

"The Case for Faith" by author Lee Strobel.

I know, I know, it is a Christian book... But a good read nonetheless...

The author starts out on a journey to disprove the existence "not of God, but any superior being/beings". He then finds that science actually HELPS PROVE instead of disprove the existence of an "Intelligent Creator".

It is a good read no matter your religious affiliation...

Some of its contents apply directly to what we are talking about now...
 
Da,

Here is the cool part... What exactly is the definition of "Injure"?

Who puts that definition in/describes it?

How well do you define it? (see my comments above about gray areas/Dolphins)

What are the constraints surrounding it?

Scary to think what will happen if we miss a constraint...

Can a robot acting on its orders/directives enslave us to protect us?

It could happen....
 
The safest thing to do would be to not enable a "Sky Net" 😉 type of situation...

The other safest thing to do would be to not give capabilities to robots that would far exceed our own (arm it with a nuke for instance).

Taking out a single robot, or even a small uprising of robots, would be easy if the two things above are in place, right?

Taking out armies of connected devices that are adaptive in combat and armed with weapons that could greatly harm us is a "not so easy" task.
 
Actually it goes further than that..

It is not only up to the user to interpret but to also implement correctly...

You see if the guy in charge has nothing but the best of intentions he/she could still make a mistake...

They could miss something, or have a bug/defect in something that they actually caught...

The possibilities are endless when the robots are interacting with a somewhat endless input called the human...

You ever hear of idiot proofing code? All that means is you will eventually find the bigger/better idiot.. :)

How do you constrain EVERYTHING?
 
I'll agree with you on this point. We can possibly mirror the workings of the human brain.
Do I believe that it'll happen? Obviously no. But could it be a possibility? From the information that you've shared, I'll change my position to a minuscule maybe... a very minuscule maybe....
Now as to whether we can duplicate intelligence, I'm not too sure about it. Can we make a computer develop wisdom? Hell no.

First, I'll agree that mirroring the physical processes of human brain is not the same as creating intelligence. But they really go hand-in-hand; once we have one, the other will almost certainly follow shortly.

Your post is implying one of two things:

(a) Technology, and by extension our species, will reach a limit and will simply no longer progress. I'd like to hear you explain why and how this would happen. Specifically, what would prevent us from being able to fully model a brain? Not to be unfair, but if you reference any degrees of something or use any numbers in your argument ("It's not possible because the brain has X of Y, which is far too many for a computer to model") then you're probably gonna be proven wrong by the sheer force of technological growth.

(b) There is more to our existence than simply interactions between tangible physical components. That perhaps there is a spiritual or mystic aspect to us that science will simply never be able to duplicate. Although I respect such an opinion, it unfortunately basically ends our discussion about this subject.
 
Well, I never wanted to bring my spirituality into this discussion, but yes, as you put it, there is a spiritual or mystic aspect to us that science will simply never be able to duplicate. I'm sorry if it ended the debate, but I could play devil's advocate if you want to continue.
 
As far as my thoughts on where computers will be in 20 years...

I see a huge push toward miniaturization... Everyone will have converged devices..

Devices that keep up on your latest email/news/communications/accounts/everything.

These devices will be the new credit card (see smart pass).

You will be able to receive your bill from the restaurant and approve the payment and see it automatically/immediately debit from the account of your choice.

Wireless will play a much larger role in our lives as all these devices will be broadband wireless capable.

Essentially you will not be able to go anywhere without your TRUE PDA/phone...
 
Well, I never wanted to bring my spirituality into this discussion, but yes, as you put it, there is a spiritual or mystic aspect to us that science will simply never be able to duplicate. I'm sorry if it ended the debate, but I could play devil's advocate if you want to continue.

That's all right, we played this out to its conclusion in a calm, civil manner, which is good enough for me. Thanks for the stimulating discussion. Besides, this thread has already gotten kinda OT.
 
If you think about it, this was completely on topic. The thread is about the future of computing, and discussing whether or not computing will become sentient is completely topical. Besides that, you provided excellent counterpoints to my arguments. Thank you so much. I hope I can see you around here more.

@ches111
Essentially you will not be able to go anywhere without your TRUE PDA/phone...
Even with my love of all things electronic, I think that all this connectivity is ultimately detrimental to us.
 
My phone I have currently is already very close to this ideal...

It is not the best implementation but it does somewhat act in nearly all the roles described above. (minus the restaurant interaction 😉).

UTStarcom XV6700 - Whoooo Hoooooo!!

I love the device....
 
This is where it is the responsibility of the users to maintain their own humanity...

Kind of unfortunate... As you stated in a previous thread, this medium is what most call a "Cold Media," as emotion is very difficult to convey in written communications.

The phone is often somewhat of a "Cold Media" as well...

Personal interaction is now and will always be where "it" is "at"... 😉

But you also have to think that if not for all this connectivity, we would not have had this conversation... So some good will come from it and some bad will come from it (both sides already proven daily)...