Guesstimate of computers in 20 years

You know computing, but you lack programming knowledge. Every computer needs to be told what to think, linear or not. The fact that the AI doesn't follow the linear pattern and is more of an abstract calculating machine still doesn't negate that you need an external input somewhere along the line for it to actually do its thinking. There must be a beginning somewhere.

Actually, I have been programming for 20+ years, so I do NOT lack programming knowledge.

A piece of software can be written with general guidelines (rules?) and allowed to learn associations and methods via experimentation.
The programmer didn't TELL the program what it learned; they told it HOW to learn. This is the same way that nature equipped humans with the built-in ability to learn, and even pre-installed some knowledge (i.e. physical things like pain is bad, how a heart beats, the difference between quiet and loud, etc.).
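Here's a minimal sketch of that idea in Python (everything in it is hypothetical, just to show the shape of it): the programmer supplies only the available actions, the reward signal, and the update rule; the valuations themselves are filled in by the program's own trial and error.

```python
# Sketch of "telling a program HOW to learn, not WHAT to learn":
# the value table starts empty and is filled in by experimentation.
import random

actions = ["recoil", "approach", "ignore"]   # abilities the programmer grants
values = {a: 0.0 for a in actions}           # learned valuations, initially blank

def reward(action):
    # Stand-in for the environment: touching the hot thing hurts.
    return -1.0 if action == "approach" else (0.5 if action == "recoil" else 0.0)

for trial in range(1000):
    # Explore sometimes, otherwise exploit what has been learned so far.
    if random.random() < 0.1:
        act = random.choice(actions)
    else:
        act = max(values, key=values.get)
    # Simple incremental update: nudge the valuation toward the observed reward.
    values[act] += 0.1 * (reward(act) - values[act])

print(values)   # "approach" ends up negative without ever being coded as bad
```

The point is that nobody ever wrote "approaching is bad" into the code; the program concluded that for itself by experimenting.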

Also, do you really think that humans think prior to conception? Hello... talk about turning on the computer!
Take a neural network with enough connections, connect it to some input devices (nerve endings to skin, muscles, taste, hearing, sight), preconfigure some basic rules (our hindbrain is pretty much hard-wired and does not need to learn how to work), add the ability to store and replay nerve recordings (remember tastes, smells, images), and preset some valuations (i.e. recoil from pain, eat to avoid/remove hunger pangs, etc.), and you have a primitive brain.
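If you wanted to doodle that recipe as code, it might look something like this (a toy sketch, every name and number invented for illustration):

```python
# The "primitive brain" pieces described above: input devices, a few
# hard-wired rules, replayable recordings, and preset valuations.
from dataclasses import dataclass, field

@dataclass
class PrimitiveBrain:
    senses: dict = field(default_factory=lambda: {"skin": 0.0, "taste": 0.0, "hearing": 0.0})
    reflexes: dict = field(default_factory=lambda: {"pain": "recoil", "hunger": "eat"})  # hard-wired hindbrain
    memory: list = field(default_factory=list)       # stored nerve "recordings"
    valuations: dict = field(default_factory=lambda: {"pain": -1.0, "food": +1.0})

    def perceive(self, sense, level):
        self.senses[sense] = level
        self.memory.append((sense, level))           # remember the recording for later replay

    def react(self, stimulus):
        # Preset rules decide the reaction; learning would refine them over time.
        return self.reflexes.get(stimulus, "explore")

brain = PrimitiveBrain()
brain.perceive("skin", 0.9)
print(brain.react("pain"))   # -> "recoil"
```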

Emotions developed side by side with intelligence, not before it. To make an AI "feel" you need to have some sort of calculation or algorithm or formula actually lay out the groundwork for the emotion.

Emotions developed long before SAPIENT intelligence. My comment was that so many authors and even programmers assume that an AI would have emotions only AFTER developing complex thinking. You restate my opinion that emotions are REQUIRED for any intelligence beyond the simplest. They are how creatures react and evaluate their reactions: angry means attack, sad means regret something and learn to change your actions, scared means run away; then there's jealousy, lust, etc. The more involved the emotion, the more intelligent the creature can be.

Look at Data from Star Trek. By all means the most advanced "thinking" machine, yet he still needed a pre-programmed emotion chip installed, with algorithms that specifically reacted to situations in a mathematical manner.

This is the biggest example of my point. Data HAD to have some emotions to act anywhere near as human as he did prior to getting the chip. Emotions are really a mathematical bias on how a creature reacts to stimuli, and a brain is based on AMOUNTS of input from section to section of the brain. That's why chemical stimulants and depressants affect us the way they do. They enhance or depress the effects of various areas of the brain, or increase or decrease the reaction to certain hormones in the brain.

Actually, hormones are the precursors to emotions. They are the cause and the carriers of the various emotions.

This is why emotions are necessary for an AI. Without emotions, an intelligence has no valuation on actions. Emotions are mental state. And human/primate emotions are not that much harder to implement than simple pain/fight/fear/mate emotional states. They are in fact easier to implement than the more advanced intellectual states.

After all, all animals react with some degree of emotion, and more advanced mammals share many or most human emotions (dogs react with hate, love, fear, jealousy, etc.).
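A back-of-the-envelope sketch of "emotion as a mathematical bias" might look like this in Python (the numbers and names are invented): the current emotional state is just a set of weights that decides which reaction wins and how strongly.

```python
# Each emotional state reweights how strongly a stimulus maps to an action.
emotions = {"fear": 0.8, "anger": 0.1, "hunger": 0.3}   # current mental state

responses = {
    "fear":   ("run away", 1.0),
    "anger":  ("attack",   1.0),
    "hunger": ("eat",      1.0),
}

def react(emotions):
    # The dominant emotion wins; a stimulant or depressant would simply scale these weights.
    name = max(emotions, key=emotions.get)
    action, strength = responses[name]
    return action, emotions[name] * strength

print(react(emotions))   # -> ('run away', 0.8)
```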


Your basic fallacy is your hang-up on software needing to be told what to do. A good AI just needs to be programmed with some rules, abilities, and valuations, and can then learn what it can do, evaluate what it should do or wants to do, and then react accordingly. Just because most current software isn't programmed this way doesn't mean computers are forever incapable of being programmed this way.
 
Well, I apologize for making a remark like that. It was quite ignorant of me. My knowledge of programming is a lot less than yours. (I'm still a student.)
But think about it. You are saying that we will completely eliminate the human equation. Do you truly think that this is possible, never mind the ethics of such an idea?
Hypothetically, let's say that the computer no longer has need of our input for it to do a job.
Will the computer start to think about other matters? God forbid it starts to think against the Prime Directives (if the master has already programmed such into the machine). Do you program it to think of only one task? A Deep Blue type of computer?
Now looking at the ethics of this, would it be right for us to control a thinking "sentient" machine, or in this case "being"? Wouldn't this be tantamount to slavery?
I quite enjoy these debates.

Edited for typos
 
I just want to chime in with the whole "baby versus a PC" thing. First off, babies ARE programmed, if not in any conventional way. (Mommy DNA / 2) + (Daddy DNA / 2) = Baby DNA. Then the Baby DNA "program" is used to create the baby, along with input from its environment.

In the same way, if we could program DNA in a computer sense, we would be creating Program A that then creates Program B. Program B is what you would compare to the baby, not Program A.

Do I have a point? No, not really. But this is an interesting logic exercise. 😀
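For fun, the same logic exercise in Python (everything here is made up): Program A holds the "DNA" and writes out Program B, and it is Program B that then takes input from its environment.

```python
# Program A: combine the parents' "DNA" and emit the source of Program B.
mommy = {"eye_color": "blue",  "learning_rate": 0.2}
daddy = {"eye_color": "brown", "learning_rate": 0.4}

# (Mommy DNA / 2) + (Daddy DNA / 2): half the traits come from each parent.
baby_dna = {
    "eye_color": mommy["eye_color"],
    "learning_rate": round((mommy["learning_rate"] + daddy["learning_rate"]) / 2, 2),
}

program_b = f"""
knowledge = []
def learn(environment_input, rate={baby_dna['learning_rate']}):
    knowledge.append((environment_input, rate))
    return knowledge
print(learn("bright light"))
"""

exec(program_b)   # "running" Program B; the baby is compared to B, not to A
```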
 
Now looking at the ethics of this, would it be right for us to control a thinking "sentient" machine or in this case "being"? Wouldn't this be tantamount to slavery?


It might be tantamount to serfdom or serfery if we allowed the sentient machine to reside on our lands. :wink:
 
In twenty years' time the human race will be wondering why we stupidly built computers that need 1 kW to power them, and why we drove SUVs that get 10 miles to the gallon. We'll wonder why we poisoned ourselves with chemicals like MSG, and why we ever thought global warming didn't exist.

As for PCs... well, if we haven't wiped ourselves out through global drought or war, then maybe we'll use ultra-efficient sub-1 GHz micro clients at home. They will interface to some grid-based, noded computer system using a wireless connection over the power lines. The only difference between PCs will be case styling, as all data will be stored remotely and all external devices will be wireless too. There will be neural interfaces, RFID-type tracking will keep everything in order, and the government will know everything you do... I mean everything. First-person shooters will be banned due to their high level of violence... leaving only the military with access to them. Tetris will be taught to 6-month-old babies as a classic strategy game.

Okay, I could be wrong, but I think that the unexpected will be what happens... because we are almost always incorrect when we try to predict the future. :roll:
 
After reading the "Who can list the greatest computer??" thread, it's interesting to think about how big an upgrade my current computer is over the first computer I owned 20 years ago, a Commodore 128.

Current PC: 3.2 GHz 64-bit Core 2 Duo
C128: 6510 8-bit CPU @ 1.023 MHz
New vs. old: 25,600 times
20 years from now: 81,920 GHz equivalent (maybe 3.2 GHz with 51,200 cores)

Current PC: 2 GB of RAM
C128: 128 KB
New vs. old: 15,625 times
20 years from now: 31 terabytes

Current PC: 320 GB hard drive
C128: 170 KB drive
New vs. old: 1,882,352 times
20 years from now: 600,000 terabytes

Current PC: 1920x1200 (2,304,000 pixels) display
C128: 320x200 (64,000 pixels) display
New vs. old: 36 times
20 years from now: around 12,000x8,000, or 82,944,000 pixels

Current PC: 113 keys
C128: 90 keys
New vs. old: 1.26 times
20 years from now: 142 keys

Current PC: 700 watts
C128: 40 watts
New vs. old: 17.5 times
20 years from now: 12,250 watts

Please check my math; my calculator doesn't use commas, so it was somewhat hard to read with all those zeros.
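Here's a quick Python recomputation of the above, using the same assumptions as the post (decimal prefixes, and the CPU row also folds in the 8-bit to 64-bit word-size jump), in case anyone wants to double-check it:

```python
# Recompute each ratio, then assume the same 20-year jump happens again.
cpu_ratio   = (3200 / 1.023) * (64 / 8)   # ~25,000x (rounding the clock to 1 MHz gives 25,600x)
ram_ratio   = 2_000_000 / 128             # 2 GB vs 128 KB
disk_ratio  = 320_000_000 / 170           # 320 GB vs 170 KB
pixel_ratio = (1920 * 1200) / (320 * 200)
key_ratio   = 113 / 90
watt_ratio  = 700 / 40

print(f"CPU:    {cpu_ratio:>12,.0f}x -> {3.2 * cpu_ratio:,.0f} GHz equivalent")
print(f"RAM:    {ram_ratio:>12,.0f}x -> {2 * ram_ratio / 1000:,.1f} TB")
print(f"Disk:   {disk_ratio:>12,.0f}x -> {320 * disk_ratio / 1000:,.0f} TB")
print(f"Pixels: {pixel_ratio:>12,.0f}x -> {1920 * 1200 * pixel_ratio:,.0f} pixels")
print(f"Keys:   {key_ratio:>12,.2f}x -> {113 * key_ratio:,.0f} keys")
print(f"Power:  {watt_ratio:>12,.1f}x -> {700 * watt_ratio:,.0f} watts")
```

It comes out within rounding of the figures above (the CPU ratio lands nearer 25,000x than 25,600x because of the 1.023 MHz clock).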

That is quite some speculation you're doing there, and I like it!

Let's just hope PCs are that powerful, without the MASSIVE power drain.
 
Now looking at the ethics of this, would it be right for us to control a thinking "sentient" machine or in this case "being"? Wouldn't this be tantamount to slavery?


It might be tantamount to serfdom or serfery if we allowed the sentient machine to reside on our lands. :wink:

Can you imagine saying, "Bow, you knave" to a calculator? Robin Hood - CPU style.... :wink:
 
All I want is a computer that can come up with the all-meaning question to the ultimate answer....

[Image: Answer_to_Life.png]

The meaning of Lif 😀
 
But think about it. You are saying that we will completely eliminate the human equation. Do you truly think that this is possible, never mind the ethics of such an idea?
Hypothetically, let's say that the computer no longer has need of our input for it to do a job.
Will the computer start to think about other matters? God forbid it starts to think against the Prime Directives (if the master has already programmed such into the machine). Do you program it to think of only one task? A Deep Blue type of computer?
Now looking at the ethics of this, would it be right for us to control a thinking "sentient" machine, or in this case "being"? Wouldn't this be tantamount to slavery?

I quite agree that the morality of AI use needs to be addressed.

If you create an AI for a job, isn't there the implicit threat you will turn it off (kill it) or reprogram it (mind wipe/control)?

If you build an AI for a single job, then wipe it afterward in preparation for the next job, is that murder?

What happens if you have a CPU capable of hosting multiple AIs? What are the ethics of replicating the AI when you need one for something, then flushing it or merging it back with the main AI (i.e. the Matrix idea of replicating Agent Smith a thousand times when you need to)?
Is it murder if you terminate ANY instance of the AI, or just if you terminate/flush the last copy?
Is it murder/immoral if you restore an AI back to a backup copy (i.e. because it learned the wrong things)?
Is it immoral to artificially impair/dumb down an AI for a particular job (i.e. make a stupid version targeted for a menial job, or limit its memory so it forgets anything new after a day)?

If you can make a medium-level AI (say dog or chimp) and put it in a robot, is that OK? What if we then selectively enhance certain parts to make it better at the job without inducing human-level intelligence? Is this any more a form of slavery than breeding a dog that is good at and likes some form of work (i.e. guard dog, drug sniffer, or guide dog)? What if you breed (tweak in an artificial reality?) these until you have the perfect design, then mass-produce them?

And of course there's the age-old sci-fi question of what happens if you add more "neurons" to the AI's design in order to get a smarter AI.
Is there a limit, and do we even want to see an AI with an IQ of 500 or 5000?

Of course, you have to address the same issue if we ever figure out how to add another layer of grey matter to our brains. I even remember a book that had a plot line about this.
 
If you are interested in the whole idea of transhumanism and the possibility of uploading your mind to a PC in the future, then why not check out the moral and ethical issues raised by Nick Bostrom at nickbostrom.com. He covers such great ideas as whether we should be worried that our real life is actually a simulation (and WE could get switched off). It can be heavy in places but well worth a dive if you find these topics interesting.

Will machines outsmart humans?
http://www.nickbostrom.com/2050/outsmart.html

:idea:
 
I think the power requirement is the least likely aspect of your post to become a reality. All the rest... who knows?

Hm, well, if by then we have developed cold fusion, or even better, portable cold fusion power cells (meaning we don't need a massive distribution grid), and maybe a regenerative heat dissipation system that turns the energy radiated as heat back into electricity, then the power requirement will come to pass!

Although, as the device would to an extent be 'self-powered', and therefore more efficient, the load at the wall wouldn't be the full wattage quoted once the thing had heated up.
 
In 2001, the institute of light physics was able to stop light; they estimated that by 2009 a GameCube-sized optronic computer would be 8,000x more powerful than the current day's supercomputers. They estimated it at approximately 8 petahertz, or 8,000 terahertz.

Obviously the governments will not allow these computers to be released to the public. I believe in 20 years optronic computers will be available, but the real horsepower will be reserved for governments and large companies.
 
Stop light? As in freeze it in mid-air? What would be the practical applications of it? If the devices required could be shrunk sufficiently, I suppose it could be used to make 3D holographic displays.
 
I don't remember the name of the book.
The plot line involves taking three people with mental retardation (I think via brain damage), putting their brains in support boxes, and applying a layer of neural tissue around their brains.
It ended up giving them really high IQs and, I think, psionics or something.

The Three Laws were a great solution for dealing with AIs whose motivations the programmers do not understand, so you need protection against them enslaving or destroying humans. The problem is that (in the Asimov books) a fourth law was added that basically says the whole of mankind matters more than the individual, so the robots ended up doing what THEY thought was better for mankind, thus defeating the whole principle.

It boils down to: do you harm the individual for their own good, or for the good of the group? If you allow that, you allow all kinds of problems.

You really need a way to ensure they are benevolent. The problem is who decides what is benevolent, and how far can you go?
 
Wow, what a thread.

All I can say is that 'if' I survive the next 20 years, I sure hope my computer looks like Seven of Nine and comes loaded with all the 'performance' programs from the Far East...

At least I could program her not to care which movie stars are sleeping with which.

As for the AI debate, which is always academic but always fascinating, it'll probably happen one day. Shoot down a few laws of thermodynamics and quantum mechanics and who knows, maybe God doesn't play dice.

Sadly, though, those acting on his/her/its/their behalf will probably prevent us from ever really seeing that future. Maybe we can start a war between religion and Microsoft; aren't they the only things holding us back from real global progression?
 
This is the reason you see (in movies) the typical four unalterable directives (prime directives).

The biggest of which is that no machine will ever harm, or allow harm to come to, a human...

See Isaac Asimov... (although he wrote science fiction, he is well known as the father/founder of robotic ideals/laws).

Isaac Asimov, who, in my opinion, is the greatest writer of all time, published his three laws in a short story called "Runaround" which was published by Street and Smith Publications, Inc. in 1942. The three laws were stated as follows:

* A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
* A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
* A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Several readers have asked about Asimov's Zeroth Law. Finally, I have found time to include it. The following paragraph is quoted from Roger Clarke's page on Asimov's laws:

Asimov detected as early as 1950, a need to extend the first law, which protected individual humans, so that it would protect humanity as a whole. Thus, his calculating machines "have the good of humanity at heart through the overwhelming force of the First Law of Robotics" (emphasis added). In 1985 he developed this idea further by postulating a "zeroth" law that placed humanity's interests above those of any individual while retaining a high value on individual human life.

Zeroth law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

****Now the cool part****

We are already attempting, or accomplishing, the breaking of nearly all the laws. We have, for now, manned robots with the capability to find and destroy targets/enemies...

We are purposely breaking these rules, because for the most part these devices/robots are still controlled manually (see the Predator UAV).

They are however working on autonomous vehicles which could carry out these tasks without the manned part.

Current practicality requires, in some cases, as determined by a few humans, violating the First Law in order to implement the Zeroth Law.

For example, the need to harm a terrorist (one human) to protect "humanity".

Self-defense gets a little tricky sometimes.
 
Yeah,

That is why I said above that it is already basically flawed...

Talk about programming loops :)

Do I kill the guy? (Searching directive 1... found answer = NO), (Searching second directive... = YES), (Violation of directive 1 found) :)

Could be quite circular if ya think about it...
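A toy version of that loop in Python (purely illustrative, not anyone's actual directive engine) shows why you need something to break the cycle:

```python
# Harming one human to protect humanity trips the First Law, the Zeroth Law
# overrides it, which re-triggers the First Law check... so cap the recursion.
def evaluate(action, depth=0):
    if depth > 3:
        return "deadlock: hand the decision back to a human"
    if action["harms_human"]:
        # First Law objection...
        if action["protects_humanity"]:
            # ...overridden by the Zeroth Law, which sends us around again.
            return evaluate(action, depth + 1)
        return "forbidden by First Law"
    return "permitted"

print(evaluate({"harms_human": True, "protects_humanity": True}))
```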
 
Ischy,

I find your narrative intriguing. So is there a hierarchy of AI?

1. Energized (wait state) - the AI is "on" ready to store or distribute information
2. Data handling - predefined processes: I/O
3. Algorithm - apps, number crunching
4. Adaptation - problem solving apps that adapt to changing stimuli
5. Learning - the AI uses its system to change or create its own apps to solve new problems not imagined by the creator
6. Independence - the AI does all of the above without maintenance (at first this may seem a little trivial, but I don't think so)
7. Awareness - the AI becomes aware of what it is doing, that it "IS"

As Spock might say, "Fascinating."
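If you wanted to pin that hierarchy down in code, one hypothetical way to write it is as an ordered enum, where each level implies the ones below it:

```python
# The proposed hierarchy of AI, as an ordered enumeration.
from enum import IntEnum

class AILevel(IntEnum):
    ENERGIZED     = 1  # wait state: on, ready to store or distribute information
    DATA_HANDLING = 2  # predefined processes: I/O
    ALGORITHM     = 3  # apps, number crunching
    ADAPTATION    = 4  # problem solving that adapts to changing stimuli
    LEARNING      = 5  # changes or creates its own apps for problems not imagined by the creator
    INDEPENDENCE  = 6  # does all of the above without maintenance
    AWARENESS     = 7  # aware of what it is doing, that it "IS"

def capable_of(machine: AILevel, task: AILevel) -> bool:
    # A machine at one level implies the capabilities below it.
    return machine >= task

print(capable_of(AILevel.LEARNING, AILevel.ALGORITHM))   # True
```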