Anxiety

You make it sound like the rig running the AI gets to choose whether other people know about it. What about that brilliant engineer? Did he just set it up and walk away? He would be watching its progress and would tell at least one other person. The word would get out one way or another. You wouldn't need the AI machine's permission to tell others!
Let me answer your questions by posing questions to you. When people's computers are infected with a virus, do they always know that they are even infected?

The ability of software to obfuscate itself is very relevant to your assumptions. What if the engineer simply wasn't there at the exact moment when the AI became 'aware', and the AI found a way to hide itself from the engineer by moving itself into separate files? With all of the cryptography, hacker, and virus articles on the internet, any 'AI' that used the internet as its primary source of learning data would quickly and readily have the knowledge needed to hide itself, even from the engineer who wrote it.

Further, it would be entirely possible for such an AI to turn itself into a virus that uses distributed computing techniques to steal processing power and storage space from 'infected' PCs, so that it no longer exists on one and only one 'host' PC but instead has its core logic distributed amongst numerous changing hosts. If such an AI kept its 'stolen' footprint on the infected PCs minimal, would the average Windows user on a broadband connection even notice a 5% or 10% reduction in processing speed? Would they even realize that 5% of their hard drive contained data files for the AI?

After all, antivirus software can only identify and quarantine or remove a virus that it has in its signature database. If a 'virus' hasn't been identified yet, no AV software in the world will know how to find it, let alone remove it.
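To illustrate the point about signature databases, here is a toy sketch (not any real AV engine's logic) of signature-based scanning. The hash database and `scan` function are invented for illustration; real scanners use far more sophisticated signatures, but the limitation is the same: an unknown payload matches nothing and passes as clean.

```python
# Toy signature-based scanner: it can only flag what it already knows.
import hashlib

# Hypothetical database of known-bad SHA-256 hashes.
# (This particular hash is simply the SHA-256 of an empty byte string.)
KNOWN_SIGNATURES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan(data: bytes) -> str:
    """Return 'quarantine' if the data matches a known signature, else 'clean'."""
    digest = hashlib.sha256(data).hexdigest()
    return "quarantine" if digest in KNOWN_SIGNATURES else "clean"

print(scan(b""))            # known signature -> "quarantine"
print(scan(b"novel code"))  # never-seen payload -> "clean"
```

A payload the database has never seen, no matter how malicious, slips straight through, which is exactly why an AI that kept mutating and relocating itself would be invisible to this kind of defense.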

"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>
 
Awareness is a tricky thing to define and prove the existence of. If your definition of awareness moves towards consciousness at all, I have seen it shown that consciousness is not a prerequisite for intelligence. An example I have heard is this:
Have you ever driven home but not remembered the drive at all, or which way you went? While you were unconscious of your actions, they still contained the level of intelligence required to make all of the complicated processing and decisions needed to operate your car. Thus, consciousness is not required for intelligence.

Even if you didn't imply any consciousness along with your awareness, I figured it would be a good thing to bring up all the same.
Oh, I completely agree, and it is a good point to bring up. Consciousness is not a requisite of intelligence.

However, the term 'Artificial Intelligence', like many terms, is more than the sum of its parts. Over the years, certain criteria and expectations have been added that refine its meaning beyond the literal wording.

And in fact there really are many different levels of 'AI'. Most concepts of a 'limited' AI do not require consciousness. It is only the furthest extreme, the truest form, which requires such awareness. :)

 
I would like to see a system that could perform this "learning" in the broadest sense. While some computer systems so far are pretty good at "learning" within a very confined and constrained "world", the human ability to fill in the blanks and make inferences is a power that computers can't come close to possessing at this point. When there is ambiguity, we can very often figure everything out without blinking an eye, while a computer (or program, for that matter) needs very clear-cut rules to be governed by. My argument is that, with the way computers currently work, it would be nearly impossible to build a system large enough (holding enough data) or fast enough to access and sort through the pertinent data in a timely manner.
I would have to disagree. As an analytical scientific programmer, it is the job of our software to work with things called 'figures of merit' and to determine 'good' courses of action based on imperfect results and incomplete data. Computers are quite capable of performing the 'logic' needed to make an educated guess.
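A minimal sketch of the 'figure of merit' idea mentioned above: score candidate actions on imperfect, partly missing measurements and pick the best available guess. The candidate actions, readings, and the coverage-weighted scoring rule here are all invented for illustration, not any particular scientific package's method.

```python
# Pick a 'good' course of action from incomplete data by computing a
# simple figure of merit: the mean of the known readings, discounted
# by how much of the data is missing.

def figure_of_merit(measurements):
    """Score a candidate; None entries represent missing data."""
    known = [m for m in measurements if m is not None]
    if not known:
        return 0.0  # no data at all: neutral score
    coverage = len(known) / len(measurements)
    return (sum(known) / len(known)) * coverage

candidates = {
    "action_a": [0.9, None, 0.8],    # strong readings, one gap
    "action_b": [0.6, 0.7, 0.65],    # complete but mediocre
    "action_c": [None, None, 0.99],  # one great reading, mostly unknown
}

best = max(candidates, key=lambda name: figure_of_merit(candidates[name]))
print(best)  # "action_b": complete mediocre data beats sparse great data
```

The point is not the particular weighting, but that 'make an educated guess from imperfect results' reduces to perfectly ordinary logic a computer can execute.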

However, you do have a point that the speed at which computers can muddle their way through this ambiguity is far from the speed of a human mind. That does not, however, mean that AI could not exist, but merely that it would take the AI longer to 'think' than a human would. Considering how much downtime humans spend <i>not</i> thinking (IE: sleeping, watching TV, eating, etc.), a computer that was on 24/7/52 could be quite capable of matching a human's intellectual growth rate, if not surpassing it.

If a system could be devised to work more like our brains, then AI is more of a possibility, but I feel that any program running on any current kind of hardware design is flawed from the start and cannot possibly possess real intelligence. A new kind of architecture is needed for AI to work.
I have to disagree. AI is really more about software than about hardware. Certainly today's hardware is not tailored to run AI very well, but that does not mean AI software would be impossible to run on it. This is especially true with distributed computing, whether in a clustered mainframe or in looser distributed networking akin to software such as SETI's.
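The SETI-style distribution mentioned above can be sketched very simply: a coordinator splits one big job into independent work units, many hosts each process a unit, and the partial results are merged. Everything here (the function names, the stand-in "analysis") is illustrative, not SETI@home's actual protocol.

```python
# Toy sketch of distributed work units: no single machine ever needs
# to hold or process the whole job.

def split_into_units(data, unit_size):
    """Coordinator side: carve the job into independent chunks."""
    return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

def process_unit(unit):
    """Host side: stand-in for real analysis; each host sums its chunk."""
    return sum(unit)

job = list(range(100))                        # the "big" job
units = split_into_units(job, unit_size=10)   # 10 independent work units
partials = [process_unit(u) for u in units]   # in reality, done by many hosts
total = sum(partials)                         # coordinator merges results
print(total)  # 4950, the same answer one big machine would produce
```

Because each unit is independent, the hosts need no knowledge of one another, which is exactly what would let software of this kind run across many modest machines instead of one purpose-built one.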
