Guesstimate of computers in 20 years

Thank you for clarifying your point. Yes, I did think you meant how the computer looked, and yes, I also thought it was the stupidest **** ever. Can you blame me for taking what you said literally, though? You didn't elaborate, so naturally I assumed you meant exactly what you said.

No, you just didn't try to make sense of what I said and were quick to react.

Also, you could have told me in your first reply what you actually meant. I think it was obvious from my post that I thought you meant the 'looks' of the computer.

I disagree when you say that computers today perform with astonishing speed. It can take hours to transcode a movie file with current tech. That's pretty inconvenient for a great many people. However, when we can do the same operation in 5 minutes, then I will concede that it is astonishingly fast.

I agree with you that for specific tasks the computer still has a ways to go, but for the general market of people who buy the majority of computers, there isn't much left for improvement. If you buy a $1000 computer today or a $1000 computer a year ago, it wouldn't make any difference to anyone not in design or computer gaming. I think the percentage of the market that uses computers for those tasks will decrease as third-world countries develop. Getting better efficiency is imperative. There will always be a high-end market, so don't worry about that; it's just that I think there will be a lot more advancements in efficiency in the future than there have been in the past.


(The paradox being, of course, that 5 minutes will seem astonishingly slow when several years later we can do it in 1 second.)

In addition, I do not feel that Ageia is going off on a useless tangent. Their implementation of hardware physics may be off base, but it is a revolutionary concept that will pay off for gamers in the future.

As has been shown before, having a GPU act as a PPU of sorts is, in my opinion, a lot more viable than having a separate card for the PPU. Currently, having a PPU can make games run slower instead of faster (http://www.guru3d.com/article/Videocards/381/).
 
The best guesstimates are as follows:

2025: a $1000 PC is about 1/10th as smart as a human.
2030: a $1000 PC is as smart as a human.
2040: a $1000 PC is as smart as 1000 humans.
2050: a $1000 PC is smarter than all humans who have ever lived, combined.

Get your Future on at http://www.kurzweilai.net
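
For what it's worth, the arithmetic behind dates like these is just compounded doubling. Below is a minimal C++ sketch; the numbers are round figures of my own choosing (a $1000 PC doing roughly 10^12 ops/sec today, a human brain at roughly 10^16 ops/sec, price/performance doubling every 18 months), not Kurzweil's exact ones.

[code]
// Back-of-the-envelope estimate of when $1000 of hardware matches one brain.
// All three constants below are assumptions for illustration only.
#include <cmath>
#include <iostream>

int main()
{
    const double pcOpsNow      = 1e12; // ops/sec a $1000 PC does today (assumed)
    const double brainOps      = 1e16; // rough estimate for a human brain (assumed)
    const double doublingYears = 1.5;  // assumed price/performance doubling period

    // Solve pcOpsNow * 2^(t / doublingYears) = brainOps for t.
    double t = doublingYears * std::log(brainOps / pcOpsNow) / std::log(2.0);
    std::cout << "Parity in about " << t << " years\n"; // ~20 years with these inputs
    return 0;
}
[/code]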

Smart: showing intelligence and mental alertness.

Computers exhibit neither characteristic, nor do they "think". They simply do what they're told. Do adaptive processes exist? Of course, but it's not "thinking" if the adaptation is based on predefined rules created by humans.
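
To make that distinction concrete, here is a minimal sketch of a rule-based adaptive system; the "bot", the rule, and all the numbers are invented for illustration. The program changes its behaviour, but only along a path a human wrote in advance:

[code]
// A parameter-tuning "bot": it adapts, but only by a predefined human rule.
#include <iostream>

int main()
{
    double difficulty = 0.5; // the parameter the program adjusts
    const bool playerWon[5] = { true, true, false, true, true };

    for (int round = 0; round < 5; ++round)
    {
        // Predefined rule: harder when the player wins, easier otherwise.
        // The program never invents a new rule of its own.
        difficulty += playerWon[round] ? 0.1 : -0.1;
        std::cout << "round " << round << ": difficulty " << difficulty << '\n';
    }
    return 0;
}
[/code]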

This thread is only interesting to the Star "Trekkies".
 
Man can and will make computers (software and hardware) more intelligent than Man, but he'll never make a computer that has feelings.
 
first off, good topic for discussion.

I've not had time to read all of the replies yet, but here's my guess. Things will be much more portable: entire computers in iPod-sized cases (or smaller) to carry around, and eyeglasses that connect wirelessly, with built-in "displays", but not conventional displays like we see now. More like transparent glasses with images on the lenses, so you can see where you're walking/driving and get visual feedback from the PC in your pocket. Voice-controlled operating systems with voice feedback. Star Trek style: "Computer..."

Oh yeah, cell phones, pagers, PDAs, watches, and wallets cease to exist, and probably car keys too. Clothes will no longer have pockets except for the PC pocket, which is probably in your shoe anyway. Only wedding rings and PCs will be carried on a daily basis.

Some of this may take a little longer than 20 years, but I expect it will happen in our lifetime.

Intel - intel inside
Intel - Leap ahead
Intel - integrate
 
Man can and will make computers (software and hardware) more intelligent than Man, but he'll never make a computer that has feelings.

Intelligence – Ability to think and learn

1.the ability to learn facts and skills and apply them, especially when this ability is highly developed

This won't happen in our lifetimes, if ever.

I would guess that anyone believing that doesn't know much about how computers work.
 
Pockets will continue to exist - just like T-shirts will continue to exist - because they are so convenient. I'll hazard a guess to say that books will persist as well.

As for 'intelligence' this comes down to the prospects of AI. Like I said above, the only thing keeping me in my job is the lack of a capable AI combined with good voice recognition technology (or trained chimps that could talk - damn dirty apes). But 'good' AI has been "around the corner" for more than a decade. Will it be like fusion? Time will tell.


And as for power requirements, we'd need substations to handle that projected load. More likely are increases in efficiency as well as capability so that while more power is required than now, it is not wildly more power.
 
Intelligence – Ability to think and learn

1.the ability to learn facts and skills and apply them, especially when this ability is highly developed

This won't happen in our lifetimes, if ever.
You are wrong about this. For example, a CounterStrike BOT utility is learning how to play on different maps.

I would guess that anyone believing that doesn't know much about how computers work.
I think you are wrong again.
 
Actually, AI has been around since the early 80's. The AI lab at MIT spun out at least two companies, Thinking Machines and Symbolics, that licensed the hardware technology developed there.

The LISP language was largely developed there as well. Systems based upon these ideas have been failures for the most part. The exception is in the field of robotics.
 
Computers will only be as "smart" as us when they can figure out how to do what THEY WANT. They will have to have wants and needs and such. So all the work that would have to be put in to make something that WANTS something seems counterproductive, especially since the point is to create a servant that is autonomous and low-maintenance and that we can boss around. Sorta the ultimate lackey.

So in short to make computers smarter than us would be dumb.
 
Intelligence – Ability to think and learn

1.the ability to learn facts and skills and apply them, especially when this ability is highly developed

This won't happen in our lifetimes, if ever.
You are wrong about this. For example, a CounterStrike BOT utility is learning how to play on different maps.

I would guess that anyone believing that doesn't know much about how computers work.
I think you are wrong again.

Sorry, but it's not "learning" anything. This is an example of a rule based adaptive system. I'm sorry you don't get it, but maybe after a bit more studying you will.
 
I don't know about the next 20 years, but I recall seeing a new miniature LCD screen development (Popular Science?) that was supposed to enable very high-resolution viewing for head-mounted glasses. It's been a couple of years since I saw the article, but I still have not seen anything on the market. There are some out now, but they are lower resolution and apparently cause problems and headaches for some viewers. Some articles said they are supposed to be like looking at a large screen of about 50", with the added benefit of being able to include more of a 3D image because of the separate LCD for each eye.
I am hoping I won't have to wait 20 years for this tech to improve along with available software to make good use of this. (Oblivion) This would (WorldofWarcraft) be great for (Fear) 3D CAD design work. :wink:
 
Machines can't "think". So that shoots down that theory. Besides, we all know that aliens don't really exist... or do they?
Come to think about it, if we are the pinnacle of intelligent thought in the universe, we are all doomed.
 
Machines can't "think".

And? I only have your word that humans can 'think' (whatever that means). So if I program my PC to say 'Yes, yes, I do think. I really do, honest', I have just as much proof that machines do.

Besides, we all know that aliens don't really exist... or do they?

If they did, we wouldn't have to ask that question. Any half-competent technological alien race could colonise the galaxy in under a million years, and they'd start dismantling the stars for raw materials long before then.
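
As a rough check on that timescale (the ~100,000 light-year diameter of the galaxy is a standard figure; the colonisation wave creeping along at 10% of light speed, pauses included, is my assumption):

[code]
// Crude order-of-magnitude check of the galactic colonisation timescale.
#include <iostream>

int main()
{
    const double galaxyDiameterLy = 1.0e5; // light-years, standard figure
    const double waveSpeedC       = 0.1;   // assumed fraction of light speed
    std::cout << "Crossing time: " << galaxyDiameterLy / waveSpeedC
              << " years\n"; // ~1,000,000 years: the order of magnitude claimed
    return 0;
}
[/code]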
 
You picked the wrong day to engage me in this debate.
You said it yourself. If you program a PC to say "I can think", it is not the same as the machine becoming sentient and saying, "I can think." Define thought if you are so sure about yourself.

As for the other point, I'm not arguing this. It's absolutely pointless.
 
If you program a PC to say "I can think", it is not the same as the machine becoming sentient and saying, "I can think."

Prove it. You're claiming that humans are 'sentient' (whatever that means), but I only have your word for that. Why should I trust a human to tell me about their internal state any more than I trust my PC?

Define thought if you are so sure about yourself.

How about you try defining it? Because most people I meet who use the word don't seem to understand what it means. 'But it's obvious, innit?'
 
Apparently reading comprehension isn't very big with you. Go back and reread my post.
If you program a PC to say "I can think", it is not the same as the machine becoming sentient and saying, "I can think."
Merriam-Webster defines thought as this:

1 : to form or have in the mind
2 : to have as an intention <thought to return early>
3 a : to have as an opinion <think it's so> b : to regard as : CONSIDER <think the rule unfair>
4 a : to reflect on : PONDER <think the matter over> b : to determine by reflecting <think what to do next>
5 : to call to mind : REMEMBER <he never thinks to ask how we do>
6 : to devise by thinking -- usually used with up <thought up a plan to escape>
7 : to have as an expectation : ANTICIPATE <we didn't think we'd have any trouble>
8 a : to center one's thoughts on <talks and thinks business> b : to form a mental picture of
9 : to subject to the processes of logical thought <think things out>
intransitive verb
1 a : to exercise the powers of judgment, conception, or inference : REASON b : to have in the mind or call to mind a thought
2 a : to have the mind engaged in reflection : MEDITATE b : to consider the suitability <thought of her for president>
3 : to have a view or opinion <thinks of himself as a poet>
4 : to have concern -- usually used with of <a man must think first of his family>

A computer or a machine can do none of these things without first having the ability programmed or forced into it.
A human baby can think even if it is left on its own, not being taught. That doesn't mean that how it thinks will be socially acceptable, but it can still make decisions without the need of a master to input data into it.

In C++, if the computer doesn't receive the right cin input, it will not be able to interpret the data. A human can adapt and overcome by either looking for this data or making it up himself.
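
A small sketch of that point using standard C++ streams (the prompt text and recovery loop are my own illustration): when cin gets input it can't interpret, the stream fails, and the program can only recover in ways the programmer spelled out beforehand.

[code]
// If the extraction fails (letters where a number is expected), cin enters a
// fail state. The program only "adapts" because a human wrote the rule for it.
#include <iostream>
#include <limits>

using namespace std;

int main()
{
    double r;
    cout << "Please provide a resistance value (in ohms): \n";
    while (!(cin >> r)) // extraction failed: input the program can't interpret
    {
        cin.clear();    // recovery steps spelled out in advance by the programmer
        cin.ignore(numeric_limits<streamsize>::max(), '\n');
        cout << "That was not a number, please try again: \n";
    }
    cout << "You entered " << r << " ohms" << endl;
    return 0;
}
[/code]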
 
Prove it. You're claiming that humans are 'sentient' (whatever that means), but I only have your word for that. Why should I trust a human to tell me about their internal state any more than I trust my PC?

Here is the definition of sentient:
Etymology: Latin sentient-, sentiens, present participle of sentire to perceive, feel
1 : responsive to or conscious of sense impressions <sentient beings>
2 : AWARE
3 : finely sensitive in perception or feeling

A computer can not feel. It can not take emotions into account unless the ability to interpret emotions is first programmed into it as a mathematical equation. But that is not true sentience.

Let's look at the word aware now.

1 archaic : WATCHFUL, WARY
2 : having or showing realization, perception, or knowledge
- aware·ness noun
synonyms AWARE, COGNIZANT, CONSCIOUS, SENSIBLE, ALIVE, AWAKE mean having knowledge of something. AWARE implies vigilance in observing or alertness in drawing inferences from what one experiences <aware of changes in climate>. COGNIZANT implies having special or certain knowledge as from firsthand sources <not fully cognizant of the facts>. CONSCIOUS implies that one is focusing one's attention on something or is even preoccupied by it <conscious that my heart was pounding>. SENSIBLE implies direct or intuitive perceiving especially of intangibles or of emotional states or qualities <sensible of a teacher's influence>. ALIVE adds to SENSIBLE the implication of acute sensitivity to something <alive to the thrill of danger>. AWAKE implies that one has become alive to something and is on the alert <a country always awake to the threat of invasion>.

'Nuff said.
 
Intelligence – Ability to think and learn

1.the ability to learn facts and skills and apply them, especially when this ability is highly developed

This won't happen in our lifetimes, if ever.
You are wrong about this. For example, a CounterStrike BOT utility is learning how to play on different maps.

I would guess that anyone believing that doesn't know much about how computers work.
I think you are wrong again.

Sorry, but it's not "learning" anything. This is an example of a rule based adaptive system. I'm sorry you don't get it, but maybe after a bit more studying you will.
Yes, it is learning. I used it as an example. Can you tell me what I don't get and what to study?

For example, I can say the same to you. You can start by using the Google engine. You can also visit Wikipedia as a first step for noobz:

http://en.wikipedia.org/wiki/AI

Artificial intelligence (AI) is a branch of computer science that deals with intelligent behavior, learning, and adaptation in machines. Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, handwriting, speech, and facial recognition. As such, it has become an engineering discipline, focused on providing solutions to real life problems, software applications, traditional strategy games like computer chess and other video games.

Then you can try some more advanced papers:

http://www.inl.gov/adaptiverobotics/humanoidrobotics/pubs/special-issue.pdf

Reinforcement learning provides an "unsupervised" learning-with-a-critic approach where systems can learn mappings from percepts to actions inductively through trial and error. Evolutionary methods begin with an initial pool of program elements and use genetic operators such as recombination and mutation to generate successive generations of increasingly better controllers.
Using these approaches and others, robots can learn by adjusting parameters, exploiting patterns, evolving rule sets, generating entire behaviors, devising new strategies, predicting environmental changes, recognizing the strategies of opponents, or exchanging knowledge with other robots. Such robots have the potential to acquire new knowledge at a variety of levels and to adapt existing knowledge to new purposes. Robots now learn to solve problems in ways that humans can scarcely understand. In fact, one side effect of these learning methods is systems that are anything but explainable. Careful design no longer suppresses emergent behavior but encourages it.
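
As a toy illustration of that "learning with a critic" idea (the two actions, the reward numbers, and the 10% exploration rate are all made up for the sketch), an epsilon-greedy agent can drift toward the better action by trial and error alone:

[code]
// Tiny reinforcement-learning sketch: two actions, numeric rewards, no teacher.
#include <cstdlib>
#include <iostream>

int main()
{
    double value[2] = { 0.0, 0.0 }; // running estimate of each action's payoff
    int    tries[2] = { 0, 0 };

    for (int step = 0; step < 1000; ++step)
    {
        // Explore about 10% of the time, otherwise exploit the best estimate.
        int a = (rand() % 10 == 0) ? rand() % 2
                                   : (value[0] >= value[1] ? 0 : 1);

        // The "critic": action 1 secretly pays better on average.
        double reward = (a == 1 ? 1.0 : 0.3) + (rand() % 100) / 500.0;

        // Incremental averaging - the agent improves purely by trial and error.
        ++tries[a];
        value[a] += (reward - value[a]) / tries[a];
    }
    std::cout << "learned payoffs: " << value[0] << " vs " << value[1] << '\n';
    return 0;
}
[/code]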

Or I can recommend some professional scientific literature on the fundamentals of AI:

Anderson, J. (1980) Cognitive units. Paper presented at the Society for Philosophy and Psychology, Ann Arbor, Mich. [RCS]

Block, N. J. (1978) Troubles with functionalism. In: Minnesota studies in the philosophy of science, vol. 9, ed. C. W. Savage. Minneapolis: University of Minnesota Press. [NB, WGL]

______. (forthcoming) Psychologism and behaviorism. Philosophical Review. [NB, WGL]

Bower, G. H.; Black, J. B. & Turner, T. J. (1979) Scripts in text comprehension and memory. Cognitive Psychology 11:177-220. [RCS]

Carroll, C. W. (1975) The great chess automaton. New York: Dover. [RP]

Cummins, R. (1977) Programs in the explanation of behavior. Philosophy of Science 44:269-87. [JCM]

Dennett, D. C. (1969) Content and consciousness. London: Routledge & Kegan Paul. [DD, TN]

______. (1971) Intentional systems. Journal of Philosophy 68:87-106. [TN]

______. (1972) Reply to Arbib and Gunderson. Paper presented at the Eastern Division meeting of the American Philosophical Association, Boston, Mass. [TN]

______. (1975) Why the law of effect won't go away. Journal for the Theory of Social Behavior 5:169-87. [NB]

______. (1978) Brainstorms. Montgomery, Vt.: Bradford Books. [DD, AS]

Eccles, J. C. (1978) A critical appraisal of brain-mind theories. In: Cerebral correlates of conscious experiences, ed. P. A. Buser & A. Rougeul-Buser, pp. 347-55. Amsterdam: North Holland. [JCE]

______. (1979) The human mystery. Heidelberg: Springer Verlag. [JCE]

Fodor, J. A. (1968) The appeal to tacit knowledge in psychological explanation. Journal of Philosophy 65:627-40. [NB]

______. (1980) Methodological solipsism considered as a research strategy in cognitive psychology. The Behavioral and Brain Sciences 3:1. [NB, WGL, WES]

Freud, S. (1895) Project for a scientific psychology. In: The standard edition of the complete psychological works of Sigmund Freud, vol. 1, ed. J. Strachey. London: Hogarth Press, 1966. [JCM]

Frey, P. W. (1977) An introduction to computer chess. In: Chess skill in man and machine, ed. P. W. Frey. New York, Heidelberg, Berlin: Springer Verlag. [RP]

Fryer, D. M. & Marshall, J. C. (1979) The motives of Jacques de Vaucanson. Technology and Culture 20:257-69. [JCM]

Gibson, J. J. (1966) The senses considered as perceptual systems. Boston: Houghton Mifflin. [TN]

______. (1967) New reasons for realism. Synthese 17:162-72. [TN]

______. (1972) A theory of direct visual perception. In: The psychology of knowing, ed. S. R. Royce & W. W. Rozeboom. New York: Gordon & Breach. [TN]

Graesser, A. C.; Gordon, S. E. & Sawyer, J. D. (1979) Recognition memory for typical and atypical actions in scripted activities: tests for a script pointer and tag hypothesis. Journal of Verbal Learning and Verbal Behavior 18:319-32. [RCS]

Gruendel, J. (1980) Scripts and stories: a study of children's event narratives. Ph.D. dissertation, Yale University. [RCS]

Hanson, N. R. (1969) Perception and discovery. San Francisco: Freeman, Cooper. [DOW]

Hayes, P. J. (1977) In defence of logic. In: Proceedings of the 5th international joint conference on artificial intelligence, ed. R. Reddy. Cambridge, Mass.: M.I.T. Press. [WES]

Hobbes, T. (1651) Leviathan. London: Willis. [JCM]

Hofstadter, D. R. (1979) Goedel, Escher, Bach. New York: Basic Books. [DOW]

Householder, F. W. (1962) On the uniqueness of semantic mapping. Word 18:173-85. [JCM]

Huxley, T. H. (1874) On the hypothesis that animals are automata and its history. In: Collected Essays, vol. 1. London: Macmillan, 1893. [JCM]

Kolers, P. A. & Smythe, W. E. (1979) Images, symbols, and skills. Canadian Journal of Psychology 33:158-84. [WES]

Kosslyn, S. M. & Shwartz, S. P. (1977) A simulation of visual imagery. Cognitive Science 1:265-95. [WES]

Lenneberg, E. H. (1975) A neuropsychological comparison between man, chimpanzee and monkey. Neuropsychologia 13:125. [JCE]

Libet, B. (1973) Electrical stimulation of cortex in human subjects and conscious sensory aspects. In: Handbook of sensory physiology, vol. II, ed. A. Iggo, pp. 743-90. New York: Springer-Verlag. [BL]

Libet, B.; Wright, E. W., Jr.; Feinstein, B. & Pearl, D. K. (1979) Subjective referral of the timing for a conscious sensory experience: a functional role for the somatosensory specific projection system in man. Brain 102:191-222. [BL]

Longuet-Higgins, H. C. (1979) The perception of music. Proceedings of the Royal Society of London B 205:307-22. [JCM]

Lucas, J. R. (1961) Minds, machines, and Godel. Philosophy 36:112-27. [DRH]

Lycan, W. G. (forthcoming) Form, function, and feel. Journal of Philosophy. [NB, WGL]

McCarthy, J. (1979) Ascribing mental qualities to machines. In: Philosophical perspectives in artificial intelligence, ed. M. Ringle. Atlantic Highlands, N.J.: Humanities Press. [JM, JRS]

Marr, D. & Poggio, T. (1979) A computational theory of human stereo vision. Proceedings of the Royal Society of London B 204:301-28. [JCM]

Marshall, J. C. (1971) Can humans talk? In: Biological and social factors in psycholinguistics, ed. J. Morton. London: Logos Press. [JCM]

______. (1977) Minds, machines and metaphors. Social Studies of Science 7:475-88. [JCM]

Maxwell, G. (1976) Scientific results and the mind-brain issue. In: Consciousness and the brain, ed. G. G. Globus, G. Maxwell & I. Savodnik. New York: Plenum Press. [GM]

______. (1978) Rigid designators and mind-brain identity. In: Perception and cognition: issues in the foundations of psychology, Minnesota Studies in the Philosophy of Science, vol. 9, ed. C. W. Savage. Minneapolis: University of Minnesota Press. [GM]

Mersenne, M. (1636) Harmonie universelle. Paris: Le Gras. [JCM]

Moor, J. H. (1978) Three myths of computer science. British Journal of the Philosophy of Science 29:213-22. [JCM]

Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83:435-50. [GM]

Natsoulas, T. (1974) The subjective, experiential element in perception. Psychological Bulletin 81:611-31. [TN]

______. (1977) On perceptual aboutness. Behaviorism 5:75-97. [TN]

______. (1978a) Haugeland's first hurdle. Behavioral and Brain Sciences 1:243. [TN]

______. (1978b) Residual subjectivity. American Psychologist 33:269-83. [TN]

______. (1980) Dimensions of perceptual awareness. Psychology Department, University of California, Davis. Unpublished manuscript. [TN]

Nelson, K. & Gruendel, J. (1978) From person episode to social script: two dimensions in the development of event knowledge. Paper presented at the biennial meeting of the Society for Research in Child Development, San Francisco. [RCS]

Newell, A. (1973) Production systems: models of control structures. In: Visual information processing, ed. W. C. Chase. New York: Academic Press. [WES]

______. (1979) Physical symbol systems. Lecture at the La Jolla Conference on Cognitive Science. [JRS]

______. (1980) Harpy, production systems, and human cognition. In: Perception and production of fluent speech, ed. R. Cole. Hillsdale, N.J.: Erlbaum Press. [WES]

Newell, A. & Simon, H. A. (1963) GPS, a program that simulates human thought. In: Computers and thought, ed. E. A. Feigenbaum & J. Feldman, pp. 279-93. New York: McGraw Hill. [JRS]

Panofsky, E. (1954) Galileo as a critic of the arts. The Hague: Martinus Nijhoff. [JCM]

Popper, K. R. & Eccles, J. C. (1977) The self and its brain. Heidelberg: Springer-Verlag. [JCE, GM]

Putnam, H. (1960) Minds and machines. In: Dimensions of mind, ed. S. Hook, pp. 138-64. New York: Collier. [MR, RR]

______. (1975a) The meaning of "meaning." In: Mind, language and reality. Cambridge: Cambridge University Press. [NB, WGL]

______. (1975b) The nature of mental states. In: Mind, language and reality. Cambridge: Cambridge University Press. [NB]

______. (1975c) Philosophy and our mental life. In: Mind, language and reality. Cambridge: Cambridge University Press. [MM]

Pylyshyn, Z. W. (1980a) Computation and cognition: issues in the foundations of cognitive science. Behavioral and Brain Sciences 3. [JRS, WES]

______. (1980b) Cognitive representation and the process-architecture distinction. Behavioral and Brain Sciences. [ZWP]

Russell, B. (1948) Human knowledge: its scope and limits. New York: Simon and Schuster. [GM]

Schank, R. C. & Abelson, R. P. (1977) Scripts, plans, goals, and understanding. Hillsdale, N.J.: Lawrence Erlbaum Press. [RCS, JRS]

Searle, J. R. (1979a) Intentionality and the use of language. In: Meaning and use, ed. A. Margalit. Dordrecht: Reidel. [TN, JRS]

______. (1979b) The intentionality of intention and action. Inquiry 22:253-80. [TN, JRS]

______. (1979c) What is an intentional state? Mind 88:74-92. [JH, GM, TN, JRS]

Sherrington, C. S. (1950) Introductory. In: The physical basis of mind, ed. P. Laslett. Oxford: Basil Blackwell. [JCE]

Slate, J. S. & Atkin, L. R. (1977) CHESS 4.5 - the Northwestern University chess program. In: Chess skill in man and machine, ed. P. W. Frey. New York, Heidelberg, Berlin: Springer Verlag. [RP]

Sloman, A. (1978) The computer revolution in philosophy. Harvester Press and Humanities Press. [AS]

______. (1979) The primacy of non-communicative language. In: The analysis of meaning (Informatics 5), ed. M. McCafferty & K. Gray. London: ASLIB and British Computer Society. [AS]

Smith, E. E.; Adams, N. & Schorr, D. (1978) Fact retrieval and the paradox of interference. Cognitive Psychology 10:438-64. [RCS]

Smythe, W. E. (1979) The analogical/propositional debate about mental representation: a Goodmanian analysis. Paper presented at the 5th annual meeting of the Society for Philosophy and Psychology, New York City. [WES]

Sperry, R. W. (1969) A modified concept of consciousness. Psychological Review 76:532-36. [TN]

______. (1970) An objective approach to subjective experience: further explanation of a hypothesis. Psychological Review 77:585-90. [TN]

______. (1976) Mental phenomena as causal determinants in brain function. In: Consciousness and the brain, ed. G. G. Globus, G. Maxwell & I. Savodnik. New York: Plenum Press. [TN]

Stich, S. P. (in preparation) On the ascription of content. In: Entertaining thoughts, ed. A. Woodfield. [WGL]

Thorne, J. P. (1968) A computer model for the perception of syntactic structure. Proceedings of the Royal Society of London B 171:377-86. [JCM]

Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and machines, ed. A. R. Anderson, pp. 4-30. Englewood Cliffs, N.J.: Prentice Hall. [MR]

Weizenbaum, J. (1965) Eliza - a computer program for the study of natural language communication between man and machine. Communications of the Association for Computing Machinery 9:36-45. [JRS]

______. (1976) Computer power and human reason. San Francisco: W. H. Freeman. [JRS]

Winograd, T. (1973) A procedural model of language understanding. In: Computer models of thought and language, ed. R. Schank & K. Colby. San Francisco: W. H. Freeman. [JRS]

Winston, P. H. (1977) Artificial intelligence. Reading, Mass.: Addison-Wesley. [JRS]

Woodruff, G. & Premack, D. (1979) Intentional communication in the chimpanzee: the development of deception. Cognition 7:333-62. [JCM]


And this is a recent one; it might be more useful:
Goertzel, Ben; Pennachin, Cassio (Eds.) Artificial General Intelligence. Springer: 2006. ISBN 3-540-23733-X
 
I love these people who have such egos that they think being human is "MAGIC" and that computers will never be sentient or have feelings.

What is a baby's brain (or even that of any other newborn creature) but a PRE-PROGRAMMED information repository, input-response generator, and learning device?

There is no quantum magic that makes neurons so special they cannot be duplicated artificially.

Yes current AIs and brain simulators are weak compared to neural systems, BECAUSE THEY ARE LOW CAPACITY AND LINEAR!

Once we start developing asynchronous, fully parallel circuits, where all the circuits are at work at the same time and communicate with adjacent circuits, then we will be on our way to developing artificial brains.

Forget about quad cores; we need every memory cell/logic gate to be a simple CPU, with multiple inputs and outputs, and processing that outputs different data depending on which direction/combination of inputs activates it.
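
A crude software sketch of what one of those cell-sized processors might look like (the Unit structure, weights, and threshold are purely illustrative, not any real neuromorphic design):

[code]
// One neuron-like unit: several weighted inputs, output depends on which
// combination of inputs drives it past its threshold.
#include <iostream>

struct Unit
{
    double weight[3];
    double threshold;

    double fire(const double in[3]) const
    {
        double sum = 0.0;
        for (int i = 0; i < 3; ++i)
            sum += weight[i] * in[i]; // each input direction counts differently
        return (sum >= threshold) ? sum : 0.0; // silent unless driven hard enough
    }
};

int main()
{
    Unit u = { { 0.9, -0.4, 0.6 }, 0.5 };
    const double inputs[3] = { 1.0, 1.0, 0.0 };
    std::cout << "unit output: " << u.fire(inputs) << '\n';
    return 0;
}
[/code]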

And to those people who insist that AIs won't have feelings: emotions are much more primitive and simpler to implement than intelligence.

After all, emotions evolved much earlier than even primitive intelligence. Even the stupidest animals can react in fear, hunger, lust, anger, etc.
 
lschiedel is right, there is no magic involved and our brains really are just a large set of highly parallelized circuits. If nothing else, we'll eventually have the computing power to fully simulate every neuron in a human brain, which, being an exact replica of a physical brain, would obviously have the ability to think. Computers are already used to simulate extremely complex, computationally intensive biological processes, such as protein folding. It's a trite, overused argument, but it should be pointed out that X years ago, nobody thought computers would ever be able to do that. (I don't know precisely what the X is here, maybe 20 years?)

Really though, this whole thing comes down to one question: do you think that humans will hit a brick wall and technology will simply no longer advance past a certain point? If you do, please explain why/how you think that. And if you don't, then computers as smart as humans (and other stuff that would be "magic" to us) are logically inevitable. There is no matter of degree here; either (a) there is a limit or (b) there isn't.
 
You know computing, but you lack programming knowledge. Every computer needs to be told what to think, linear or not. The fact that an AI doesn't follow the linear pattern and is more of an abstract calculating machine still doesn't negate that you need to have an external >> input somewhere along the line for it to actually do its thinking. There must be a beginning somewhere.

Emotions developed side by side with intelligence, not before it. To make an AI "feel" you need to have some sort of calculation or algorithm or formula actually lay out the groundwork for the emotion.

Look at Data from Star Trek: by all means the most advanced "thinking" machine, yet he still needed a pre-programmed emotion chip to be installed, with algorithms that reacted to specific situations in a mathematical manner.

I love these people who have such egos they think that being human is "MAGIC" and that computers will never be sentient or have feelings.
......... No comment.
 
Take a look at this:
[code:1:7aa832a95f]// Lindon Juan Augustin Thomas
// ENG*230 Fall 2006
// September 14, 2006
// Resistor Value

#include <cstdlib>
#include <iostream>

using namespace std;

int main(int argc, char *argv[])
{
// Declaration
double R1; // Storage for first resistor
double R2; // Storage for second resistor
double R3; // Storage for third resistor
double RParallel; // Storage for calculated parallel resistance
double RSeries; // Storage for calculated resistance series
// Input
cout << "Please provide the resistance value for the first resistor (in ohms): \n";
cin >> R1;
cout << "Please provide the resistance value for the second resistor (in ohms): \n";
cin >> R2;
cout << "Please provide the resistance value for the third resistor (in ohms): \n";
cin >> R3;

//Process
RParallel = 1.0 / (1.0 / R1 + 1.0 / R2 + 1.0 / R3);

//Output
cout << "The parallel resistance of these three resistors is " << RParallel << endl;

//Process for series
RSeries = R1 + R2 + R3;

//Output for series
cout << "The series resistance for the three resisitors is " << RSeries << endl;

system("PAUSE");
return 0;
}
// Yes I'm DaSickNinja. Hello.
[/code:1:7aa832a95f]

What do you notice? I tell the machine what the input is. As long as I have to do that, even if it can predetermine what the cout should be, or what the process is, computers will never be sentient.

If I alter the code:
[code:1:7aa832a95f]// Lindon Juan Augustin Thomas
// ENG*230 Fall 2006
// September 14, 2006
// Resistor Value

#include <cstdlib>
#include <iostream>

using namespace std;

int main(int argc, char *argv[])
{
// Declaration
double R1; // Storage for first resistor
double R2; // Storage for second resistor
double R3; // Storage for third resistor
double RParallel; // Storage for calculated parallel resistance
double RSeries; // Storage for calculated resistance series
// Input
cout << "Please provide the resistance value for the first resistor (in ohms): \n";

cout << "Please provide the resistance value for the second resistor (in ohms): \n";

cout << "Please provide the resistance value for the third resistor (in ohms): \n";


//Process
RParallel = 1.0 / (1.0 / R1 + 1.0 / R2 + 1.0 / R3);

//Output
cout << "The parallel resistance of these three resistors is " << RParallel << endl;

//Process for series
RSeries = R1 + R2 + R3;

//Output for series
cout << "The series resistance for the three resistors is " << RSeries << endl;

system("PAUSE");
return 0;
}
// Yes I'm DaSickNinja. Hello.
// The cin >> statements have been removed.
[/code:1:7aa832a95f]
By removing the cin >>, even if the computer could think abstractly and do asynchronous calculations, even if it almost or 90% mirrored the human mind, you would get a logic error: R1, R2, and R3 are never given values, so the calculations run on garbage.
I'm not trying to bait or anger any of you. This is just how I see things, in logical succession. As a matter of fact, I quite enjoyed this discussion and hope it continues.