[citation][nom]fuzznarf[/nom]I did something similar to this 10 years ago. An adaptive model that can aggregate new data and re-compile to a new model, transfer control to the new model, and shut down the old. That's the 'easy' part. The hard part is the overwhelming amount of digestion required of 'stimulus'. Just using a 23x23 photo sensor (camera) provided unbelievable amounts of data to churn, creating exponential cases (many leading to the same conclusion). Good luck to her and her team. Think of Watson: that is all just text-based NLP (natural language processing), and it required HUGE processing power to work through all the potential input. Add physical stimulus like sight, sound, and touch, and this becomes a gigantic undertaking. It's doable, but at the very least, it won't all fit in a human-sized machine any time soon.[/citation]
Exactly. I once heard the claim that all the desktop computers in the world combined were roughly equal to the processing power of a single human brain. Using this and Moore's Law (provided it holds), we can estimate the point at which a single computer could conceivably emulate the brain.
Let's say it takes a billion computers to equal the human brain. Moore's law says that computing power doubles every 1.5 years (assuming doubling transistor count and other advances keep doubling effective speed, which so far they have). So the number of doublings needed is log base 2 of a billion, which is about 30; at 1.5 years per doubling, that's roughly 45 years until a single computer is roughly analogous.
Surprisingly, the amount of time needed is only about 45 years. Isn't exponential growth neat?
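Here's a quick back-of-the-envelope check in Python. It takes the billion-computer figure and the 1.5-year doubling period at face value (both are rough assumptions, not measured facts) and works out how long until one machine catches up:

```python
import math

# Assumed inputs (rough guesses, not measurements):
gap = 1e9            # today's brain is ~a billion desktop computers
doubling_period = 1.5  # years per doubling of computing power (Moore's law)

# Number of doublings needed to close a 10^9 gap:
doublings = math.log2(gap)          # ~29.9

# Each doubling takes 1.5 years:
years = doublings * doubling_period  # ~44.8

print(f"{doublings:.1f} doublings -> about {years:.0f} years")
```

Note the formula is 1.5 × log₂(10⁹), not log base 1.5 of a billion: the 1.5 is the years per doubling, while the growth factor per step is 2.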