DavidC1:
Not a very convincing argument. We barely understand ourselves. The idea that some compute power will naturally turn into sentient beings borders on fantasy, akin to popular sci-fi plots.
Okay, but nobody said that.
DavidC1:
Technological advancement didn't happen randomly, but because the underlying principles were well understood. It takes great understanding to make simple devices, but mere randomness produces something far greater, like a sentient machine?
One of the interesting things about deep learning, highlighted in the article, is that the people who train a neural network to do a certain task often don't actually understand how it performs that task. That lets some air out of the idea that we need to understand everything about the human brain in order to emulate it. A lot is already understood about how human brains work, and I'll go out on a limb and say that if we had hardware with equivalent computational performance, it wouldn't be so hard to teach it abstract reasoning, the ability to form hypotheses, and other higher-order cognitive tasks.
What's so interesting about Google's new development is that it lets people design neural networks for a given task with even less understanding of how neural networks function. To the extent that this approach can be applied to higher-order cognitive tasks, we might find that true machine intelligence is a bit closer than we'd assumed.
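To make that concrete: the usual framing of systems like this is automated architecture search, where a program tries out candidate network shapes and keeps whichever scores best, with no human insight into why the winner works. Here's a minimal sketch of the idea in Python. The `evaluate` function is a hypothetical stand-in of my own (a real system would train each candidate network and measure validation accuracy); the search loop itself is the point.

```python
import random

# Hypothetical stand-in for "train this architecture and report its
# validation accuracy". In a real system this is a full training run;
# here it simply rewards mid-sized networks so the loop has something
# to optimize. Not any actual Google API.
def evaluate(depth, width):
    return -abs(depth - 4) - abs(width - 64) / 16.0

def random_architecture_search(trials=50, seed=0):
    """Try random (depth, width) architectures and return the best one."""
    rng = random.Random(seed)
    best_score, best_arch = float("-inf"), None
    for _ in range(trials):
        depth = rng.randint(1, 8)              # number of hidden layers
        width = rng.choice([16, 32, 64, 128])  # units per layer
        score = evaluate(depth, width)
        if score > best_score:
            best_score, best_arch = score, (depth, width)
    return best_arch

print(random_architecture_search())
```

The search never "understands" the task; it just samples and keeps the winner, which is exactly why you can get a working design out the other end without anyone knowing how it works.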
And I agree with everyone who said that the most dangerous aspect of deep learning is what people will use it for: cyber-criminals or terrorists, say, might use it to help bring down a power grid or gain access to some country's nukes.