Google's AutoML AI Won't Destroy The World (Yet)

Interesting yet dangerous at the same time. Transhumanism is on the rise and conspiracy theories are coming true. However, it would be foolish to think a machine can ever be smarter than a human; algorithms are incapable of free will (but that's a good thing).
 
Why are you so sure that a sufficiently complex system of algorithms could not possibly acquire free will, at least to the extent that we have such freedom ourselves?
 
Machines have already been shown to be smarter than humans in some respects, but free will would only arise when and if machines gain sentience and become conscious of their own existence.
 
> algorithms are incapable of free will

No problem, as long as algorithms can't unilaterally destroy us. But since humans run the algorithms, we may well use them to destroy ourselves: a subset of "garbage in, garbage out."
 
The brain is a machine; it would be foolish to think it is a "unique" one. Things are moving so fast that hard AI will probably be here within another few decades.
 
"The brain is a machine, it would be foolish to think it is a "unique" one. Things are moving so fast, hard AI will probably be here within another few decades."

Not a very convincing argument. We barely understand ourselves. The idea that enough compute power will naturally turn into a sentient being borders on fantasy, akin to popular sci-fi plots. Technological advancement didn't happen randomly; it happened because the underlying principles were well understood. If it takes great understanding to make even simple devices, why would something far greater, a sentient machine, emerge from randomness?
 
" Transhumanism on the rise and CTs becoming true."

The problem with us is that leaders worry about practically inconsequential problems like these while refusing to address, or simply ignoring, real-world problems.
 
Okay, but nobody said that.
 
One of the interesting things about deep learning is highlighted in the article: it's often the case that people who train a neural network to do a certain task don't actually understand how it performs that task. That lets some air out of the idea that we need to understand everything about the human brain in order to emulate it. A lot is already understood about how human brains work, and I'll go out on a limb and say that if we had hardware with equivalent computational performance, it wouldn't be so hard to teach it abstract reasoning, the ability to form hypotheses, and other higher-order cognitive tasks.

What's so interesting about Google's new development is that it lets people design neural networks to accomplish a given task with even less understanding of how neural networks function. To the extent that this technology can be applied to higher-order cognitive tasks, we might find that true machine intelligence is a bit closer than we had assumed.
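To make that concrete, here is a minimal sketch of architecture search, the idea AutoML automates. Two big caveats: Google's actual controller is itself a recurrent network trained with reinforcement learning, whereas this sketch substitutes plain random search, and `train_and_evaluate` is a hypothetical placeholder for the expensive step of actually training each candidate network on real data.

```python
# Minimal sketch of neural architecture search via random search.
# NOT Google's method: their controller is an RL-trained RNN; this
# just illustrates the concept of searching over network designs.
import random

# The space of architectural choices a candidate network is drawn from.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "layer_width": [64, 128, 256],
    "activation": ["relu", "tanh"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_architecture(space):
    """Pick one value for every architectural choice."""
    return {name: random.choice(options) for name, options in space.items()}

def train_and_evaluate(arch):
    """Hypothetical placeholder: a real system would build the network
    described by `arch`, train it on the task, and return validation
    accuracy. Here we just return a random score."""
    return random.random()

def search(space, trials=20):
    """Keep the best-scoring architecture seen across `trials` samples."""
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(space)
        score = train_and_evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = search(SEARCH_SPACE)
    print(f"best architecture: {arch} (validation score {score:.3f})")
```

Even this naive loop shows why the approach needs so little human understanding of the networks involved: the human only specifies the search space and the scoring signal.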

And I agree with everyone who said that the most dangerous aspect of deep learning is what people will use it for, like cyber-criminals or terrorists using it to help bring down the grid or gain access to some country's nukes.
 
"The problem with us is that the leaders worry about practically inconsequential problems like these but refuse/ignore real world problems." Which leaders are discussing this problem? I'm not asking rhetorically I just haven't heard of many world leaders discuss the looming problem of automated machine learning or even AI generally. Unless you're talking about privacy, which would make more sense, but even then it would be much less 'inconsequential' than you're making it seem.
 
AutoML was used to create an image recognition system and a new, more efficient LSTM cell. The paper and blog did not mention speech recognition.
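For anyone unfamiliar with the term, below is a rough NumPy sketch of what a standard, textbook LSTM cell computes at each timestep. This is the baseline formulation only; the more efficient cell the search discovered is a different recurrence, described in the paper, and the stacked weight layout here is just one common convention.

```python
# One step of a standard (textbook) LSTM cell, for reference only;
# the AutoML-discovered cell is a variant of this kind of recurrence.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One timestep. W, U, b hold the stacked weights for the input,
    forget, output, and candidate gates (4 * hidden rows)."""
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # all four gates in one affine map
    i = sigmoid(z[0 * hidden:1 * hidden])  # input gate
    f = sigmoid(z[1 * hidden:2 * hidden])  # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden])  # output gate
    g = np.tanh(z[3 * hidden:4 * hidden])  # candidate cell state
    c = f * c_prev + i * g                 # new cell state
    h = o * np.tanh(c)                     # new hidden state
    return h, c

# Tiny smoke test with random weights.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
h, c = lstm_step(rng.standard_normal(n_in),
                 np.zeros(n_hid), np.zeros(n_hid),
                 rng.standard_normal((4 * n_hid, n_in)),
                 rng.standard_normal((4 * n_hid, n_hid)),
                 np.zeros(4 * n_hid))
print(h.shape, c.shape)  # (5,) (5,)
```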
 