Google's AutoML AI Won't Destroy The World (Yet)

Status
Not open for further replies.

lionheart051

Reputable
Sep 28, 2014
14
0
4,510
Interesting yet dangerous at the same time. Transhumanism is on the rise and CTs are becoming true. However, it would be foolish to think a machine can ever be smarter than a human; algorithms are incapable of free will (but that's a good thing).
 

Tairaa

Reputable
Aug 29, 2014
10
0
4,510
Why are you so sure that no sufficiently complex system of algorithms could possibly acquire free will, at least to the extent that we have such freedom ourselves?
 

Verisimilitude_1

Prominent
May 28, 2017
1
0
510
Machines have already been shown to be smarter than humans in some respects, but as for free will, that will only happen when and if machines gain sentience and become conscious of their own existence.
 

Nei1

Distinguished
May 13, 2007
28
0
18,530
> algorithms are incapable of free will

No problem if algorithms can't unilaterally destroy us. Since humans run the algorithms, we may use them to destroy ourselves. A subset of "garbage in, garbage out."
 
G

Guest

Guest
The brain is a machine; it would be foolish to think it is a "unique" one. Things are moving so fast that hard AI will probably be here within another few decades.
 

DavidC1

Distinguished
May 18, 2006
494
67
18,860
"The brain is a machine, it would be foolish to think it is a "unique" one. Things are moving so fast, hard AI will probably be here within another few decades."

Not a very convincing argument. We barely understand ourselves. The idea that some amount of compute power will naturally turn into a sentient being borders on fantasy, akin to popular sci-fi plots. Technological advancement happens not randomly, but because the underlying principles are well understood. It takes great understanding to make simple devices, yet randomness would suffice for something far greater, like a sentient machine?
 

DavidC1

Distinguished
May 18, 2006
494
67
18,860
" Transhumanism on the rise and CTs becoming true."

The problem with us is that the leaders worry about practically inconsequential problems like these while refusing to address, or simply ignoring, real-world problems.
 

bit_user

Polypheme
Ambassador

Okay, but nobody said that.


One of the interesting things about deep learning is highlighted in the article: it's often the case that the people who train a neural network to do a certain task don't actually understand how it performs that task. This lets some air out of the idea that we need to understand everything about the human brain in order to emulate it. A lot is currently understood about how human brains work, and I'll go out on a limb and say that if we had hardware with the equivalent computational performance, it wouldn't be so hard to teach it abstract reasoning, the ability to form hypotheses, and other higher-order cognitive tasks.

What's so interesting about Google's new development is that it enables people to design neural networks to accomplish a given task with even less understanding of how neural networks function. To the extent that this technology can be applied to higher-order cognitive tasks, we might find that true machine intelligence is a bit closer than we might've assumed.

And I agree with everyone who said that the most dangerous aspect of deep learning is the things people will use it for. Like cyber-criminals or terrorists who might use it to help them bring down the grid or gain access to some country's nukes.
 

Jake0519

Prominent
May 30, 2017
1
0
510
"The problem with us is that the leaders worry about practically inconsequential problems like these but refuse/ignore real world problems." Which leaders are discussing this problem? I'm not asking rhetorically I just haven't heard of many world leaders discuss the looming problem of automated machine learning or even AI generally. Unless you're talking about privacy, which would make more sense, but even then it would be much less 'inconsequential' than you're making it seem.
 

Shanice_1

Prominent
Jun 28, 2017
1
0
510
AutoML was used to create an image-recognition system and a new, more efficient LSTM cell. The paper and blog did not mention speech recognition.
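For readers unfamiliar with the term, an LSTM cell is the recurrent building block AutoML was asked to improve upon. Below is a minimal sketch of one step of a *standard* scalar LSTM cell (not the AutoML-discovered variant, whose wiring differs): gates decide what to forget, what to write, and what to expose. The weight layout is an illustrative assumption.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, w):
    """One step of a standard LSTM cell (scalar version for clarity).
    `w` maps each gate name to (input weight, recurrent weight, bias)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])   # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])   # input gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])   # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate value
    c = f * c_prev + i * g          # new cell (memory) state
    h = o * math.tanh(c)            # new hidden (output) state
    return h, c
```

The "more efficient" cell found by the search rearranges these gate computations; this sketch just shows the baseline being improved on.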
 