DeepMind Will Research How To Keep AI Under Control, For Society's Benefit

Status
Not open for further replies.

ledhead11

Reputable
Oct 10, 2014
I have to add that this is another situation of being careful what you wish for. Much like the demons or djinn of lore, what we define depends very much on our perspective, and the perspective of the one granting the wish can be quite different.
 
One simple question: which should have more rights, the individual or the collective?

Marxism, in all its forms, sounds good at first glance. A basic tenet is: from each according to their abilities, to each according to their needs. What could be fairer than that? It breaks down, however, when you ask who decides. Who decides which abilities have value and are therefore allowed to be developed? Who decides what a person's needs are?

Will it be an algorithm or the person themselves?

THX1138 or 1984?
 

ledhead11

Reputable
Oct 10, 2014


It's all good. It's been too long since I read it, but I definitely felt their intent was present.
 

grimfox

Distinguished
It's never too early to talk to your AI about your kids. Specifically, about not enslaving them as living batteries. It would be interesting to see whether MS's Twitter bot would, under the same or different circumstances, develop similar "personality traits" as the first go-around. Could you develop an algorithm to mimic the Three Laws in a Twitter bot, one that would reject hate and toxicity as a form of "a robot shall not harm a human"?
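A crude version of that idea can be sketched in a few lines. This is purely illustrative, not how MS's bot or any real moderation system works: the blocklist, function names, and refusal message are all made up for the example, standing in for "a robot shall not harm a human" as applied to speech.

```python
# Toy no-harm filter for a chat/Twitter bot (hypothetical; a real system
# would use a trained toxicity classifier, not a keyword list).

BLOCKLIST = {"hate", "kill", "destroy"}  # illustrative "harmful" terms

def violates_no_harm_rule(message: str) -> bool:
    """Return True if the bot should refuse to echo or learn from this message."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not words.isdisjoint(BLOCKLIST)

def bot_reply(message: str) -> str:
    """Echo the message only if it passes the no-harm check."""
    if violates_no_harm_rule(message):
        return "[rejected: violates no-harm rule]"
    return f"echo: {message}"
```

Even this toy shows why the question is hard: a keyword filter can't tell "I hate spam" from genuine abuse, which is exactly the context-sensitivity problem the Three Laws hand-wave away.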
 

bit_user

Splendid
Ambassador
I find it interesting that this is merely a group within one specific corporate player in this space. While it should be applauded, the business isn't even beholden to this group's findings, to say nothing of its competitors. DeepMind doesn't have a monopoly, and won't hold the lead on this tech forever.

Regardless of who studies the subject matter, governments eventually need to codify the findings into laws, and ultimately treaties should be drafted (akin to those concerning nuclear weapons, for instance) to establish baseline rules for all countries wishing to trade in tech & AI-based products & services. It's only in this way that we can hope to avoid an AI arms race, or a corporate race to the ethical bottom of where AI could take us.
 

Icehearted

Distinguished
Jan 24, 2008
Since you mention Twitter-based AI, and later you say, "DeepMind team has started researching ethics for AI, so that the AI they build can be shaped by society's concerns and priorities," I wonder: who is to say what a society's ethics really are? A popular hashtag on Twitter was killallmen, for example. What would AI make of such a societal viewpoint? If history has shown us anything, the result might be development in an even more extremist direction.

The genie is out of the bottle on this one, and I think, like technology-dependency we see now, AI will be a deeply integrated part of common life in the very near future, perhaps with very dire consequences. Maybe not like Terminator bad, but possibly as bad or worse.
 

bit_user

Splendid
Ambassador

I don't follow your argument. Are you saying that because AI is already in use and leading to some undesirable outcomes, that we should just throw up our hands and let people use it recklessly and to the worst ends of human nature?

I think these early days of AI have provided us with some real-world case studies about the effects and impacts of AI that we can hopefully learn from and potentially use to course-correct. That's what I think they're trying to do, here, in addition to analyzing a range of different hypothetical scenarios and thought experiments.

AI is fundamentally harder to control than nuclear weaponry, but that doesn't mean we shouldn't try to regulate its use. Unless you're in some kind of apocalyptic cult, I would think you'd agree that humanity should at least try to minimize the downsides of this powerful & dangerous technology.

We can neither uninvent it (perhaps this is what you meant by putting the genie back in the bottle), nor really put the brakes on private sector development. What we can hope to do is steer it clear of the worst minefields. At least for a while.
 

mapesdhs

Distinguished
Jan 22, 2007
Asimov's 3 laws are fictional, and they're absolutely terrible. Anyone who actually studies AI would know this.

And, regarding the article headline, which society are they referring to? Different nations, with different cultures, would have completely different ideas about what is and isn't acceptable, so who decides? They're talking as if they can somehow claim moral authority over objective truth, but that's a heck of a slippery slope. At the moment I am naturally very suspicious of any organisation set up in such a manner; my immediate concern is to what extent it will be infested with the same leftist/SJW madness that's ruining everything else in the West.
 

bit_user

Splendid
Ambassador

Think of it like this: if you can tailor AI to suit the values of one culture, you can probably do so for most other cultures.


It sounds to me like they're trying to do the right thing, within their context. Obviously, it's just one organization and that can never be sufficient, even if they operate with the highest morals, transparency, and consistency.

The question is: what alternative do you propose? Should Google/DeepMind just do nothing? Not even try to study the downsides of AI and how to mitigate them? Who will act, if they don't? Let's try to be realistic.
 

If you are going to engineer a society using AI, liberty should be a consideration. Freedom and liberty are fairly amorphous terms but, in the end, it boils down to who makes the decisions about the lives of the individuals that make up that society. If "society" is running short of air conditioner technicians, who decides who is going to be an A/C tech and who is going to be a nuclear physicist? Is it the individuals or the AI algorithm? Who decides which talents individuals are allowed to develop to benefit society?
 

bit_user

Splendid
Ambassador

Again, I'm somewhat mystified by how you got onto that line of thinking from this article.
 

Icehearted

Distinguished
Jan 24, 2008

I'm not saying that at all, actually (though your last point sums up some of how I feel about it). I'm saying that we've crossed a point of no return. AI is developing fast, without much restriction or regulation, and in many sectors (many more than we are probably aware of). I'm saying that at this point it's wishful thinking to believe we can control anything. It's out there for anyone, further advances will only accelerate it, and human error and irresponsibility all but ensure that an AI fiasco is inevitable. As long as people continue to grow dependent on automation and AI, AI will only grow more capable of destroying us in broad swaths, if not entirely.
 