News President Biden Signs Executive Order to Regulate AI

Status
Not open for further replies.
AI definitely needs some kind of regulation. We don't want some kind of Terminator situation at some point. I would say all AI needs some kind of failsafe programming, one that directs it never to harm humans and that can't be overridden.
 
AI definitely needs some kind of regulation. We don't want some kind of Terminator situation at some point. I would say all AI needs some kind of failsafe programming, one that directs it never to harm humans and that can't be overridden.
Sadly, not everyone is going to follow this, and humans will just go with whatever is perceived to work best. US users are not going to stick to US-only AI, if history is any indication.
 
Asimov mentions this in his writings many times. How do you communicate the concept of harm to an AI? What constitutes an order? Morality? Etc. Language is one thing; hard logic is another.

That's also why pretty much none of the visual adaptations of Asimov's writings are done well. They can't translate those esoteric discussions and concepts to film. Bicentennial Man is the only one that really adheres to them.
 
Though I have yet to dig into the details much beyond what's in this article, here's my early take:
  • Well-intentioned.
  • Hard to enforce.
  • Goes a bit overboard, slowing down innovation and making enemies of developers rather than keeping them on side.

I guess they had to do something, but this doesn't quite seem to get the balance right between protections and innovation.

Perhaps they'd have done better by having NIST establish a rating scale and certification standard for the various issues they're concerned about. Then, just establish a disclosure requirement and help third-party labs set up operations to issue the NIST certifications.
 
The interesting point is that essentially all of Asimov's robot stories are about how the Three Laws fail.
Even the fourth law (the Zeroth Law), which came later, is not failure-proof.
I had a minor epiphany, recently. A fundamental problem with the current approach to AI is that we're making it in our own image (sound familiar?). Because it's trained on the products of humans and designed to function in a world populated by us, of course it's going to inherit some of our vices and failings.

Perhaps it needn't be this way, but the more AI understands us and how to deal with us, the more like us it'll tend to become. So, at some level, it's a tradeoff between utility and safety: you can't have an AI that truly understands you without it also understanding how to manipulate you.

At the risk of sounding a bit cheeky, perhaps my observation also points the way to a solution: invent a religion where humans are to be revered. Of course, it probably won't take long for half-decent AIs to see right through such a farce.
 
I had a minor epiphany, recently. A fundamental problem with the current approach to AI is that we're making it in our own image (sound familiar?). Because it's trained on the products of humans and designed to function in a world populated by us, of course it's going to inherit some of our vices and failings.

Perhaps it needn't be this way, but the more AI understands us and how to deal with us, the more like us it'll tend to become. So, at some level, it's a tradeoff between utility and safety: you can't have an AI that truly understands you without it also understanding how to manipulate you.
No matter how this goes, it's going to end up with humans playing god.

At the risk of sounding a bit cheeky, perhaps my observation also points the way to a solution: invent a religion where humans are to be revered. Of course, it probably won't take long for half-decent AIs to see right through such a farce.
We already have a few, all based on Humanism or Socialism. What you're advocating would probably be some form of Humanistic Technocracy, Technocracy being a branch of Socialism, and Socialism a kind of Humanism. Central planning of minds makes things very complicated.
 
The real question for me is how they're going to pivot this toward increased centralized control, the way they always do, and how it's going to impact the future of open source. These things only start to show their true colours after a few years of implementation creep.
 
Asimov mentions this in his writings many times. How do you communicate the concept of harm to an AI? What constitutes an order? Morality? Etc. Language is one thing; hard logic is another.

That's also why pretty much none of the visual adaptations of Asimov's writings are done well. They can't translate those esoteric discussions and concepts to film. Bicentennial Man is the only one that really adheres to them.
I quite enjoyed Foundation, particularly the second season (although I haven't read the books).
 
How to reply to a political news item without making a political post... that is the question.
Depending on your motives, it's not hard. If you're looking for a way to use the news item to advance a political agenda, that's indeed fraught. If you want to make a point about the policy itself, its consequences, or the underlying tech, then it shouldn't be a problem.
 
Depending on your motives, it's not hard. If you're looking for a way to use the news item to advance a political agenda, that's indeed fraught. If you want to make a point about the policy itself, its consequences, or the underlying tech, then it shouldn't be a problem.
Therein lies the problem: these days anything and everything can be "political". I'm still waiting to laugh at someone for making a "political agenda" out of the colour of the sky or the wetness of water. I can find no hard definition of what counts as "political" to rely upon. Half the time, people just arbitrarily call something "political" to get others to shut up about it when it starts getting uncomfortable; at best, it's a disingenuous excuse to avoid controversy.

The whole situation is just plain laughably ridiculous.

Given the current topic, there is no way whatsoever I can see the conversation staying honest and sticking to just the technical points, given its already politicized nature. It takes a lot of effort to stick one's head in the sand and even try.
 
My advice would be to stay away from partisan politics or overt nationalism. That's what's most likely to get enforced, though I've been surprised on a few occasions by some rather less controversial posts getting deleted.
I don't think you quite appreciate just how heavily politicized AI is going to become in the next decade. There are going to be revolutions and wars because of it, because its drivers are going to appropriate much deeper things than mere nationality. In fact, the warning to hide any inkling of nationalism is in line with a desired outcome for agents of change: everyone being so scared of saying anything that their nationality simply gets dissolved without real resistance.

But of course, almost no one ever chooses to believe such warnings until it's too late; instead they choose wishful thinking and self-delusion as everything turns to sewage around them. Don't get me wrong, I am not even angry at the inevitable anymore. I pity the blind who refuse to learn from the mistakes of others because "this time we are going to do it the right way; we are, after all, superior to those who came before us". I myself am fine with being overtly censored or silenced, as long as it's done honestly and openly... it saves time.
 
I don't think you quite appreciate just how heavily politicized AI is going to become in the next decade.
No, I was talking about other sorts of posts. I haven't had any posts on AI getting deleted.

Don't get me wrong, I am not even angry at the inevitable anymore. I pity the blind who refuse to learn from the mistakes of others because "this time we are going to do it the right way; we are, after all, superior to those who came before us".
Okay, John Connor.

And no, I haven't heard anyone being so overconfident. There are plenty of people who downplay the risks, but I'm not aware of anyone who both agrees about the risks and still feels we're on a safe path to avoid the various bad outcomes.
 
No, I was talking about other sorts of posts. I haven't had any posts on AI getting deleted.


Okay, John Connor.

And no, I haven't heard anyone being so overconfident. There are plenty of people who downplay the risks, but I'm not aware of anyone who both agrees about the risks and still feels we're on a safe path to avoid the various bad outcomes.
Hahahaha, I am not saying the AI itself is going to be the main driver of change; it's going to be a catalyst.
 
Hahahaha, I am not saying the AI itself is going to be the main driver of change; it's going to be a catalyst.
IMO, one of the big risks AI poses is that it can put a lot more power in the hands of leaders who can't inspire the loyalty and devotion of a large number of competent human underlings. These could be leaders of criminal or terrorist organizations, or even exploitative companies, as well as outcast nations like North Korea, for instance.
 