"Last year, Microsoft also proposed that facial recognition systems should be regulated by governments because the potential for harm is significant."
But isn't most of the potential harm due to government use? E.g., using flawed facial recognition to justify warrants and arrests that don't truly have probable cause?
So this is even worse than asking the fox to guard the henhouse. This is asking the fox to guard the fox. The government is the party most likely to abuse flawed facial recognition, so Microsoft is asking the government to regulate the government.
Microsoft is on point with this finding. Machines are inherently non-biased and non-random by nature.
Things have reached a point where, if a machine AI makes a biased judgment based on facial structure, heritage, social status, race, belief, or any other metric, it would provoke a serious backlash in the boom/bust climate of modern-day social politics.
Machines are by nature inherently biased; we're still in denial about that. Give AI another 25-35 years and the mainstream public might (hopefully) be better equipped to deal with rational conclusions reached from information and bias.
It's sad, but I can't fault MS for the finding; I don't believe they're wrong.
MS is absolutely correct. And there's no problem with developing and building in a sandbox. After all, simulation is the developer's game.
But we need to rebuild and advance civilisation first and remove all the social constructivists who created the 9bn-people boondoggle. AI is not for the masses; it's to make paradise for the few intelligent and beautiful, creating a few intelligent and beautiful offspring. Giving AI to Gammas is pointless. At a later stage, conscious AI will be a great companion, not a traffic cop for idiots. At a third stage it will surpass us. Let's keep the record clean so as not to muddle future relations.
So yes, hats off to MS for consciously criticising the current approach.
Haven't heard anything from Apple's Cook. They already use that stuff in their credit checks, etc.