News: Researchers build AI-powered security systems that predict criminal behavior, claiming 82.8% accuracy in predicting felonies with CCTV monitoring

When I see things like this I sometimes think to myself:

Who were the unscrupulous software developers who actually agreed to build this and WTF is wrong with their brains?

It's like when Google had layoffs a few months ago that hit their software engineering teams. Sorry, no. I have zero sympathy for you. You willingly built a spying machine for Google to steal our data. Perhaps you even deserved to lose your job.

I just don't understand what is wrong with these people.
 
I predict it will predict whatever it was trained on.

If it ever comes to a Western country like the US or Western Europe, where the population is far less ethnically homogeneous than South Korea's, it will run into a problem.

That's because some ethnicities have higher crime rates than others. This is denied by the left and used by the right to justify their racism. But it is the result of several factors (including structural racism) that push these ethnicities toward the lower end of society, income-wise. And that means they are more susceptible to turning to crime.

Since the model presumably will not have access to the socio-economic profile of the person it is monitoring via CCTV, you could try to make it ignore skin color, but that would reduce its efficacy, since skin color is, statistically, a predictor. We, as humans, understand that we should not use this parameter, because we have a moral compass. AI models don't. And judging by the behavior of some police officers, their own 'training' incorporates this parameter too.
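
To make that concrete, here is a minimal synthetic sketch in Python with scikit-learn. All the data and variable names are made up for illustration, nothing here comes from the article's actual system. It shows that when a visible trait merely correlates with the socio-economic factor that really drives the outcome, a model allowed to see that trait scores better than one that isn't, which is why "just hide it" is a tempting but lossy fix:

# Hypothetical toy example: the visible trait has no causal role,
# it only correlates with income, which is what drives the outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 20_000

trait = rng.integers(0, 2, n)                              # visible trait (made up)
income = rng.normal(loc=-0.8 * trait, scale=1.0, size=n)   # trait shifts income distribution
# The outcome depends on income (socio-economics), NOT on the trait itself.
y = (income + rng.normal(scale=1.0, size=n) < -0.8).astype(int)

# A weak behavioral cue that a camera could plausibly pick up.
behavior = 0.3 * y + rng.normal(scale=1.0, size=n)

X_with = np.column_stack([behavior, trait])   # model allowed to see the trait
X_without = behavior.reshape(-1, 1)           # trait withheld from the model

for name, X in [("with trait", X_with), ("without trait", X_without)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.3f}")

On this made-up data, the "with trait" model wins by a few points of accuracy even though the trait causes nothing, exactly because it proxies for income. That is the moral-compass problem in miniature.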

TL;DR: very bad idea.
 
The ability of the AI to track a known habitual criminal and detect his "Pre-crime indicators" is amazing.

Unfortunately, the ability to track every single citizen and detect any "undesirable" behavior is just a matter of scaling up and changing a setting.
 
Heh. A few months ago with AI, it seemed we were creating the beginnings of Skynet; now it seems we are creating Project Insight.

🤣 🤣 🤣 🤣 🤣 🤣
 
Has anyone noticed that in retail stores, facial recognition prompts security to target specific people, even when they know they are wrong, while ignoring actual thieves? It targets mothers and their children, families, the same ethnicities: cultural indifference, or racism.

Does everyone know that specific people and their likenesses were put into the AI, language models, CCTV, facial recognition, etc.? The families that do not like certain people, and hold multi-generational grudges, target whomever they choose, even when they were at fault in the conflicts. It was designed to limit people's access to goods and essentials, to make them always feel uncomfortable, watched, and harassed, and even to provoke violent attacks or incidents to further justify the personal vendettas. Facial recognition was mass-implemented in what, 2010-2012?

It is uncomfortable. But there are no regrets.