
How AI could help police predict crimes a week before they happen

  • Researchers have studied how AI can be used to predict crime
  • One study built a model with 90% accuracy
  • Technology has been shown to have biases

 


(NewsNation) — Pre-crime: Solving crimes before they happen.

Up until now, the concept has only been possible in movies such as “Minority Report.” In the film, and the short story it was based on, psychics tip off authorities about murders before they happen so police can arrest the would-be killers before anybody dies.

The film is set in 2054 — 30 years from now — but experts say pre-crime predictions are already possible and we don’t need psychics to make them. We just need artificial intelligence.

In 2022, data analysts and social scientists at the University of Chicago developed an algorithm they say can predict crimes one week in advance.

They loaded it with three years’ worth of data on violent crimes, including homicides and assaults, as well as property crimes such as burglary and car theft. Then, the algorithm divided the city into tiles roughly a thousand feet across and calculated hot spots based on where recent crimes had happened.
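To make the idea concrete, here is a minimal sketch in Python of what that grid-and-count approach could look like. It is not the University of Chicago team’s actual model: the tile size comes from the article, but the tile_of mapping, the top_fraction cutoff and the sample coordinates are all illustrative assumptions.

    # A minimal sketch of the grid-and-count idea, not the actual study model.
    from collections import Counter

    TILE_FEET = 1000  # tiles roughly a thousand feet across

    def tile_of(x_feet, y_feet):
        # Map a crime location (in feet on a city grid) to its tile.
        return (int(x_feet // TILE_FEET), int(y_feet // TILE_FEET))

    def hot_spots(recent_crimes, top_fraction=0.05):
        # Return the most crime-dense tiles from recent incident locations.
        counts = Counter(tile_of(x, y) for x, y in recent_crimes)
        ranked = counts.most_common()
        keep = max(1, int(len(ranked) * top_fraction))
        return [tile for tile, _ in ranked[:keep]]

    # Hypothetical incident coordinates (in feet) from a trailing window of data.
    incidents = [(1200, 340), (1450, 600), (980, 410), (8100, 7600), (1300, 520)]
    print(hot_spots(incidents))  # prints [(1, 0)], the tile with three recent crimes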

A week later, when the team reviewed the actual police blotters, they found that 90% of Chicago’s homicides, assaults and thefts had happened in the hot spots selected by the algorithm. They tried the model in seven other cities, and the results were almost identical.

But what about catching or even preempting the perps? It turns out the algorithm can help with that, too.

Chicago police used it to compile a list of people who might one day be involved in a violent crime, either as a victim or a perpetrator. The list was made up of people who had spent time in one of the algorithm’s hot spots and whose demographics — their age, gender and race — matched those of past criminals and victims.

Other AI crime models take it a step further.

For years now, judges and parole officers have been using artificial intelligence to calculate an inmate’s chance of reoffending. Factors that feed into those algorithms include prior convictions, education and past employment. The model then estimates how likely that person is to commit another crime and what kind of crime it might be.
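As a rough illustration of how such a score might be computed, here is a toy logistic model in Python. The factors mirror the ones mentioned above, but the weights, the baseline and the example record are invented for this sketch and do not come from any real tool used by courts or parole boards.

    import math

    # Invented weights for the factors named above; no real tool uses these numbers.
    WEIGHTS = {"prior_convictions": 0.45, "no_high_school": 0.30, "unemployed": 0.25}
    BIAS = -2.0  # arbitrary baseline log-odds

    def risk_score(record):
        # Combine the factors into a 0-to-1 chance of reoffending via a logistic curve.
        z = BIAS + sum(weight * record.get(name, 0) for name, weight in WEIGHTS.items())
        return 1 / (1 + math.exp(-z))

    # Hypothetical inmate: three prior convictions, no diploma, currently unemployed.
    inmate = {"prior_convictions": 3, "no_high_school": 1, "unemployed": 1}
    print(f"{risk_score(inmate):.0%}")  # roughly 48% under these made-up weights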

The ultimate goal, say the promoters of AI-based crime fighting, is to combine these models to pinpoint where a crime might happen, who the victim might be and which potential suspects were in the area at the time.

On paper, it just seems like a great idea. So, why isn’t it being widely implemented?

It turns out there’s a fly in those models — human beings.

Since the algorithms are chewing on data from people, they have the very same blind spots and biases that people have. The crime hot spots pinpointed in Chicago and the other cities were mostly Black and Latino neighborhoods. The list of potential victims and suspects included 56% of the Black men in Chicago, whether they had criminal records or not.

The algorithms used for parole decisions were far more likely to wrongly label white inmates as low risk than inmates of color.

That doesn’t necessarily mean there isn’t a future for pre-crime predictions. Experts say the best way to fix bad data is with more data: the more information an AI system gets, and the more time it has to learn, the better it will be at spotting the prejudices and flaws in our own minds.

But that could take a while, probably decades. We may even have to wait until 2054.


Copyright 2024 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

 
