An AI algorithm that claims to predict possible crimes up to a week in advance has been tested by various US police departments in recent years, with some units even integrating it into their daily work. However, a new investigation by The Markup and Wired has found that the technology has not lived up to its initial expectations.
For instance, in the city of Plainfield, New Jersey, the AI predictor proved to be both expensive and ineffective. Researchers examined 23,631 predictions generated by Geolitica's software, formerly known as PredPol. The analysis, covering February 25 to December 18, 2018, found that fewer than half a percent of the predictions lined up with a crime actually reported in the predicted place and time window.
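The figure reported is essentially a hit rate: the share of predictions for which a matching crime report was later filed. The Markup's actual methodology is more involved, but as a rough, hypothetical illustration (the field names and numbers below are made up, only the order of magnitude mirrors the article), such a rate could be computed like this:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """One prediction record: a predicted location and whether a matching
    crime report was later filed there (hypothetical schema)."""
    box_id: str          # predicted grid cell / patrol box
    matched_crime: bool  # True if a crime report fell inside the box and window

def hit_rate(predictions: list[Prediction]) -> float:
    """Fraction of predictions that coincided with an actual crime report."""
    if not predictions:
        return 0.0
    hits = sum(1 for p in predictions if p.matched_crime)
    return hits / len(predictions)

# Made-up data of the same order as the article:
# 23,631 predictions, roughly 100 of which matched a report -> under 0.5%
sample = [Prediction("box-1", True)] * 100 + [Prediction("box-2", False)] * 23_531
print(f"hit rate: {hit_rate(sample):.2%}")  # ~0.42%
```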
Geolitica was originally designed to identify "hot spots", areas judged to carry a heightened risk of criminal activity, based on analysis of historical crime data.
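The article does not describe the underlying model, so the following is only a minimal sketch of the general "hot spot" idea: bucket past incidents into map grid cells and rank the cells by recent incident counts. This is not Geolitica's or PredPol's actual algorithm, and the coordinates are hypothetical.

```python
from collections import Counter

def rank_hot_spots(incidents: list[tuple[float, float]],
                   cell_size: float = 0.005,
                   top_n: int = 5) -> list[tuple[tuple[int, int], int]]:
    """Toy hot-spot ranking: map (lat, lon) incident coordinates onto a grid
    and return the cells with the most incidents. A simplified illustration,
    not the model used by Geolitica/PredPol."""
    counts = Counter(
        (int(lat / cell_size), int(lon / cell_size)) for lat, lon in incidents
    )
    return counts.most_common(top_n)

# Hypothetical incident coordinates
incidents = [(40.6137, -74.4154), (40.6139, -74.4150), (40.6200, -74.4300)]
for cell, n in rank_hot_spots(incidents, top_n=2):
    print(f"grid cell {cell}: {n} incidents")
```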
Captain David Guarino of the Plainfield Police Department was openly critical of the technology, saying: "Why did we adopt PredPol? We wanted to fight crime more effectively. If we knew where to expect trouble, the job would genuinely become easier. However, I highly doubt that it helped. It seems like we hardly used it, if we used it at all. Consequently, the decision was made to abandon it."
Experts have long questioned how PredPol works, especially after an earlier study found that the algorithm disproportionately targeted low-income neighborhoods and areas with large ethnic minority populations.
Geolitica has not yet commented on the study’s results. However, according to Wired, the company plans to cease operations by the end of this year, and some team members have already joined another firm in the law enforcement field.