Robotics and AI

AI to predict crimes: from science fiction to reality

2 July 2020 | Written by Alessandra Martina Ceppi

The controversial role of technological development in the fight against crime

In Impactscool Magazine we have already discussed the applications used, mainly in the United States, during demonstrations in support of the Black Lives Matter movement. In particular, we briefly covered the Citizen application, which lets users monitor the position of the police in real time.

 

Geolocated dashboards and crimes. In Italy, an AI web app has recently appeared that gathers all the open data on crimes that, until now, had not been systematized and published on the web: Mine Crime, which presents itself as “the only platform in Italy (but for how long?) for searching geolocated crimes”. It offers an interactive map, graphs and dashboards showing how crime is distributed within the chosen municipality. Moreover, a premium subscription grants, among other benefits, access to an exclusive community.
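For context, the technical core of such a platform, plotting geolocated crime records on an interactive map, is fairly straightforward. Below is a minimal sketch of the idea using the open-source folium library, with invented coordinates and incident types; it is purely illustrative and has nothing to do with Mine Crime’s actual code or data.

```python
import folium  # open-source mapping library: pip install folium

# Hypothetical geolocated crime records: (latitude, longitude, type).
# A real platform would pull these from municipal open-data feeds.
records = [
    (45.4642, 9.1900, "theft"),
    (45.4700, 9.1850, "vandalism"),
    (45.4580, 9.2000, "robbery"),
]

# Center the map on the chosen municipality (here, Milan).
crime_map = folium.Map(location=[45.4642, 9.1900], zoom_start=13)

# One clickable marker per incident.
for lat, lon, kind in records:
    folium.CircleMarker(location=[lat, lon], radius=6, popup=kind).add_to(crime_map)

crime_map.save("crime_map.html")  # open the file in a browser to explore
```

The hard part of a real product is not the map itself but collecting, cleaning and systematizing the underlying open data.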

The application, created by a start-up founded just this year, is part of a long-running trend that mixes advanced technologies with the most disparate sectors, putting into ordinary citizens’ hands applications and software that raise numerous questions: is it ethically correct to use them? Are they safe? Are they actually legal, or do they slip into a regulatory vacuum that Italian legislation, a slow and cumbersome machine, cannot keep up with?

 

The controversial use of AI to predict crimes: from science fiction to reality. In Steven Spielberg’s film Minority Report, loosely inspired by a story by the science fiction master Philip K. Dick, violent crime in a hypothetical dystopian future has been almost completely eradicated thanks to a sophisticated technology that allows the “potential” culprits to be arrested before they commit their crimes.

Today this science fiction vision looks increasingly like reality: numerous governments and law enforcement departments have decided to use artificial intelligence to support and simplify the war on crime, often with controversial results, so much so that some software has been accused of racism and, consequently, withdrawn from use.

An open letter, recently picked up by numerous industry publications including TechCrunch, was written by a collective of over 2,000 researchers and artificial intelligence experts who decided to take a public stand against research that was due to be published by Springer, the publisher of the prestigious scientific journal Nature. The research was announced by Harrisburg University of Science and Technology: a press release stated that a group of professors and a graduate student at the university had developed automated facial recognition software capable of predicting whether someone might, in the future, commit a criminal act. Not only that, the announcement went on to claim that “with an accuracy rate of 80% and with no racial bias, the software can predict whether someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.”

It is no wonder that so many experts felt compelled to intervene and make their voices heard about this announcement. In particular, their appeal asked the publisher to withdraw its intention to publish such controversial and risky research, which it in fact did; beyond that, it invited other potential publishers to reject similar research in the future. The reasons given are varied, but the focal point is caution, which must necessarily be exercised when dealing with issues of this kind.

 

The charge of racism and inefficiency. As anticipated, this type of technology and its use on the population have long been criticized for their ineffectiveness and for the ethical and regulatory implications they entail. One of the first agencies to adopt predictive systems in the fight against crime was the Los Angeles Police Department, in 2008: one program, LASER, identified the “hottest” areas where a violent firearm incident was most likely to occur, while another, PredPol, was supposed to help “calculate” and identify the areas with the highest rates of theft-related crime. Both programs were discontinued in 2019 following an internal departmental investigation that highlighted serious inefficiencies.
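To make the idea behind such hotspot tools concrete: at their simplest, they aggregate historical incident reports into a spatial grid and rank cells by incident density. The sketch below illustrates only that generic idea; LASER and PredPol are proprietary systems, and the grid size and coordinates here are invented.

```python
from collections import Counter

# Hypothetical geocoded incident reports as (latitude, longitude) pairs.
# Real systems ingest years of police report data.
incidents = [
    (34.0522, -118.2437), (34.0530, -118.2440),
    (34.0525, -118.2439), (34.0610, -118.3000),
]

CELL = 0.005  # grid cell size in degrees (roughly 500 m); an arbitrary choice

def cell_of(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate onto a discrete grid cell."""
    return (int(lat / CELL), int(lon / CELL))

# Count incidents per cell; the top-ranked cells are the "hotspots"
# where patrols would be concentrated.
density = Counter(cell_of(lat, lon) for lat, lon in incidents)
for cell, count in density.most_common(3):
    print(f"cell {cell}: {count} incidents")
```

The critique, it should be noted, is not of the arithmetic but of the input: if historical reports reflect the over-policing of certain neighborhoods, the resulting “hotspots” simply reproduce that pattern.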

Krittika D’Silva, a computer science researcher at the University of Cambridge, noted that “numerous studies have now shown that machine learning algorithms, and facial recognition software in particular, have racial, gender and age biases”, as supported, for example, by a 2019 study indicating that facial recognition works poorly on women, on older people, and on Black or Asian people.
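The kind of disparity such studies describe is typically surfaced by breaking a model’s accuracy down by demographic group instead of reporting a single aggregate number. Here is a minimal sketch of such an audit, with invented groups, labels and predictions:

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, true label, prediction).
# A real audit would use a labelled benchmark test set.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    hits[group] += int(truth == prediction)

# A single headline figure (like "80% accuracy") can hide large per-group gaps.
print(f"overall accuracy: {sum(hits.values()) / sum(totals.values()):.2f}")
for group in sorted(totals):
    print(f"{group}: accuracy {hits[group] / totals[group]:.2f}")
```

On this toy data the overall accuracy looks acceptable while one group fares markedly worse than the other, which is precisely the pattern the cited studies report.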

 

The good, the bad and the ugly of technological progress. Let’s be clear: Artificial Intelligence in itself is neither good nor bad; it is not born racist or more inclined to “blame” minorities. As the open letter points out, machine learning programs are not neutral: these systems are fed with available data sets, which are purchased by the companies that then develop them. The responsibility for verifying the impartiality of the software lies directly with those who oversee the data sets, the teams that work on the project and the training of the machine. What information is used, and how it is to be processed, is the result of human work. The development of Artificial Intelligence does not pass through software alone but also through psychology, philosophy, anthropology and the social sciences.
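In practice, “verifying the impartiality” of a data set starts with questions this simple: how are labels distributed across the groups represented in the training data? A minimal sketch, assuming a hypothetical labelled data set:

```python
from collections import Counter

# Hypothetical training set of (demographic group, label) pairs.
# If one group is far more often labelled "positive", a model
# trained on this data will learn and reproduce that skew.
training_data = (
    [("group_a", "negative")] * 90 + [("group_a", "positive")] * 10
    + [("group_b", "negative")] * 40 + [("group_b", "positive")] * 60
)

counts = Counter(training_data)
for group in ("group_a", "group_b"):
    total = sum(n for (g, _), n in counts.items() if g == group)
    positive = counts[(group, "positive")]
    print(f"{group}: {positive / total:.0%} positive out of {total} examples")
```

No algorithmic cleverness downstream can compensate for a skew like this; it has to be caught by the people who curate the data, which is exactly where the open letter places the responsibility.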

Returning to the application from which our reflection began: where is the boundary? Where should technology stop? Where should designers and entrepreneurs step back? The risk is that of making the management of issues that are already controversial for human beings even more complicated.

Alessandra Martina Ceppi

Junior marketing & communication manager at a software house in Milan. I always ask myself a lot of questions. Interested in the impact of technology on people's lives and the future.
