Robotics and AI

Artificial intelligence and discrimination: are human rights at risk?

25 February 2019 | Written by Giulio Siciliano

Algorithmic decision-making is revolutionizing the world, but are we sure the revolution is for everyone?

The Council of Europe Anti-Discrimination Department has recently published a study by Frederik Zuiderveen Borgesius, a professor at Radboud University in Nijmegen, on new forms of discrimination related to artificial intelligence and algorithmic decision-making processes.
Documents of this kind from supranational institutions are becoming more numerous, a trend that echoes the growing attention individual countries are paying to the development of the technology sector and its regulation, as seen at the most recent World Economic Forum. If it is true that “with great power comes great responsibility”, institutions are finally beginning to understand that the debate and reflection on the implications and risks of new technologies can no longer be postponed.


The document. The study first sketches the state of the art on the use of AI at the European level, in both the public and private sectors. Its examples, from the management of one’s bank account to large-scale staff selection in public administration, emphasize that AI-based decision-making can have concrete and sometimes large-scale effects on people.
However, there is a serious problem behind all this. Artificial intelligence and algorithmic decision-making may seem rational, neutral, and impartial, but this is not always the case: if not adequately controlled, both technologies can lead to new forms of unjust and illegal inequality.

The study, in describing the problem, focuses on four fundamental aspects:

  1. The socio-economic fields in which AI and algorithmic decision-making can lead to new forms of discrimination;
  2. The regulatory guarantees and safeguards currently in place with respect to AI;
  3. Recommendations to organizations using AI to prevent discriminatory drift;
  4. What types of action (legal, regulatory, self-regulation) can reduce these risks.


What forms of discrimination?
Artificial intelligence can produce discrimination in many different ways: from the definition of classes and target variables to the methods of data collection, through to the (not too remote) possibility that an AI-based decision-making process “learns” from training data that is already discriminatory. Nor should we underestimate the fact that these systems can also be used intentionally for discriminatory purposes. Fortunately, effective remedies already exist for these widely known forms of discrimination.
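To make the point about biased training data concrete, here is a minimal sketch in Python with scikit-learn (synthetic data and hypothetical feature names, not drawn from the study) of how a model trained on historically biased hiring decisions absorbs that bias as a rule:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: a protected attribute (group 0/1) and a genuinely
# job-relevant score; both groups have the same score distribution.
group = rng.integers(0, 2, n)
score = rng.normal(0, 1, n)

# Biased historical labels: past recruiters hired on score but penalized
# group 1. The model sees only the outcomes, never the reason behind them.
hired = (score - 1.5 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, score]), hired)

# The learned coefficient on `group` is strongly negative: the bias in the
# historical data has been absorbed as a decision rule.
print(dict(zip(["group", "score"], model.coef_[0])))
```

Run as written, the coefficient on `group` comes out strongly negative even though the two groups are identical on the job-relevant score: the discrimination lives in the labels, and the model faithfully learns it.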

The real problem is the new forms of discrimination that fall outside current laws. Most existing non-discrimination statutes (such as the ECHR, the European Convention for the Protection of Human Rights and Fundamental Freedoms, long considered one of the pillars of the protection of fundamental rights) apply only to discrimination based on protected characteristics, such as skin colour. These statutes do not apply if an artificial intelligence system invents new classes, not correlated with the protected characteristics, to distinguish between people. Such differentiations are not necessarily unjust, but they could be, thereby reinforcing social inequality. All these considerations stem from the observation that AI systems often behave like impenetrable “black boxes”: what escapes evaluation is not the goal the AI is given, but the way in which it reaches its decisions.
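As a hedged illustration of differentiation along a class the law does not protect, the sketch below (again synthetic data and hypothetical feature names) trains a pricing model that ends up systematically scoring applicants by browser choice, a “new class” with no protected status:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000

# Two features, neither of them a protected characteristic.
browser = rng.integers(0, 2, n)   # e.g. which browser the applicant uses
income = rng.normal(50, 10, n)

# In the historical data, repayment happens to differ slightly by browser
# (say, via device price as a hidden confounder the data never records).
repaid = income - 3 * browser + rng.normal(0, 5, n)

model = LinearRegression().fit(np.column_stack([browser, income]), repaid)

# The model now systematically scores one browser's users lower: a new
# class of disadvantage that sits outside protected grounds such as skin
# colour, and hence outside most non-discrimination statutes.
print(dict(zip(["browser", "income"], model.coef_)))
```

Nothing in existing non-discrimination statutes would flag this split, which is precisely the gap the study points to: the differentiation may or may not be unjust, but it escapes the protected-characteristics framework entirely.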


How to reach shared decisions? To use a metaphor for the current state of the debate around AI, we could imagine a dark path where only the starting point and the destination are illuminated. The participants in the debate basically agree on these two points: the problems and needs (the departure) that underlie the development of AI are known to all, as are the objectives to be reached once those problems have been solved (the arrival).
What divides them is the path: which intermediate decisions will we have to make in order to get there?
In how we answer this question we can observe the profound differences between the approaches adopted by individual countries. To return to the metaphor of the path, this document once again confirms the European approach to the issue: although the destination is visible, Europe is taking care to light the path as well, even at the cost of arriving at the final goal later than others.

It is equally true that, in the name of speed, other countries have adopted very different approaches. China, for example, in its interpretation of how technology should be governed, is adopting ever more pervasive forms of control over society, with scant regard for privacy and confidentiality.
Amid these reflections, a barely veiled assumption about new technologies seems to be spreading at the global level: in the race towards progress, something will have to be sacrificed on the altar of innovation. Europe, in judging values such as privacy and human rights to be fundamental, has perhaps already decided to sacrifice part of the economic return that being first to certain technologies could have brought. Other countries, as we have seen, are making diametrically opposed assessments. Probably only time will tell who was right.

Giulio Siciliano

Giulio Siciliano graduated in law from the LUISS Guido Carli University of Rome and works as a consultant in the field of Management and Innovation.
