Artificial intelligence and law: the revolution has begun
24 January 2019 | Written by Giulio Siciliano
The use of A.I. in judicial systems is now a reality: but have we asked ourselves all the necessary questions? We discussed it with Luciano Floridi
On December 4th, 2018, the European Commission for the Efficiency of Justice (CEPEJ) adopted the first European ethical charter on the use of artificial intelligence in the administration of justice.
Despite the growing attention to innovation in general, and to A.I. in particular, the adoption of this document has a revolutionary scope: never before had supranational reflection on the use of artificial intelligence been given such a specific focus as the administration of justice.
The document outlines five general principles for the use of A.I. in this field:
- Respect for fundamental rights: ensuring that the design and implementation of A.I. tools and services are compatible with fundamental rights;
- Non-discrimination: specifically preventing the development or intensification of any form of discrimination between individuals or groups of individuals;
- Quality and security: processing judicial data and decisions with a multidisciplinary approach and, above all, in a secure technological environment;
- Transparency, impartiality and fairness: making data-processing methods accessible and understandable, and authorizing external audits;
- The principle of "under user control": precluding a prescriptive approach and ensuring that users are informed actors, in full control of the choices made.
The document has already been richly commented on by many authoritative experts, who have both praised its innovative scope and raised some concerns. An exhaustive treatment of all these principles (especially given the complexity of the issues they inevitably touch on) would require a legal analysis that will not be attempted here. For this reason, we decided to focus on the more purely ethical dimension of the document, with the help of one of the most authoritative scholars on the subject: Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford, an expert in the ethics of innovation and new technologies, and a member of the European expert group contributing to the ethics charter for the development of A.I.
"What I will tell you," Floridi stressed before the interview, "is my personal view. I do not speak on behalf of the group, but as an individual scholar and researcher."
How necessary was a document of this kind, aimed at outlining ethical guidelines for the use of A.I. in the administration of justice?
Let's start right away with a difficult question! (He laughs.) I think it was absolutely necessary for the European Union to adopt a document taking a position on the ethical use of A.I.
This is because ethics is the vantage point from which everything else cascades, from legislation to self-regulation.
Indeed, a structured ethical framework helps shape an equally structured legislative framework, which in turn allows a detailed definition of economic and business aspects. The definition of these two areas then gives us, as citizens, conscious access to certain actions and information.
In this context, ethics must be seen as the largest matryoshka doll, the one containing all the individual areas and contexts in which we will see A.I. operating.
What we must avoid at all costs is that the creation of this framework remains a mere declaration of intent. If we simply tick it off the to-do list without using it to inform the general practices and self-regulation procedures of corporate and government production, the expectations of the population, and the legislation supporting all of this, we will certainly have wasted a great opportunity.
In Europe we took a step back and asked ourselves: in what kind of society would we like to live? How do we want to promote the growth of this society through the use of A.I.? If this is the goal to be achieved, then we can say we have done a good job.
With reference to this last point: the paper surveys the state of the art on the use of A.I. in the administration of justice across different legal systems. It seemed to me that some uncertainty emerged from experiences, such as the American one, where A.I. was deployed in the judicial system without these questions being asked first.
I would start with a Latin phrase: festina lente. Let us hasten, but with caution. Our solution reflects this choice: we want to proceed quickly, but while paying attention to the implications of what we are creating. I think it is irresponsible to go fast without worrying about regulation and, above all, without knowing where we want to go.
If we wanted to draw two caricatures, the result would be this: on one side of the Atlantic (in America), innovation at all costs and highly liberalized markets, but without detailed rules; on the other side (in Europe), top-down regulations and much more rigid markets, but with innovation under control.
Beyond this gross simplification, I believe we Europeans have done well to move forward by creating a frame of reference, and that the Americans care too little about the ethical and legislative implications of A.I. In Europe we have felt the need for prevention, so as not to reap in the future the damage resulting from the misuse of a powerful and revolutionary tool like A.I.
Self-driving cars, definitely. In the United States we have already recorded deaths related to this technology. I do not think a fair answer to that fact is "On the whole we will save more lives": for the moment we are recording the deaths, and we cannot yet know how many lives have been saved. Beta-testing, at citizens' expense, a technology that is not yet well understood certainly does not satisfy me. Not by chance, we have already witnessed acts of vandalism against driverless cars.
Is there a risk of putting too many limits on the development of innovative solutions?
Of course, the risk is real. I recently reviewed a document (I cannot give you too many details, since it has not yet been approved) in which the technological requests and expectations are absolutely absurd.
Imagine demanding that at any time, whether before or after data processing, citizens be given the opportunity to extract their data from the A.I.'s processing. It would be like someone asking you for some sugar to make a cake and then, once the cake was baked, thinking again and asking for the sugar back. It cannot be extracted; it has already been used.
Asking for something like that perhaps means that, in the end, you do not really want to make the cake. Between not bothering to create rules and creating too many of them there is a middle ground: that is where we have to position ourselves. Otherwise we could find ourselves in the embarrassing situation in which Europe has the ethics of the I, but not the A.I.
What do you think are the solutions for finding this compromise?
Certainly, in Europe we should talk more with companies and with the worlds of research and business development. Only through this dialogue can we avoid situations like "if we impose these limits, we cannot do this work".
I think there is also a lot of hypocrisy in this. We will end up using many of these technologies anyway, even if they have been tested elsewhere because here we considered their development unacceptable. Look at GMOs: here in Europe we have banned them, but we all know that if we want to feed the world we must develop them quickly, to help solve hunger and malnutrition worldwide in a context of radical climate change. I would like to avoid these hypocrisies, and I would like Europe to assume mature ethical leadership in favor of innovation and development.
In light of what is happening, it seems clear that, once again, we will see opposing ideologies and blocs in the development of A.I. Although common law systems may be more predisposed to receiving changes like this, there is no doubt that the whole legal world desperately needs to innovate. If it is true that the law "only commits fouls in reaction", it is also true that today we can no longer afford for that reaction to be so slow with respect to what is happening in the world.
The race for A.I. has now begun. Many are already drawing comparisons with the 19th-century gold rush. Artificial intelligence is already a powerful instrument, and in the future its impact on society can only grow.
The countries that are first to master it will have a very strong competitive advantage over the others: what we must hope is that the development of this tool remains open enough to seize every opportunity and, at the same time, aware of the risks and questions it entails.