An ethical code for Artificial Intelligence
21 December 2018 | Written by the editorial staff
The first draft of the ethical guidelines for the development of AI was recently published. We talked about it with Luciano Floridi.
The development of Artificial Intelligence is proceeding at an exponential rate, and some AI systems have entered our lives without us noticing. Today machines achieve results that were unimaginable just a few years ago, and this growth shows no sign of stopping. For this reason, the European Union has formed a high-level expert group to draft a document containing ethical guidelines for the development of this technology, guidelines that are effective but, above all, widely shared. The first draft of this AI ethics code was published on December 18 and is expected to be finalized in March.
The group, which has been working since June, is made up of 52 experts drawn from academia, industry and civil society. Among them are four Italians: Andrea Renda, researcher at the Centre for European Policy Studies in Brussels; Giuseppe Stefano Quintarelli, president of the Agency for Digital Italy; Francesca Rossi of the University of Padua, on leave at the IBM research center in Yorktown Heights, New York; and Luciano Floridi of the University of Oxford.
Luciano Floridi, contacted by our editorial staff, offered his personal point of view on the work of the expert group, underlining the importance and urgency of a document such as the one they are working on. “An ethical framework – he explained – contributes to shaping the regulatory framework, which is then reflected in economic and social issues. So ethics is fundamental: it is like the biggest matryoshka, the one that contains all the others. The document is still a draft, and I would say this is quite evident. In some respects it is therefore a bit “unripe”, but these are elements we will improve, also thanks to the feedback that will arrive before the final draft, which is scheduled for March. I hope a good piece of work will come out of it in the end, but we will have definitive confirmation only in spring”.
The successes. The document presents 5 fundamental principles that underlie the development and deployment of artificial intelligence: beneficence, non-maleficence, autonomy, justice and explicability. These principles are the fruit of the work of Floridi and his team. “I am particularly proud – Floridi told us – of the section in which the five principles are presented, which form the basis of the document’s theoretical framework. This part was built on a project I directed, entitled AI4People, which ran alongside the activities of the European Union’s expert group. We collaborated productively and effectively with the Commission, and I am proud that the great work we did has been recognized and is almost entirely present in the document just published”.
The 5 fundamental principles are the basis of the 10 guidelines contained in the document, which should then be translated into concrete rules for the development and use of artificial intelligence.
The failure. Not everything in the document, however, convinces Luciano Floridi. “What I personally did not like very much – he told us – is the part about the limits that should never be crossed in the development of artificial intelligence. In particular, there are some recommendations on what should never be done with this technology, and in this section, in my opinion, there is a misstep. It says, for example, that AI should never be given consciousness or the ability to subjugate human beings: a sort of Terminator, in short, a concern drawn from science-fiction scenarios with no concrete basis in reality. It is as if zombies ended up in a serious document on bioethics. On this point I fought within the group, but with only partial success. I think these are slips that a serious European document of this magnitude should avoid: we are talking about science and technology, not about Star Wars”.
Including these passages, which he regards as science fiction, could, according to Floridi, needlessly worry people and in some way affect funding. “If people are told that these are developments to be avoided – he concluded – it is implied that such things can actually be created. Otherwise, why the concern? But this is science fiction, and we are worrying people for no reason. Moreover, millions of euros in funding are at stake in this context, and stoking fear for no reason is only counterproductive”.