For more ethical technology, we need to move ahead
31 January 2020 | Written by Thomas Ducato
Our column dedicated to the ethics of technology continues: we interviewed Piercosma Bisconti Lucidi, an expert on the socio-cultural implications of technology
On February 28, in the Vatican, big names in the tech industry such as Microsoft and IBM will meet to discuss the ethical impacts of AI and to sign a ‘Call for Ethics’: a document that aims to help companies assess the effects of technologies connected to artificial intelligence, the risks they entail, and possible regulatory paths, including at the educational level. It is an important sign of how necessary it has become to couple technological development with a profound reflection that goes hand in hand with innovation.
Our column dedicated to the ethics of technology continues: after Steven Umbrello’s reflections, today we interview Piercosma Bisconti Lucidi, PhD student in “Human rights and global politics” at the Sant’Anna School of Advanced Studies and an expert on the socio-cultural implications of interactive technologies.
When did you start talking about these issues in Italy?
First, a premise: this is an interdisciplinary field, and those who deal with it are often not experts in the human or social sciences. Many issues, in fact, especially at the beginning, were raised by engineers themselves.
In Italy, the most specific studies can be said to have begun when the term “roboethics” was coined by Gianmarco Veruggio, professor at the University of Genoa. In the early days robotics was at the center of these studies: the reflection on artificial intelligence came later, with the explosion of machine learning and deep learning. At the Scuola Sant’Anna in Pisa, a research team has worked on these issues since 2005, among the first to reformulate the interpretative framework of these technologies on the legal, ethical, philosophical and social levels. The basic assumption is that each technology influences, and is influenced by, the social, legal, cultural and economic context in which it is inserted.
And on a European level?
At the European level, the most relevant document is the one produced by the expert group on artificial intelligence, in which some of the central elements of the debate have been highlighted thanks to the contribution of some of the most active scholars in the field. The work identifies the main ethical concerns, with strong social implications, related to the development of this technology: among them, accountability, the need to explain decision-making processes and their legal implications, and the management of bias and errors.
But how can we move from “theory” to practice?
It is a central issue. Today there is no real academic glue uniting engineers with social scientists and, more generally, with humanists. What happens, therefore, is that the work of programming and creating technology takes place in parallel with the reflection on its impacts: engineers do not concern themselves with the ethical and social implications, while humanists are not aware of the technical details and struggle to get into the specifics of some problems.
What does this imply?
Let’s take an example: general AI is still distant, so worrying today that machines might take control makes no sense. It is an unattainable goal for at least the next 50 years. Social scientists, however, do not have full knowledge of the state of the art of technological evolution and risk reflecting on aspects that are irrelevant today, leaving out the real problems.
So you only face problems when they arise…
Exactly. Instead, anticipatory governance of technological innovation would be necessary: we need to move ahead of the arrival of a technology, bringing together the different actors involved in the process, from multinationals to research centers to governments. Something is being done, but it is still not enough. At the academic level, too, there are aspects to work on, and an integrated approach is needed: today the work of social scientists comes after the development of a technology, so they can give an opinion but cannot contribute to its creation. This leads to very “high-level” guidelines that are difficult to apply at a concrete level.
For example?
Saying that AI must respect human dignity is a very sound principle. But what does it mean? How can engineers apply it to the algorithms they develop?
This requires an integrated approach, with interdisciplinary teams in which engineers, social scientists and philosophers work together to define what artificial intelligence can do on a technical level and what it must do from an ethical point of view. It would save both time and trouble.
When it comes to ethics, how can cultural differences be overcome to find common lines for everyone?
The comparison between the ethical sensibilities of different cultures will be important, but today it is a question we are not really asking ourselves yet. The discussion is still centered at the academic level, where cultural difference is perhaps felt less. At the public level, however, there is still little talk about these issues, especially in some areas of the world. The issue will become extremely topical when AI begins to have a more visible impact on individuals’ lives, probably with the spread of robots and autonomous systems. An institutional push would be needed to involve companies, universities and individuals in the debate before it is too late.
What are the central themes today?
For example, as mentioned, there is a lot of talk about biases, which belong not to the machine but to the data provided by humans. This leads to an interesting reflection: engineers are now directly involved in ethical and regulatory issues, which was not the case before. Some biases are harder to avoid because they fall within the sphere of “preconceptions”, central elements of human nature that arise from our experience. Some of these, however, we do not want to transmit to the machine, which not only universalizes the bias but multiplies it. In this sense, analyzing technology can also be useful for reflecting on human nature and society: AI can help us understand which elements we want to keep or strengthen and which to eliminate.
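The point about bias being universalized and multiplied can be sketched with a toy example. The data and the deliberately naive “model” below are entirely hypothetical, chosen only to illustrate the mechanism: a modest statistical skew in historical decisions becomes an absolute rule once automated.

```python
from collections import defaultdict

# Hypothetical historical hiring decisions: group A was approved 70% of
# the time, group B only 30% -- a human bias embedded in the data.
history = [("A", 1)] * 7 + [("A", 0)] * 3 + [("B", 1)] * 3 + [("B", 0)] * 7

def train_majority_rule(data):
    """Learn the majority outcome per group (a deliberately naive 'model')."""
    counts = defaultdict(lambda: [0, 0])  # group -> [rejections, approvals]
    for group, outcome in data:
        counts[group][outcome] += 1
    return {g: int(c[1] > c[0]) for g, c in counts.items()}

model = train_majority_rule(history)

# The learned rule turns a 70/30 statistical skew into a 100/0 policy:
# every A is now approved and every B rejected. The bias in the data is
# not just reproduced -- it is universalized and amplified.
print(model)  # {'A': 1, 'B': 0}
```

A real machine-learning model is far more complex than this majority rule, but the dynamic is the same: the model has no bias of its own, yet it distils and hardens the bias present in the data it was given.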
What will be the main challenges for the near future?
In the next 20 years, one of the big challenges will be the automation of work. We will have to understand whether we can transform the production process and change the employment sector of many individuals in a short span of time. We should change the economic system in which we move, in the hope of freeing ourselves from jobs we don’t want to do. But if we are no longer part of the production process, we as human beings will have to completely restructure the way we relate to life.
Another very important aspect will be the social one, the interaction with machines: this is the area with which social robotics is concerned. It will be a long and complex process that will change the landscape of our interactions.
Finally, we should pay attention to the modification of political processes. I don’t think AI will bring greater democratization of these processes, also because I don’t think it will be used in this way.