Towards an ethics of technology
17 January 2020 | Written by the editorial staff
Philosophy and technological development, two apparently distant worlds, have never been so close: an integrated approach is needed. We discussed this with Steven Umbrello, Managing Director at the Institute of Ethics and Emerging Technologies.
The development of emerging technologies requires a constant influx of new engineering approaches, scientific studies and refinements of technique, but to reach the best of the possible futures it is essential that ethically driven philosophical thought develops at the same pace. Steven Umbrello, Managing Director at the Institute of Ethics and Emerging Technologies, works on precisely these issues. After studying philosophy, ethics of science and epistemology in Canada and the United Kingdom, he is now pursuing a doctorate in the Philosophy department of the University of Turin, focusing on the role of philosophy in understanding the digital world.
What are the topics of your research?
My research primarily falls under the larger umbrella of the philosophy of technology, and more specifically within engineering ethics. Currently, my primary research interests surround artificial intelligence systems and the design methodologies we can employ to design those systems for human values from the outset, rather than trying to build values in after the fact. To this end, I have long been a proponent of what is known as the value sensitive design (VSD) approach: a principled approach to technological design that takes as its starting premise that technologies are not value-neutral, not just tools, but are always designed with values embedded in them. How we can modify this approach to be better suited to the varied development of AI systems is what I am currently exploring.
What are the main themes of the ethical and social debate on the development of emerging technologies?
I think the main debate around emerging technologies is the underlying divide between technological determinism and interactionalism. The former camp, which is most predominant in the public sphere and championed primarily by Silicon Valley gurus, holds that technological development is inevitable and determines the values of society and culture. Interactionalism, which is dominant primarily in academia and in the philosophy of technology more specifically, argues that technology and society co-construct and co-vary with one another. This position allows us to guide technologies towards desirable futures, rather than be merely guided by them.
What differences are you finding between Italy and the other countries in which you have worked?
Italy, like many countries other than the United States, the United Kingdom and the Netherlands, is unfortunately lagging somewhat behind in realizing the value of serious ethical deliberation with regard to the design and development of emerging technologies. The primary driver of innovation has been economic value, rather than the other, irreducible values that are important to stakeholders globally. This philosophical underpinning, however, is changing. We are seeing a shift towards responsible innovation initiatives, the introduction of social impact assessment hubs, as well as new university programs and research institutes oriented towards this goal. What will ultimately be needed, however, is a genuine synthesis between the public and for-profit spheres, one that allows these social and ethical considerations to converge and be actualized.
At European level there are several activities focused on the ethical development of artificial intelligence. How is it going?
Progress is progress, no matter how small. But small and slow it is. The primary difficulty with the ethical development of AI, as has been true of ethical progress in any form over the millennia, is reaching ethical agreement. Although the social, economic, legal and cultural impacts of AI are almost universally acknowledged at the EU level, if not globally, the problem is the next step. A failure to translate ethics into practice is quite dangerous given the impacts that recalcitrant AI can have, not to mention simply misused AI systems. There has been some progress in academia in consolidating the hundreds of ethical codes and guidelines into a single, small set that accurately reflects the values of stakeholders; the next step, however, is finding the means by which engineers and designers can implement these values early on and throughout the design of such systems.
While something is moving academically, there is still little awareness of these issues in companies and among citizens. How can this gap be overcome?
This is a general gap between the ivory tower and the public at large. The motivations are varied, ranging from scholars' personal disinterest in public perception and awareness to the capitalization of scholarship by large publishers, which makes research inaccessible to the public, if not to other scholars more generally. There are both personal and economic barriers to the dissemination of scholarship that would otherwise change public perception and understanding of these issues. This matters because the publics that typically engage with these issues tend to be direct stakeholders, and are thus of central value to the design of these systems.
Academics as well as designers need to understand the value of engaging stakeholders as a fundamental and unignorable part of the design process of any technology. Scholars involved in these research programs, and not merely as researchers formulating or strengthening these methodologies, must also engage with the public in order to more accurately reflect how things actually work in the real world. The stakes of poorly designed AI, or of other emerging and transformative technologies, are too high for the work to remain purely academic or conceptual; it must be situated in real-world contexts.
How do you imagine the future of relationships in an increasingly technological and connected world?
As with any transformative technology, there will always be beneficial as well as negative outcomes. Novel technologies present us with possible worlds that involve significant and fundamental changes to society. I see no reason to be overly optimistic or pessimistic. A cautious optimism that we will find the means to adapt to and deal with transformative technologies will most likely prevail, given the amount of resources and attention devoted to the topic at a global level, but this should not make us complacent about any supposed certainty of beneficial outcomes. Similarly, imagining how the future will be is useful only insofar as it helps us speculate about ethics and impacts; making any strong prediction about our relationships with technologies ultimately depends on what those technologies are, how they were designed, for whom they were designed, how they function in different sociocultural contexts, and why we built them in the first place.
In the end, all of these issues can begin to be addressed today, particularly at the grassroots level through education initiatives. Innovation hubs, impact schools and campuses, and the general ability to think ethically in a formal and rigorous way provide thinkers and activists with the tools needed to meet these issues head on.