Future Society

The Ex Machina World: A Journey into the Philosophy of Automation

3 March 2020 | Written by Thomas Ducato

Exponential technologies are here to stay; it is therefore essential to think about them philosophically. We discussed these issues with techno-cultural philosopher Cosimo Accoto.

According to the World Economic Forum, by 2025 machines’ working hours will exceed humans’. The human-machine relationship needs deep, shared reflection that must begin well before a robot or any new technological object reaches the public. Philosophy therefore finds new lifeblood in asking how to manage the change underway from a cultural and societal point of view, and beyond. In a world increasingly managed and organized by machines, what role does humanity play?

Our journey into the ethics of new technologies continues. After listening to Piercosma Bisconti and Steven Umbrello, we interviewed Cosimo Accoto, a philosopher by training, author and culture innovation advisor, and research affiliate at MIT (Boston). Accoto focuses his current techno-cultural research on software theory, data science, artificial intelligence, platform thinking and blockchain technology. An MIT Connection Science fellow, Accoto is also the author of several essays, including “Il mondo ex machina. Cinque brevi lezioni di filosofia dell’automazione” (Egea 2019, with a contribution by Prof. Alex ‘Sandy’ Pentland); “In data time and tide” (translated by Prof. Derrick De Kerckhove, Bocconi University Press 2018); and, most recently, “Mani, menti, mercati. Automazione e ominazione tra machine experience and machine economy” (in “Il primato delle tecnologie”, edited by Carlo Bordoni, Mimesis 2020).


The great changes taking place confront us with many questions. What role does philosophy play in this context?

Digital, artificial, synthetic. In a word, “programmable”. The world has embarked on a profound and irreversible transformation of its ontological and ontogenetic foundations. It is a transformation that is beginning to be widely “enacted” but, at the same time, remains poorly “thought”. Yet we are at a moment of epochal passage in the history of the human species, a passage that remains, to date, substantially overlooked and normally confined to the practice and research of specialists and experts. Sciences and technologies with significant environmental and social impact (such as quantum computing, synthetic biology, artificial intelligence, monetary cryptography, social robotics) are leaving the design and experimentation labs, so far controlled and limited to a certain extent, to spread ever more widely on a planetary scale. All this technology transfer is happening today with little specific attention and no systemic awareness on the part of society. Given this shift “from science to engineering” (and, progressively, from labs to markets), I believe it is crucial to construct and activate a renewed philosophical thought capable of facing the scientific and technological challenges to come: a speculative and prospective thought, open but attentive, capable of studying and accompanying, with the necessary theoretical density, this new phase of the Anthropocene, an era of positive potentialities but also of risky vulnerabilities. Now is the time to imagine a “phil-tech” manifesto supporting our collective need to think philosophically about technology. If philosophy is “its own time apprehended in thoughts” (as Hegel argued), we must return to thinking about our technological world, above all by testing the current limits of our speculative grasp of it.


Philosophy has already questioned the role of machines in society. What are the issues, old and new, to reflect on in the present?

Present and future automation emerges at the intersection of three engineering strata: it is mechanical, it is algorithmic, it is protocological. Or, with another and parallel qualification, it is machinic, it is computational, it is institutional. Together, these three dimensions found and mobilize the new “machine economy”. Through the physical and sensorimotor operation of automata, the computational and cognitive capacity of algorithms, and the catallactic and institutive dynamism of protocols, this new “economy of the machine” starts to create, conserve and circulate new digital value in surprising, neo-automated modes. We must certainly take advantage of the intelligence that culture has so far produced around technology, specifically around machine theory and, more generally, around the philosophy of engineering. On this, see Carl Mitcham’s book “Steps Toward a Philosophy of Engineering: Historico-Philosophical and Critical Essays” (2019). I think it is necessary to philosophically deepen the technicalities of our present: starting from the philosophical analysis of software code (i.e. “software studies”), the study of data cultures and algorithmic practices, the examination of the protocols and foundational cryptographic primitives behind platforms and infrastructural stacks, and the investigation of the new ontologies and metaphysics of the artificial, the synthetic and the quantum. Without forgetting to include all this in a new, philosophically grounded reflection on society, ethics, law and politics: from machine experience to machine economy, ethics and politics. In so doing, we can imagine and build a better world.


Artificial intelligence, robotics, blockchain: what challenges should we prepare to face?

Automation is a planetary-scale challenge, both speculative and operational. The philosopher Benjamin Bratton has wonderfully clarified this recently. If you enjoyed his full-bodied “The Stack”, cited in my essay “Il mondo dato” (2017), the same will be true of his new, shorter volume, “The Terraforming” (2019). As I wrote in my “Il mondo ex machina” (2019), automation is to be read also, and above all, culturally and with an eco-planetary perspective. In the “Automation As Ecology” chapter, Bratton sharply writes: “Accordingly, we define automation not just as the synthetic transference of natural human agency into external technical systems, but as the condition by which action and abstraction are codified into complex adaptive relays through living bodies and non-living media. It is both a direct physical ripple and an association of semiotic signaling with its reception; it includes language as well as mechanical information storage and communication. This more ecological conception of automation is one of the conditions revealed by the contemporary intensification of artificial algorithmic intelligence today. It speaks to the already entangled condition of our species, agency, industry, and cultural dramas more than it does to the contemporary concern of proper humans being improperly replaced by machines”. From my perspective, many institutions and markets still cannot see the challenge, namely that these automations (mechanical, algorithmic, catallactic) are already crossing and intersecting in unexpected ways. At the frontier, in December 2019, a pioneering conference on blockchain and robotics was held at the MIT Media Lab: researchers and entrepreneurs envisioned new tech solutions and business models combining decentralized protocols, artificial intelligence and swarm robotics. A great example of “imagination in action”.


In particular, deep learning confronts us with a new dimension of automation, one that concerns not only action but also the development of a form of reasoning. How does our approach to the cognitive field change?

The changes taking place will be profound, above all because today more and more artificial agents “experience” reality through data, algorithms, protocols and infostructures. The machines we have so far considered simple things or inanimate tools now sense and respond to the world in their own way. Let’s take an example. To travel the road without a driver, a self-driving car must be able to understand the context in which it moves. The machine must be able to identify routes and buildings, independently recognize other cars and avoid pedestrians. And it must not only recognize the context, but also make decisions precisely. Before the recent successes of artificial neural networks and deep learning, cars were not capable of this autonomy of action on the street. Historically, we are therefore eroding the “solipsism” of machines: first, in software programs, moving from ‘batch’ to ‘interrupt’, and then, with learning algorithms, moving from ‘knowledge’ to ‘learning’. Even with many limitations, artificial neural networks therefore explore the “knowability” space of the world through data in a massive way, and in this exploratory activity they are also able to identify “creative” solutions to problems. Operationally, we are talking about transforming data into a vectorized geometric space, optimizing the search for “fitting” functions, and adjusting weights and biases through “back-propagation” (back-prop) of the error. Of course, all this is not enough, as the recent debate between Bengio and Marcus has shown. For the future, many imagine a closer hybridization of logical-symbolic and empirical-neural approaches to simulate, in the “thinking” of machines, common sense, causal inference and much more.
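
To make that operational description concrete, here is a minimal sketch, not taken from Accoto's book, of the loop he summarizes: toy data embedded as vectors in a geometric space, a small parametrized network searching for a “fitting” function, and weights and biases adjusted by back-propagating the error. The XOR example, the network size and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, already "vectorized" as points in a 2-D geometric space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer network: weights W and biases b are the adjustable parameters.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(10000):
    # Forward pass: the current candidate "fitting" function applied to the data.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error between prediction and target (mean-squared-error style).
    err = out - y

    # Backward pass: propagate the error, then adjust weights and biases.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # predictions should move toward [0, 1, 1, 0]

Nothing in this loop is specific to XOR: scaling up the data, the layers and the optimizer yields, in essence, the "massive exploration of the knowability space" described above.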


Many technology companies have realized the importance of critical and philosophical thinking in technological development. Are we facing a “golden age” for philosophy and philosophers?

Philosophy must return to deal with the so-called “technicalities” which are today, more and more, the foundation and activation of the real. It seems to me that yes, there is a growing interest among companies and businesses in critical and philosophical thinking. And not just for the social, political and ethical issues that technological developments imply. And not just for artificial intelligence. Think about the blockchain. For me, the current focus on the engineering mechanics of blockchain technology risks obscuring the foundational philosophical and institutional dimensions of what a “ledger” is. Historically, the ledger in its most abstract form is a social memory technology of the state of the world at a given moment. That is, it is an institutional device, a new institution after markets and firms (see Berg, Davidson, Potts: “Understanding the Blockchain Economy”). It therefore has an “epistemic” function, in that it is capable of preserving, at any given moment, the “truth” of the state of entities and relationships over time: for example, the fact of whether or not I own a certain amount of digital money (a “UTXO”, or “unspent transaction output”, in the technical language of Bitcoin). But it also has an “institutional” and fiduciary function, since it is from that state of knowledge and truth certified by the ledger that it becomes possible to activate future changes and exchanges in the state of affairs. We can also think philosophically about the cloud and other atmospheric metaphors of computation. Today, cloud computing, fog computing and edge computing map the places of computation, its locations: in the cloud, in the fog or at the edges of the world (IoT nodes). But these “locations” of computation are also, and above all, different “configurations” of computation. Philosophically, the cloud is not just a “place” of computation (a mode of being in the world); above all, it is a “way” of computation (a mode of being of the world). Thus, philosophy urges the exercise of strategic thinking.
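
As a concrete illustration of those two functions, here is a minimal, hypothetical sketch, not Bitcoin's actual implementation, of a UTXO-style ledger: querying it answers the “epistemic” question of who owns what at this moment, while a spend is only admitted if it is consistent with that certified state, which is the “institutional” function. All class and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class UTXO:
    txid: str      # transaction that created this output
    index: int     # position of the output within that transaction
    owner: str     # simplified stand-in for a locking script / address
    amount: int    # value in the smallest unit (e.g. satoshis)

class Ledger:
    def __init__(self):
        # The UTXO set: the certified state of the world at this moment.
        self.utxos = {}

    def add(self, utxo: UTXO):
        self.utxos[(utxo.txid, utxo.index)] = utxo

    def balance_of(self, owner: str) -> int:
        # Epistemic function: what does the ledger say is true right now?
        return sum(u.amount for u in self.utxos.values() if u.owner == owner)

    def spend(self, inputs, outputs, signer: str):
        # Institutional function: a change of state is only admitted if it is
        # consistent with the currently certified state.
        spent = [self.utxos[key] for key in inputs]        # inputs must exist (unspent)
        assert all(u.owner == signer for u in spent)       # signer must own them
        assert sum(u.amount for u in spent) >= sum(o.amount for o in outputs)
        for key in inputs:                                 # consume the inputs
            del self.utxos[key]
        for out in outputs:                                # record the new outputs
            self.add(out)

# Usage: the ledger certifies that alice holds 50, then records a transfer.
ledger = Ledger()
ledger.add(UTXO("coinbase", 0, "alice", 50))
print(ledger.balance_of("alice"))                          # 50
ledger.spend([("coinbase", 0)],
             [UTXO("tx1", 0, "bob", 30), UTXO("tx1", 1, "alice", 20)],
             signer="alice")
print(ledger.balance_of("alice"), ledger.balance_of("bob"))  # 20 30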


One of the fundamental aspects is the ethical issue. What are the thorniest issues? Is it possible to create a common ethics that takes personal and cultural differences into account?

The philosophical orientation has the great advantage of helping to clarify and illuminate the principles and foundations of technological development, to question and criticize assumptions and applicative prejudices, and to build new governance models and ethical guidelines capable of eliminating or mitigating, for example, algorithmic discrimination or the automation of inequality in decision-making processes. It is also in this perspective that, I believe, the new human condition in emergence (in the dual sense of novelty and vulnerability) should be explored and analyzed, to understand if, how and why it is a condition said to be “augmented” by neo-automation, or rather endangered by it. In fact, we face scenarios full of opportunities for the construction of a new, automated and potentially more prosperous world, but also harbingers of real problems and risks of amplifying or creating new inequalities. There are two main horizons of vulnerability: “privacy” (past/present protection: intimacy, security …) and “destiny” (present/future protention: freedom, autonomy …). Well, by now we know this. And researchers and institutions are actually working on it. The current debate, focused only on risks and fears, should be moved forward. Experimental solutions are being studied in laboratories and research institutes. In general, there is great ferment and a growing push for solutions: committees and institutional bodies, regulatory documents and guidelines, ethical assessments for technologies, practices and approaches for open and certified algorithms, improvements in the quality of input data, verified algorithms to ensure greater equity and freedom, new tools to make AI explainable beyond black boxes, and so on. Yes, tough stuff, but a debate stuck only on problems is of little use today and is far from the state of the art. For sure, we will design and negotiate interoperable socio-ethical solutions that we will adjust over time.


In a world where automation is central, what will typically and exclusively remain human?

The economy of the machine, as we are beginning to define it, is developing and will produce new automatisms. In January 2020, the book “Competing in the Age of AI” by Iansiti and Lakhani documented, with cases and stories (China’s Ant Financial, for example), this perspective of largely automated value creation and capture. I have already mentioned the pioneering conference on possible crossings between robotics and blockchain, imagining smart contracts that manage drones as “robot-as-a-service”. However, we must be clear on one point. What we call “human” is also the complex result of the technologies that have historically been developed. The human “is” not; the human “becomes”. The human is also the creature of its automatisms (biological, cultural, social, historical). Ultimately, technology is the human way of being in the world, from the time we chipped stones to make knives and tools to today’s IT architectures for communicating and interacting. My view is that it is not possible to separate technology and humanity, as if there were a human separable from the scientific, artistic, professional and cultural tools we have historically used. The contemporary human will be born in and from the digital, networked, artificial, algorithmic, synthetic and quantum technologies we have been designing in recent years. As Topol says in his “Deep Medicine”, the doctor’s work is already dehumanized today, not because machines are here or on their way, but because it is crushed by routine, inefficiency and bureaucracy. Automation could free up doctors’ time to devote empathically and humanely to patients. But it is up to us to steer the development of automation in this direction, because it will not happen automatically.

Thomas Ducato

A graduate in Publishing and Journalism from the University of Verona and a journalist since 2014, he handles press and communication activities for Impactscool, also curating the blog’s content and its dissemination and sharing.
