Flying is safe. And when AI helps pilots, it will be even safer

9 August 2018 | Written by Stefano Tenedini

The leading cause of airplane disasters is the human factor. A partnership between crew and computer would make it possible to interpret instrument data better and correct anomalies well before problems grow critical.

Flying is safe. Everyone knows that, even those who start to sweat as soon as the airplane doors close. Statistics confirm that, compared with cars and bicycles, and obviously motorcycles too, the risk of accidents and the number of victims is far lower. But aviation has a mantra that everyone honors and serves: safety first. And many spin-offs of aeronautical technology come from the drive to reduce the residual risk ever further. This is the ideal scenario in which to “enlist” artificial intelligence.

In flight there is never a single cause of an accident, but a chain of errors that leads to a more or less serious event. The first task is to identify the weakest link in that chain, and then remove it. And the leading cause of a plane crash is almost invariably the human factor. Not only the pilot, of course: the fault can hide in ground services, materials, communications, or inaccurate assessments.

The fact is that human beings are naturally prone to error. Most of the time the problem is easily fixed, but even a marginal event can be enough to cause a tragedy. The human body itself is a “magnet” for errors. Take spatial disorientation, which explains the mystery of airplanes flying into the ground during a routine flight with no malfunctions, or the misidentification of reference points. Or consider navigation assistance, which could draw concrete benefits from opening up to augmented reality.

For some years now, one of the solutions proposed by engineers and technicians has been to ground the pilots and design airplanes flown by completely autonomous systems, with no one physically at the controls. The technology is actually available today, thanks to the progress already made with drones: but would passengers be happy to board an unmanned plane? Not to mention the pilots themselves, who would certainly not want to go the way of the dinosaurs…

An alternative to the choice between crew and robot exists, and it combines the best of both. The challenge is not to leave the pilot on the ground, but to remove the pilot’s mistakes for a safer flight. AI does not make the mistakes typical of humans: if a pilot flies an intelligent aircraft that independently manages part of the controls and routines, safety increases dramatically, to the delight of airlines and passengers.

Today, sensors and instruments present data and information that the pilot can see, extract and use. It is at this stage that errors can creep in: in reading the data, in interpreting it, in deciding which actions to take. With the technology already on board, every aircraft knows exactly where it is at every moment of the flight: attitude on the three axes, speed, altitude and time. The problem, then, is ours: what do we do with that data? Pilot and AI could work as a team, comparing what the plane is doing with what it should be doing, and reacting to the deviations.
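To make the idea concrete, here is a minimal sketch in Python of the kind of deviation monitor such a partnership implies: it compares the measured flight state against the expected one from the flight plan and raises an advisory whenever a parameter drifts out of tolerance. The state fields, tolerance values and messages are illustrative assumptions, not a description of any real avionics system.

```python
from dataclasses import dataclass

@dataclass
class FlightState:
    altitude_ft: float   # barometric altitude
    speed_kt: float      # indicated airspeed
    heading_deg: float   # magnetic heading

# Illustrative tolerances; a real system would derive these from
# certified performance limits, not from constants in the code.
TOLERANCES = {"altitude_ft": 200.0, "speed_kt": 10.0, "heading_deg": 5.0}

def check_deviations(expected: FlightState, measured: FlightState) -> list[str]:
    """Compare what the plane is doing with what it should be doing
    and return an advisory for every out-of-tolerance parameter.
    (Heading wrap-around at 360 degrees is ignored for brevity.)"""
    advisories = []
    for field, tolerance in TOLERANCES.items():
        delta = getattr(measured, field) - getattr(expected, field)
        if abs(delta) > tolerance:
            advisories.append(f"{field} off by {delta:+.1f} (tolerance ±{tolerance:.0f})")
    return advisories

# Example: the aircraft has drifted 300 ft below its cleared altitude.
expected = FlightState(altitude_ft=35000, speed_kt=450, heading_deg=270)
measured = FlightState(altitude_ft=34700, speed_kt=452, heading_deg=271)
for advisory in check_deviations(expected, measured):
    print("ADVISORY:", advisory)  # flagging is automatic; reacting stays with the pilot
```

The design choice mirrors the division of roles described above: the software flags the deviation, while the decision on how to react stays with the pilot.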

In short, not a mutiny but a division of roles, with the aircraft becoming a partner in control, flight management and decision-making. Naturally, the usual ethical dilemma opens up here: who is in charge? The partnership is fine, but what happens when opinions differ? In the event of a conflict, does the pilot always have the final say, or can the system “unplug” the pilot? The case is far from hypothetical: there is already a “command authority override” procedure by which the first officer can relieve the captain of command.

And what if the human were out of the game? Think, for example, of a hijacking: here the AI could intervene, communicate with controllers without the hijackers knowing, and, if necessary, receive instructions from the ground, take the initiative, change course, even land on its own. But even without invoking extreme scenarios, self-monitoring makes it possible to recognize errors at an early stage, advise the pilot and propose corrections in good time. It is a human-machine collaboration that giants of the sky such as Boeing are watching with growing confidence.
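As a toy illustration of recognizing errors at an early stage, the sketch below (in the same hypothetical setting as the previous example) watches the trend of a parameter rather than its instantaneous value: if the altitude error is still within tolerance but drifting steadily toward the limit, it warns the pilot and proposes acting before the deviation becomes critical. The thresholds and sampling rate are invented for the example.

```python
def early_warning(errors_ft, tolerance_ft=200.0, horizon_s=60, sample_s=5):
    """Given recent altitude-error samples (feet, one every sample_s seconds),
    estimate the drift rate and warn if the error, though still in tolerance,
    is on course to exceed it within horizon_s seconds."""
    if len(errors_ft) < 2:
        return None
    # Average drift per second over the recorded window.
    rate = (errors_ft[-1] - errors_ft[0]) / (sample_s * (len(errors_ft) - 1))
    projected = errors_ft[-1] + rate * horizon_s
    if abs(errors_ft[-1]) <= tolerance_ft < abs(projected):
        return (f"Drifting {rate:+.1f} ft/s; projected error {projected:+.0f} ft "
                f"within {horizon_s}s. Suggest correcting now.")
    return None

# The error is only -120 ft now, but trending down at 4 ft/s: warn early.
samples = [-20.0, -40.0, -60.0, -80.0, -100.0, -120.0]
print(early_warning(samples))
```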

Stefano Tenedini

Contributor

Journalist and correspondent for newspapers and magazines, with experience in press offices and external relations in finance and at Confindustria. Today he handles communication for businesses large and small, professionals and start-ups. As a trainer, he helps entrepreneurs and managers communicate better with the media and the markets.
