A machine that can recognize emotions
13 May 2019 | Written by the editorial staff
Facial recognition: many uses and just as many problems. We talked about it with Emoj startup CEO Luca Giraldi, whom we met during the first AI Forum.
On screen, one’s face is overlaid with a series of points interconnected by lines that follow our movements and expressions. This is the algorithm developed by Emoj in action, an algorithm capable of recognizing emotions: the machine uses those points to trace the face, and the variations in distance and position between them correspond to different emotions. It is an innovative application of facial recognition, an increasingly widespread technology used in many areas, from unlocking smartphones, emoji, and animated Instagram filters, to more controversial approaches such as its use in China to track the movements and actions of citizens.
This technology relies on the ability of artificial intelligence to recognize patterns, in this case faces and their fundamental features: eyes, mouth, nose. Facial recognition algorithms are trained to identify these shapes across thousands and thousands of face images, becoming more and more accurate.
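The idea of mapping distances between facial landmarks to expressions can be sketched in a few lines. Everything below is illustrative, not Emoj's actual model: the landmark indices, the choice of features, and the coordinates are all assumptions, and real systems track many more points. The key trick shown is normalizing by the inter-eye distance so the features do not depend on how far the face is from the camera.

```python
import math

# Hypothetical landmark indices (assumed for illustration):
# each face is a list of (x, y) points.
LANDMARKS = {
    "left_eye": 0, "right_eye": 1,
    "mouth_left": 2, "mouth_right": 3,
    "mouth_top": 4, "mouth_bottom": 5,
}

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def expression_features(points):
    """Turn raw landmark coordinates into scale-invariant ratios.

    Dividing by the inter-eye distance makes the features independent
    of the face's size in the frame.
    """
    eye_span = dist(points[LANDMARKS["left_eye"]], points[LANDMARKS["right_eye"]])
    mouth_width = dist(points[LANDMARKS["mouth_left"]], points[LANDMARKS["mouth_right"]])
    mouth_open = dist(points[LANDMARKS["mouth_top"]], points[LANDMARKS["mouth_bottom"]])
    return {
        "mouth_width_ratio": mouth_width / eye_span,
        "mouth_open_ratio": mouth_open / eye_span,
    }

# Invented coordinates: a wider mouth relative to the eyes hints at a smile.
neutral = [(30, 30), (70, 30), (40, 70), (60, 70), (50, 68), (50, 72)]
smiling = [(30, 30), (70, 30), (33, 70), (67, 70), (50, 66), (50, 74)]
print(expression_features(neutral)["mouth_width_ratio"])  # 0.5
print(expression_features(smiling)["mouth_width_ratio"])  # 0.85
```

A real classifier would feed many such ratios, tracked frame by frame, into a trained model rather than reading them off directly.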
It is a powerful technology. On one hand, it carries ethical doubts: the anonymity of the face disappears, anyone becomes identifiable by a camera, and privacy is called into question. On the other hand, it can serve as a marketing tool and as support for various activities. This is the case of Emoj, an Italian start-up working in the field of artificial intelligence. They do something more than recognize faces: they recognize emotions.
We exchanged a few words with the startup’s CEO, Luca Giraldi, to understand how this technology works, its uses and the risks it poses.
How was this project born, and what is the technology behind it?
Emoj is a university spin-off; the project was born about four years ago inside the Marche Polytechnic University. We are a group of researchers who wanted to put enabling technologies for customer experience into practice, so we analyzed the state of the art as it stood about four years ago, and from there we started to develop algorithms for customer experience. We took a camera and with it we were able to identify sex and age; from there we went further, on to emotions, through the user’s gaze at a monitor or in an app.
To develop such technology you must have needed a lot of information to train your algorithm.
Exactly. The algorithm itself is empty at the beginning, so the initial effort was precisely to insert image datasets and catalog them. When we reached a sufficient amount of data, we allowed the artificial intelligence to learn from what it saw through the camera, and today we are still training it and combining image datasets to increase accuracy and produce more precise results.
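The loop described here, catalog labeled examples, then let the model learn from them, can be sketched with a deliberately tiny stand-in for the real system. A nearest-centroid classifier replaces whatever network Emoj actually uses, and the feature vectors and labels below are invented for illustration.

```python
from statistics import mean

def train(dataset):
    """dataset: list of (feature_vector, emotion_label) pairs.

    The "model" here is simply the average feature vector per emotion,
    standing in for a real trained network.
    """
    by_label = {}
    for features, label in dataset:
        by_label.setdefault(label, []).append(features)
    return {label: [mean(col) for col in zip(*vecs)]
            for label, vecs in by_label.items()}

def predict(model, features):
    """Label a new face by its closest emotion centroid."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], features))

# Catalogued examples: [mouth_width_ratio, mouth_open_ratio] -> label
dataset = [
    ([0.50, 0.10], "neutral"),
    ([0.52, 0.08], "neutral"),
    ([0.85, 0.20], "happy"),
    ([0.80, 0.25], "happy"),
]
model = train(dataset)
print(predict(model, [0.82, 0.22]))  # happy
```

The interview's point about accuracy maps directly onto this sketch: adding more catalogued examples per emotion moves each centroid closer to the true average expression, which is why acquiring and labeling new datasets keeps improving the model.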
Are there any expressions that are more easily recognized?
Surely the smile is the easiest expression to decipher. Disgust and anger, on the other hand, are, to date, a little more difficult to recognize with high accuracy. We are working on it: we are continuously acquiring new datasets and cataloging them so that the algorithm grows in real time with real expressions.
There are many applications in many areas; one immediately thinks of marketing. Have you come across particular contexts?
Yes, in the context of art: just this summer we will be doing a project with the Sferisterio in Macerata. In that case, we will map audience satisfaction throughout the show, so that the director can understand which moment or act aroused the most emotion, or no emotion at all, in the audience.
From the point of view of data and privacy, how is this data managed?
As far as data storage is concerned, we do not store the images, so we are GDPR compliant. In other areas, such as industry, the data reside with the company that commissions the project and are used for predictive maintenance or similar purposes; so, depending on the project, we decide how and where to keep these data.
In China we have seen that a similar system for recognizing expressions could be used in schools to verify children’s level of attention. In your opinion, where is the ethical limit that should not be crossed for a technology like this?
I’ll give you an example my marketing professor always used: a doctor with a scalpel can use it to save lives or to injure people. We develop an enabling technology; it is then the company that uses it that decides if and how to treat it ethically.
Can you imagine unethical use of this technology?
I don’t have to imagine it. Someone has already asked for it, not in Italy: they asked us to analyze people in order to steer political choices. We refused; from my point of view, this type of activity is not ethical.