Robotics and AI

Beyond Technology: preparing people for success in the Era of AI

5 June 2019 | Written by Thomas Ducato

A study carried out by Dale Carnegie captures the relationship between artificial intelligence and the job market, offering numerous points of reflection on the future of this technology

Artificial intelligence is entering the lives of citizens and business processes. But how do people react to AI's progress? To answer this question, Dale Carnegie, a reference point in the field of corporate and individual training, carried out a study to map the situation. The research analysed the attitudes and expectations of workers from different sectors and from companies of different sizes, involving over 3,500 people, from CEOs to external collaborators, across 11 countries, including Italy.

Optimism, but with reservations. Although at first glance optimism towards artificial intelligence seems to prevail, the data tell of a far more complex situation. If, on the one hand, 44% of respondents agree that artificial intelligence will have positive impacts, on the other hand almost two-thirds are worried about losing their jobs in the near future due to technological advances. For others, the revolution has already begun: 23% of respondents said that artificial intelligence and automation are already influencing their roles, and another 44% expect this to happen in the next 1-5 years.

Trust in company leadership, transparency and confidence in one's own abilities. The study identified three elements that would help employees feel more positive about AI: trust in company leadership; transparency, meaning a clear understanding of what artificial intelligence does; and confidence in one's own ability to adapt to the changes AI brings. On this last point, in particular, soft skills such as creativity, communication, critical thinking and teamwork will play a fundamental role.

We asked Mark Marone, director of the research institute at Dale Carnegie Training, to comment on the results of this research.

 

How was this study born?

At Dale Carnegie Training, we conduct research into topics that are important to our customers, and AI is something that many of them are now working on: close to a third of respondents in our recent survey said their company is already using AI to some extent. According to a study by PwC, it will have a $15.7 trillion economic impact on the worldwide economy between now and 2030. As we’ve talked with our partners in Dale Carnegie Training around the world, we’ve heard it more often in the last couple of years. In fact, our CEO, Joe Hart, spoke on the topic in Japan in 2018 at the invitation of the American Chamber of Commerce there.  Obviously, at Dale Carnegie Training, our focus isn’t on the technology behind AI, but on the human dynamics that will influence its successful implementation.

 

Europe and Scandinavian countries seem to have less awareness about the impact that artificial intelligence will have in the next few years. At the same time, they seem to be less open to this technology. Why, in your opinion?

While awareness isn’t the only factor that influences people’s openness to AI, awareness of and openness to AI are highly correlated. It’s easier for humans to trust something when they understand it. The more familiar people are with technology, the more positive their expectations are for what it will do for their lives. That’s one of the key observations we made from the data we’ve collected, and it suggests that leaders should make it a priority to educate their employees about what they intend to use AI for and how it will work.

 

The survey clearly shows that a deeper comprehension of AI technology is necessary to coexist with AI itself. How can organizations achieve this?

People don’t expect to understand every technical detail. They just want to be sure that AI is delivering decisions that are fair and can be explained. That doesn’t mean companies need to reveal the source code for every algorithm or avoid using deep machine learning. It does mean that AI experts should consider taking extra steps to explain the relationship between inputs and outcomes and the factors driving AI decisions. This becomes critical when AI is being used to make decisions that impact people directly, such as screening job candidates or evaluating productivity and performance.
The other key will be communicating those decisions. People may respond differently to a decision they know is made by a machine versus one made by a human being. The more sensitive the decision, the more important it will be for the people sharing it to be trusted and skilled communicators.
So, even as AI makes more business decisions for us based on data, human leaders will need to evaluate the appropriateness of implementing those decisions, and to communicate them in a clear, empathetic, and convincing way.

The ability to adapt to this “revolution” would make it easier to accept AI technology. What is the role of continuous training in this context? How important will it be to develop soft skills, including through dedicated programmes?

If people feel confident that they will be able to develop the skills they need to adapt to new roles, organizations have a better chance of maintaining their engagement. That’s crucial, because machines can’t succeed on their own. Humans and machines will need to work together. Machines already surpass humans in performing many routine tasks, and people sense that. Sixty-eight percent of respondents in our survey said that getting additional training would be very or extremely important to avoid losing their job given advancements in AI in the workplace. In the foreseeable future, humans will still be needed to manage the technology itself, but also in non-routine situations and for tasks that require high levels of social and creative intelligence. Knowing that, companies should consider strengthening their employees’ creativity, critical thinking and social skills, and that was confirmed in our survey. More than 7 in 10 respondents felt that soft skills, rather than hard skills (such as science, math, technology and engineering), are what will be needed to stay relevant. Companies that help workers develop these skills have the opportunity to build both loyalty and capability in their workforce.

 

How much does the issue of privacy affect the lack of trust in AI technology?

AI brings a special set of concerns when it comes to trust. Privacy is one of them. In our survey, 63% of respondents are at least moderately worried about privacy issues, and 67% are worried about cyber security issues. It’s easy to imagine how privacy and security concerns might occur, for instance, in companies using AI to predict turnover and engagement through natural language processing of the content of employees’ social media, text messages, and emails. Employees have to trust that the information within their communications won’t be used inappropriately. Companies using AI to personalize an employee or customer experience must also be aware of how their employees and customers think about privacy issues. While people typically appreciate – and are even coming to expect – personalization, they also demand their data be kept safe. Data breaches, which have hit so many companies already, require delicate handling and often have lingering negative impacts and perceptions about an organization’s trustworthiness.

 

How are trust in the organization and its leadership, on the one hand, and trust in artificial intelligence, on the other, related?

Trust is the central issue, and it’s more complicated than it may first appear. In fact, we found that employees’ trust in their senior leadership to make the right decisions regarding implementation of AI has an inverse relationship with the respondent’s position in the organizational hierarchy. Only about a quarter of individual contributors (those with no direct reports) say they have a high level of trust in their leadership, as compared with just under half of managers. Those at the director level are far more likely to trust senior leaders, revealing a potential disconnect between leadership and the rest of the workforce: Senior leaders may be completely unaware when there is a trust issue. If there is an issue with trust in an organization, it’s likely that implementing AI (or any other strategic initiative that is perceived as threatening to employees) carries additional risk of failure. That means organizations are well-advised to assess the existing level of trust and make building and maintaining trust a priority.

Thomas Ducato

A graduate in Publishing and Journalism from the University of Verona and a journalist since 2014, he handles press and communication activities for Impactscool, as well as curating, publishing and sharing the blog's content.
