CAN ARTIFICIAL INTELLIGENCE ENHANCE HR?

“It’s like racing down the highway without a seat belt”

Artificial intelligence (AI) is increasingly finding its way into the workplace. In HR too, smart systems are being used to streamline human resources. But are these systems always as smart as we assume? Mieke De Ketelaere, director of AI at the imec research centre (IDLab), certainly questions a number of fully automated applications.

“AI was developed for DDDD tasks: dangerous, dull, dirty, and difficult. It can make a tremendous difference in that regard. In this context, AI can learn from data and decide and act independently. However, human resources also requires interpersonal skills. HR decisions require ‘a human in the loop’, a person who makes the decisions.”

Dirty, dull, dangerous and difficult

When this human factor remains present, De Ketelaere does see a great deal of possibility. “AI can perfectly take over DDDD tasks from employees. Filling out timesheets, managing an agenda… All of this is perfectly possible. It can also be used for retention policy. AI can provide a more complete picture of your employees, what they need and where the bottlenecks are. Via tiny radars, it can measure stress and fatigue in your employees, often more accurately than they can themselves. That provides honest and valuable information, which you do not always get from your employees directly.” Are we not venturing onto a slippery slope, we wonder. Is it not risky for employees that their employer has this information? “It is certainly better than getting sick from stress,” says De Ketelaere. “Monitoring stress allows for remedial action.”

 

“HR decisions require ‘a human in the loop’, a person who makes the decisions.”

 

Using AI to screen CVs and filter valuable candidates from incoming applications is not such an obvious application, according to De Ketelaere. “A data scientist abroad built an application for a Belgian company that, based on IP addresses, unknowingly gave applicants from Brasschaat priority over those from Schaerbeek, because in the past those applicants had proved a more popular selection and the system had learned this. AI builds on the data and decisions of the past, and thus perpetuates ‘bias’ or prejudices. Even if you try to use AI to pursue a more objective or diverse policy, it is still not easy to eliminate bias completely. Anonymising data, or leaving out certain information such as age and gender during the system’s training period, is not enough. Bias is often very subtly encoded in innocent-looking fields, such as an email address that inadvertently says something about age. Systems trained abroad on Belgian data – which happens frequently, since data scientists are hard to find domestically – are particularly problematic. They are unaware of our local context and sensibilities.”

AI can draw on vast stores of data from CVs and evaluations for HR applications, but HR departments often look elsewhere for data as well. “I know of AI systems that also take data from the Facebook and LinkedIn accounts of applicants, in search of additional information to complete the picture. I have also seen systems that interpret emotions from job application videos. For example, applicants with red spots on their face during the interview were judged less suitable. Strange, because a human recruiter would see and understand the broader context. People use empathic skills in their assessments; AI cannot, so it does not necessarily make the right choice,” explains De Ketelaere. “For those who are somewhat up to date with AI, it is also relatively easy to use the right words in a CV in order to be selected. But will the company actually be hiring ‘me’? Or a version of myself pretending to be someone else?”

 

“What exactly is stored and tracked is usually very unclear. In terms of GDPR and privacy, there are certainly meaningful questions to be posed.”

Digital assistants

A number of companies are also experimenting with digital assistants for staff. They coach employees to achieve better results. “That is starting to look like Big Brother,” a concerned De Ketelaere adds. “It robotises people. It draws all the creativity and strategy out of a job. In addition, this assistant will use data derived from your behaviour to compare you against others. What exactly is stored and tracked is usually very unclear. In terms of GDPR and privacy, there are certainly meaningful questions to be posed.”

According to De Ketelaere, transparency on the use of AI is essential. “AI is still too opaque as a technology. There is hardly any legal framework. And an ethical framework is also often lacking. I like to compare it to driving in the 1970s, when seat belts and a driver’s license were not yet mandatory. AI is like sitting in the passenger seat of a car racing down the highway without a seat belt. One can only hope that the driver does everything right and that nothing goes wrong.”

Improvements are on the way, however. The EU is working on a legal framework. What De Ketelaere has seen of this framework so far is still based too much on applications of the past and focuses too little on the current landscape or the future. She would like to help redirect this. Companies are also gradually building ethical frameworks. “Germany is a bit ahead of the curve,” she states. “External parties are also involved in discussions, meaning that profit is not the only thing taken into account. That is a positive evolution.”

Employees are increasingly confronted with AI applications. From within the union, Vic Van Kerrebroeck follows developments in the insurance sector. “We cannot let this pass us by. AI is coming. We certainly do not deny that it represents added value, including for employees. However, we do want transparency about what is happening.

“A 1983 collective labour agreement states that employees and their representatives must be informed about new technological applications. The GDPR regulations also clearly state that employees must know what data is being recorded and stored. So we definitely have a role to play there.”

“We want to continue working with employers on clear ethical frameworks. We do not always have to invent them ourselves; some countries are a bit more advanced in this area. In the financial sector in Singapore, for example, there are already concrete agreements on an ethically acceptable application of AI. In Europe too, a number of initial general agreements on AI applications in HR were reached with the social partners from the insurance sector a few months ago. Let us continue down this path and support workers in other sectors as well.”

Author: Jan Deceunynck  |  Image: Shutterstock