MAKE ARTIFICIAL INTELLIGENCE AN ADVANTAGE FOR EVERYONE
In recent months, ChatGPT has reignited the fear of job losses due to automation. This time, newspapers are filled with articles about a job market downturn for highly skilled employees and creative jobs. But things are rarely as bad as they seem. “Change can be good or bad. But if you try to ignore change out of fear, things are more likely to turn out bad,” Rob Heyman of the Flemish Knowledge Centre Data & Society explains.
“A machine that can write its own texts spells the end of all writing tasks. At least, that’s how everyone feels about this change,” Heyman says. “But if you look at the issue in greater detail, that is not the case. Artificial intelligence needs instructions. You need to explain to ChatGPT what you want the article to look like and what it should be about. And even then, the text is often not right the first time, so you need to indicate what needs to be rewritten. The work has not gone away, but a tool has been added. And you will need certain skills to guide the AI.”
A major difference from previous waves of digitisation is that this time the jobs of highly skilled employees are also under threat.
Heyman: “Indeed, this wave includes text and pattern recognition. It may sound sinister, but I’m actually glad that AI is now affecting everyone. I hope it allows us to think about the issue more collectively, as it also affects management levels now. Everyone will need skills to deal with AI. And not everyone is cut out for that.”
“The advantage is that this requires a lot of know-how about how processes work. And the people performing these jobs now have that know-how. AI learns from labelled data. ChatGPT, for example, has analysed massive amounts of text from the Internet to learn which words to use and how to write texts. But those analysed texts also included dirty talk. Well, it’s the Internet… So, people were needed to label the data and tell ChatGPT which words were appropriate and which were not.
It took thousands of images labelled by doctors before AI was able to detect cancers in them. AI always needs active knowledge input. And the world keeps evolving. AI systems will always need evaluation and new input provided by humans.”
And ChatGPT can’t write this article, because it wasn’t present for this conversation.
Heyman: “But it can help. Imagine that text recognition is already so advanced that AI is able to transcribe the conversation perfectly. That it eliminates all “uhs” and weird sentence structures, and transcribes the text in a clean way. Would you mind? It would be convenient, wouldn’t it? You could work so much faster. Isn’t that an advantage?”
But there is a disadvantage as well. If I’m asked to write a second article in the same time span because of efficiency gains, my job might become more stressful.
Heyman: “That’s a discussion about productivity and how far you can push it.”
Maybe we should make some agreements about that?
Heyman: “The legal framework is set out in CLA 39 on the introduction of new technology in the workplace. This should be subject to discussion, but unfortunately that doesn’t always happen in practice. Employers sometimes prefer to avoid the conversation because they don’t want to stir up a hornet’s nest. But they’re going to have to stir it up at some point, so you’d better get it over with. Then you can intervene before major costs are incurred. Employers themselves also benefit from those consultations. Radical top-down changes always carry risks, and they involve major investments, so you hope for a sufficient return on investment. To achieve that, you’d better consult with those involved and ask them how they view the situation. Where are the benefits for them? How does it improve their job? If they don’t see the benefits, the chances of failure are high.”
Can trade unions play a role in this respect?
Heyman: “Definitely. That is what we are trying to address with the Knowledge Centre Data & Society. We know which technologies are coming our way, so we need to imagine what they are going to look like in the workplace. All parties involved need to anticipate what can go wrong and then find solutions to those potential issues. This requires a lot of vision development, and trade unions are not doing enough of it. They are flying blind, which is unfortunate. They should be at the wheel, helping to give timely instructions so that innovation benefits everyone.”
ARTIFICIAL INTELLIGENCE IN HR POLICY
AI makes it possible to continuously collect and analyse data. This can make people in the workplace feel unsafe or even intimidated. After all, there is always the risk of errors piling up in the information chain. These errors can lead not only to inefficiency, but also to injustices to individual employees.
IBM’s CEO stated in 2019 that AI can predict with 95% accuracy which employees are about to quit their jobs. Yet AI can never determine a person’s exact intentions; it can only estimate a probability.
Much depends on who manages the monitoring, evaluation and surveillance. It is always a human who decides to use AI. The risks also differ depending on the purpose of the AI analysis. Is it used to measure performance? Then it is questionable whether automated data processing produces accurate results. And if it is used to control employees, the risks increase even further.
Today, the GDPR provides the most important protection against the misuse of AI. That legislation requires employers to justify the collection of personal data, upholding the principles of transparency, purpose limitation and proportionality. Where AI is deployed, however, it is important that employee representatives can make agreements at management level to limit the use of AI in workforce management.
Never Work Alone 2023 | Author: Jan Deceunynck | Image: Shutterstock