Artificial intelligence (AI) has been finding its way into numerous use cases in our personal and professional lives, and those with the demanding task of managing people may find AI an immensely helpful partner. People management is a very wide discipline, however, so one needs to be extremely specific about the problem being addressed.
For those in HR functions, generative AI (Gen AI) acts as a productivity booster by equipping human capital management (HCM) platforms with:
1. Faster and better applicant tracking systems (ATS), capable of summarising enormous numbers of CVs and profiles and freeing human specialists from non-value-added tasks such as sorting and selecting documents (see the sketch after this list).
2. Job descriptions that can be drafted easily, without the support of a functional expert.
3. Chatbots that clear level 1 employee tickets; employee support services are among the most time-consuming tasks HR staff face.
4. Training content that can be written quickly, allowing companies to provide more and better material for their people.
5. Accelerated analytics, one of the major benefits. It is not exclusive to people management data, as it applies to every domain, but using the conversational capacities of Gen AI to turn raw data into useful information from natural-language instructions is a game changer.
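To make the first item concrete, here is a minimal sketch of how an ATS-style CV summary might be produced with a large language model. It assumes access to an OpenAI-compatible API via the openai Python package; the model name, prompt wording and file name are illustrative assumptions, not a prescription for any particular HCM platform.

```python
# Minimal sketch: summarising a CV with an LLM so a recruiter can scan it quickly.
# Assumes an OpenAI-compatible API (the openai Python package) and a valid
# OPENAI_API_KEY in the environment; model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise_cv(cv_text: str) -> str:
    """Return a short, structured summary of a single CV."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an assistant for an applicant tracking system. "
                    "Summarise the CV in five bullet points: role sought, "
                    "years of experience, key skills, education, notable achievements."
                ),
            },
            {"role": "user", "content": cv_text},
        ],
        temperature=0.2,  # keep summaries factual and repeatable
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical input file; in a real ATS the text would come from parsed uploads.
    with open("cv_example.txt", encoding="utf-8") as f:
        print(summarise_cv(f.read()))
```

The same pattern, with a different prompt, would serve the other items on the list, such as drafting a job description or answering a level 1 HR question.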
No doubt, there is a world of opportunities; nevertheless, a bigger world of challenges awaits organisations. All of this must be fuelled by a different kind of talent and skills. Are there enough human resources in the market, duly trained and experienced? And even if a company can acquire them, what about retaining them?
Cybersecurity is one of the big ones; prompt injection, hallucinations and intellectual property protection are a whole new set of risks and concerns companies must not overlook. Addressing them takes knowledge and competencies and, with those, investment.
Regulations, especially for organisations located in the EU, will raise the bar. Obligations and penalties will be in place, so the concept of responsible AI must be introduced into everyday routines. Responsible AI means ensuring that a system is accountable, transparent and auditable, that bias is prevented, and that fairness and ethics are built into its core.
Personally, I have a huge concern about what will happen to innovation, creativity and the human capability to rationalise and create new knowledge. One must remember that today's most popular Gen AI systems live off previously created knowledge, from a time when no Gen AI existed and only the human brain did. What would happen to future Gen AI models if the production of new knowledge slows down, even a little, or more?