In the last few years, AI has progressively made its way into the recruitment process. Its use has only grown since the start of the coronavirus crisis, which led to a widespread shift to remote work.
To facilitate remote recruitment, companies are now more inclined than ever to use AI software to assist them. The goal is to make the process faster, better, and fairer thanks to algorithms.
How Does The Algorithmic Recruiting Work?
There are different forms of algorithmic recruiting: recruiting is a long and complex process, and AIs can help at many stages. The first category of AI tools automates the search for potential applicants. These tools draw on social networks and websites such as LinkedIn and automatically connect job seekers with offers that match their skills.
Other AIs search the Internet for data that identifies ideal applicants for a given job offer. Finally, a last category of AIs uses data science to optimise the wording of each job description so that it attracts as many applicants as possible.
Then, when it is time to select applicants for a job interview, AI can help managers and human resources make their choice. Some AIs can read applicants’ résumés and select the most promising ones. Others use a chatbot that automatically contacts the selected profiles for a first, simple conversation, checks that they possess the skills the job requires, and then schedules a first interview with a human recruiter.
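The résumé-screening step described above can be sketched, in a deliberately simplified form, as a keyword-matching score. Everything below is invented for illustration (the skill list, the résumé texts, the scoring rule); production screening tools use far richer models than word overlap:

```python
import re

# Hypothetical skills required by a job offer.
JOB_SKILLS = {"python", "sql", "communication"}

def score_resume(resume_text: str) -> float:
    """Score a resume by the fraction of required skills it mentions."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(JOB_SKILLS & words) / len(JOB_SKILLS)

# Invented example résumés.
resumes = {
    "alice": "Experienced in Python and SQL, strong communication skills",
    "bob": "Background in marketing and sales",
}

# Rank applicants by score, highest first.
ranking = sorted(resumes, key=lambda name: score_resume(resumes[name]),
                 reverse=True)
print(ranking)  # alice mentions all three skills, so she ranks first
```

Even this toy version shows why such tools are attractive: the ranking is fast, repeatable, and applies the same rule to every applicant.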
AI: How To Make The Recruitment Process Fairer?
Many companies claim that their AI allows them to evaluate applicants’ profiles more fairly: AIs take a more objective approach than a human recruiter would. Beyond better efficiency and a process well suited to remote work, the companies’ promise is indeed to make recruitment fairer with the help of AIs and to fight discrimination in employment. AIs can, after all, process data that are not accessible to the human mind. They never take breaks, never get tired, are always rational, and are presumably without bias.
However, if AIs are made by humans, can those humans introduce their own biases into the AI’s DNA? Researchers have pointed out that algorithms learn from human choices, and that their aim is to reproduce their companies’ past decisions. Once the algorithm is built, it stays impartial, but its making is not.
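A toy example makes this concrete: a model fitted to a company’s past hiring decisions simply reproduces the rates found in those decisions, biases included. The data below are entirely invented, and real systems learn from many more features than a single attribute:

```python
from collections import defaultdict

# Hypothetical historical decisions: (applicant's school, was hired?).
# Past recruiters favoured school_a, whether or not that was justified.
past_decisions = [
    ("school_a", True), ("school_a", True), ("school_a", False),
    ("school_b", False), ("school_b", False), ("school_b", True),
]

def learn_hire_rates(decisions):
    """Fit a trivial 'model': the hiring rate observed per school."""
    counts = defaultdict(lambda: [0, 0])  # school -> [hired, total]
    for school, hired in decisions:
        counts[school][0] += int(hired)
        counts[school][1] += 1
    return {s: hired / total for s, (hired, total) in counts.items()}

rates = learn_hire_rates(past_decisions)
# The learned rates favour school_a purely because past recruiters did:
# the data's bias becomes the model's bias.
print(rates)
```

The algorithm itself applies its rule impartially; the unfairness was baked in by the data it learned from, which is exactly the concern researchers raise.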
Many algorithms used for recruiting base their decisions on a company’s past recruiting data, and therefore perpetuate its past biases. This limitation does not erase their usefulness: when well developed and well used, AIs can make the recruiting process easier and more efficient. However, AI experts currently advise setting rules to structure this process and making applicants aware of the AI and of how their data is used.