Facial Analysis AI in Job Interviews Will Probably Reinforce Inequality

By Ivan Manokha, Departmental Lecturer in International Political Economy, University of Oxford 8 Oct 2019

UK companies last month began using facial analysis technology in job interviews for the first time. Handing recruitment decisions over to algorithms will probably perpetuate the biases and discrimination that already exist in wider society, writes political economy expert Ivan Manokha.

Artificial intelligence and facial analysis software are becoming commonplace in job interviews. The technology, developed by US company HireVue, analyses the language and tone of a candidate’s voice, and records their facial expressions as they are videoed answering identical questions.

It was used in the UK for the first time in September but has been used around the world for several years. Some 700 companies, including Vodafone, Hilton, and Urban Outfitters, have tried it out.

There are certainly significant benefits to this approach. HireVue says it speeds up the hiring process by 90 per cent, thanks to the pace at which it processes information. But there are important risks we should be wary of when outsourcing job interviews to AI.

An illustration of a woman’s face being mapped. Image Credit: teguhjatipras/Pixabay.

The AI is built on algorithms that assess applicants against its database of about 25,000 pieces of facial and linguistic information. These are compiled from previous interviews of “successful hires” – those who have gone on to be good at the job.

The 350 linguistic elements include criteria such as a candidate’s tone of voice, their use of passive or active words, sentence length, and the speed at which they talk. The thousands of facial features analysed include brow furrowing, brow raising, how much the eyes widen or close, lip tightening, chin raising and smiling.
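In machine-learning terms, this amounts to fitting a classifier on feature vectors extracted from past interviews and scoring new candidates against it. The sketch below is a minimal illustration of that general approach, not HireVue’s actual model: the feature names, numbers and choice of classifier are all hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features extracted from past interviews:
# [speech rate, active-verb ratio, smile frequency, brow furrowing].
past_interviews = np.array([
    [3.1, 0.62, 0.40, 0.10],
    [2.4, 0.35, 0.15, 0.30],
    [3.0, 0.58, 0.35, 0.12],
    [2.2, 0.30, 0.10, 0.35],
])
# 1 = the interviewee went on to be a "successful hire", 0 = did not.
successful_hire = np.array([1, 0, 1, 0])

# Fit a simple classifier to the historical outcomes.
model = LogisticRegression().fit(past_interviews, successful_hire)

# Score a new candidate: how closely do their features resemble
# those of past successful hires?
new_candidate = np.array([[2.9, 0.55, 0.30, 0.15]])
print(model.predict_proba(new_candidate)[0, 1])

A real system would use thousands of features and a more complex model, but the logic is the same: candidates are ranked by their resemblance to whoever succeeded before.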

The fundamental issue with this, as critics of AI often point out, is that the technology is not born in a perfect society. It is created within our existing society, and is marked by its whole range of biases, prejudices, inequalities and discrimination. The data on which algorithms “learn” to judge candidates contains these existing sets of beliefs.

As UCLA professor Safiya Noble demonstrates in her book Algorithms of Oppression, a few simple Google searches show this in action. For example, when you search the term “professor style”, Google Images returns exclusively middle-aged white men. You get similar results for a “successful manager” search; by contrast, a search for “housekeeping” returns pictures of women.

This reflects how algorithms have “learnt” that professors and managers are mostly white men, while those who do housekeeping are women. By delivering these results, algorithms necessarily contribute to the consolidation, perpetuation, and potentially even amplification of existing beliefs and biases.
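This dynamic is easy to reproduce in miniature. The toy model below (my own illustration, drawn neither from Noble’s book nor from HireVue) trains a classifier on synthetic historical hiring decisions that rewarded a trait correlating with privileged background; at identical ability, the model then scores the privileged profile higher.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Ability is distributed identically across backgrounds.
ability = rng.normal(size=n)
privileged = rng.integers(0, 2, size=n)  # 1 = privileged background
# "Polish" (accent, ease, self-assurance) proxies for background.
polish = privileged + rng.normal(scale=0.3, size=n)

# Historical decisions rewarded polish far more than ability.
hired = (0.3 * ability + 1.0 * polish
         + rng.normal(scale=0.5, size=n)) > 0.8

model = LogisticRegression().fit(np.column_stack([ability, polish]), hired)

# Two candidates with equal ability, differing only in polish:
probe = np.array([[0.0, 1.0],   # privileged profile
                  [0.0, 0.0]])  # less privileged profile
print(model.predict_proba(probe)[:, 1])  # first probability is far higher

Nothing in the code mentions background explicitly; the bias rides in on a correlated feature, which is exactly how historical data smuggles discrimination into an apparently neutral model.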

For this very reason we should question the intelligence of AI. The solutions it provides are necessarily conservative, leaving little room for innovation and social progress.

‘Symbolic Capital’


As French sociologist Pierre Bourdieu emphasised in his work on the way inequalities are reproduced, we all have very different levels of economic and cultural capital. The environment in which we grow up, the quality of the teaching we received, the presence or absence of extra-curricular activities and a range of other factors have a decisive impact on our intellectual abilities and strengths. They also shape the way we perceive ourselves – our levels of self-confidence, the objectives we set for ourselves, and our chances in life.

Another famous sociologist, Erving Goffman, called this a “sense of one’s place”. It is this ingrained sense of how we should act that leads people with less cultural capital (generally from less privileged backgrounds) to keep to their “ordinary” place. It is also reflected in our body language and the way we speak.

So there are those who, from an early age, have a stronger confidence in their abilities and knowledge. And there are many others who have not been exposed to the same teachings and cultural practices, and may be more timid and reserved. They may even suffer from an inferiority complex.

All of this will come across in job interviews. Ease, confidence, self-assurance and linguistic skills become what Bourdieu called “symbolic capital”. Those who possess it will be more successful – whether or not those qualities are actually best, or bring something new to the job.

Of course, this has always been the case in society. But artificial intelligence will only reinforce it, particularly when the AI is fed data on the candidates who were successful in the past. This means companies are likely to hire the same types of people they have always hired.

The big risk here is that those people are all from the same set of backgrounds. Algorithms leave little room for subjective appreciation, for risk-taking, or for acting upon a feeling that a person should be given a chance.

In addition, this technology may lead to the rejection of talented and innovative people who simply do not fit the profile of those who smile at the right moment, or have the required tone of voice. And this may be bad for businesses in the long run as they risk missing out on talent that comes in unconventional forms.

More concerning is that this technology may also inadvertently exclude people from diverse backgrounds and give more chances to those who come from privileged ones. As a rule, they possess greater economic and social capital, which allows them to obtain the skills that become symbolic capital in an interview setting.

What we see here is another manifestation of the more general issues with AI. Technology that is developed using data from our existing society, with its various inequalities and biases, is likely to reproduce them in the solutions and decisions that it proposes.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
