Will Artificial Intelligence Impact Our Human Rights?

By Hannah Shewan Stevens, Interim Editor 17 Sep 2021
Discrimination, Privacy, Technology
Credit: Maxim Hopman / Unsplash


The United Nations Human Rights chief has called for urgent action to assess the significant risk posed to human rights by the sale and use of artificial intelligence (AI) technology. 

Michelle Bachelet’s call follows the publication of a report by the Office of the United Nations High Commissioner for Human Rights (OHCHR) analysing how AI affects people’s right to privacy and other rights, such as the rights to education, freedom of peaceful assembly and association, and freedom of expression. 

She expressed concerns about the “unprecedented level of surveillance across the globe by state and private actors”, insisting that its use in this regard was “incompatible” with human rights. 

“The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be,” she said.

In the UK, ranked third among the world’s most surveilled countries, the heightened use of AI could be considered a breach of several rights enshrined in the Human Rights Act, including the rights to a private life, education, freedom from discrimination and freedom of expression. 

Speaking at a Council of Europe hearing on the implications of July’s controversy over Pegasus software – which revealed apparent widespread use of surveillance spy software affecting thousands of people in 45 countries across four continents – Bachelet said that the Pegasus revelations were not a surprise to many who already feared misuse of surveillance technology. 

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times,” said Bachelet. “But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”


The report also assessed automated decision-making and other machine learning technologies. Tim Engelhardt, human rights officer at OHCHR, rule of law and democracy section, described the situation as “dire”, adding that it has “not improved over the years but has become much worse”. 

The European Union has agreed to strengthen rules on controlling the use of AI, but Engelhardt warned: “We don’t think we will have a solution in the coming year, but the first steps need to be taken now or many people in the world will pay a high price.” 

Biometric technologies – which measure physiological characteristics such as fingerprints, facial features and iris patterns – are becoming a go-to solution for international organisations and technology companies alike. The report identifies them as an area “where human rights guidance is urgently needed”.

These technologies, which include facial recognition, can identify people in real time and from a distance, potentially allowing the unregulated tracking of any individual. 

The report calls for a moratorium on the use of biometric technologies in public spaces until authorities can show that there are no significant issues with accuracy or discrimination and that AI systems comply with privacy laws.

The OHCHR’s report emphasises the need for companies to be much more transparent in how they are developing and using AI. It said: “The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors, are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society.”

Several studies have indicated that facial recognition technologies powered by AI have the potential to be racially biased. A 2019 study in the UK revealed that 81% of suspects flagged by facial recognition technology used by London’s Metropolitan Police were innocent. 

“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight and dealing with the almost inevitable human rights consequences after the fact,” Bachelet added. “The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”

About The Author

Hannah Shewan Stevens, Interim Editor

Hannah Shewan Stevens is an NCTJ-accredited freelance journalist, editor, speaker and press officer based in Birmingham. Her areas of interest are broad-ranging but the topics she is most passionate about are disability, social justice, sex and relationships and human rights. Hannah believes in using her own voice and elevating others to create meaningful change in the world. She is also a sex columnist for The Unwritten and has recently completed her first accreditation in delivering Relationships and Sex Education.