AI is the simulation of human intelligence by machines. It requires a foundation of specialised hardware and software for writing and training machine learning algorithms. AI programming focuses on cognitive skills that include learning, reasoning, self-correction and creativity.
Kate Jones, director of the Diplomatic Studies Programme at Oxford University, is an expert consultant on human rights law. Speaking about resetting the relationship between AI and human rights, Jones stated: “Human rights are central to what it means to be human. They were drafted and agreed, with worldwide popular support, to define freedoms and entitlements that would allow every human being to live a life of liberty and dignity.”
What does AI mean for workers’ rights?
Workers’ rights have also been called into question by algorithms that calculate pay. Examples include app-based services that can encourage workers to undertake longer shifts. The concern for gig workers is that, in some cases, drivers or riders have worked longer hours without necessarily being paid for that extra time. The most recent example involves Lyft drivers, who claim that destination filters have been changed and that they have been penalised for not completing those jobs.
More broadly, AI threatens to replace human workers in a variety of fields, endangering people’s right to work.
Could AI help tackle online sexual exploitation?
Technology-facilitated child sexual abuse and exploitation is not only a child protection matter but also raises questions of corporate responsibility and public policy.
The charity End Violence Against Children and Technological University Dublin have been developing ‘N-Light’, an AI tool to tackle online child sexual exploitation and abuse. The tool aims to advance global understanding of trends in perpetrator behaviour and to expose the strategies and tactics used to gain access to children and coerce them into sexually exploitative acts.
Dr Susan McKeever and Dr Christina Thorpe stated: “There is great potential to identify gaps in policy, emerging or evolving threats, technologies being misused for nefarious purposes, etc, which could better equip front-line responders, such as hotlines, helplines, and children’s rights agencies in their work.”
Identifying forced labour at sea
Fishing on the high seas is not only labour-intensive but also takes place in remote waters, making it an attractive environment for human traffickers to exploit forced labour. Under, for example, the European Convention on Human Rights and the UK Human Rights Act, people have the right to freedom from slavery and forced labour.
According to the International Labour Organisation (ILO), there are several ways in which AI could help non-governmental organisations (NGOs) and law enforcement agencies identify people who have been trafficked or forced to work on fishing boats.
Combining Automatic Identification System (AIS) data and existing satellite imagery with AI could help port inspectors identify victims of modern slavery.
The Automatic Identification System was designed to avoid collisions on the high seas, and Vessel Monitoring Systems were adapted for fishing vessels to support fisheries management at the national level. Meanwhile, existing satellite imaging helps identify craft that have turned off their tracking systems – or “gone dark”. However, the ILO acknowledges that the AI would need to incorporate standards set by the ILO’s Work in Fishing Convention No 188, as well as the ILO’s indicators of forced labour.
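The core idea behind spotting vessels that have “gone dark” is simple: look for unusually long silences in a ship’s transmission record. The sketch below is a minimal illustration of that gap-detection step, using hypothetical vessel IDs and timestamps (real AIS feeds also carry position, speed and heading, and a production system would combine flagged gaps with satellite imagery, as the ILO suggests):

```python
from datetime import datetime, timedelta

# Hypothetical AIS ping log: vessel ID -> transmission timestamps.
# Both vessels and their schedules are invented for illustration.
PINGS = {
    "VESSEL-A": [datetime(2023, 5, 1, h) for h in range(0, 24, 2)],
    "VESSEL-B": [datetime(2023, 5, 1, 0), datetime(2023, 5, 1, 2),
                 datetime(2023, 5, 1, 22)],  # 20-hour silence mid-voyage
}

def gone_dark(pings, max_gap=timedelta(hours=6)):
    """Return vessel IDs with at least one transmission gap over max_gap."""
    flagged = []
    for vessel, times in pings.items():
        ts = sorted(times)
        gaps = (later - earlier for earlier, later in zip(ts, ts[1:]))
        if any(gap > max_gap for gap in gaps):
            flagged.append(vessel)
    return flagged

print(gone_dark(PINGS))  # -> ['VESSEL-B']
```

The threshold here is arbitrary; in practice it would depend on expected reporting intervals for the vessel class and region, and flagged vessels would be candidates for inspection rather than conclusive evidence of wrongdoing.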
Ongoing concerns about facial recognition
You do not have to travel far in the UK before your face is picked up through facial recognition software. This type of system is implemented in public spaces such as train stations. However, the growing use of facial recognition combined with machine learning raises other concerns. Facial recognition software is capable of tracking individuals, as well as the movements of groups or crowds. This technological development enables an unprecedented form of mass surveillance, potentially putting at risk people’s right to freedom of assembly and association and their right to a private and family life.
Research also indicates that facial recognition can be discriminatory when it comes to race and gender. According to a study by the Massachusetts Institute of Technology (MIT) and Microsoft, facial recognition algorithms are less likely to accurately identify women and people of colour than men and white people – meaning that women of colour, in particular, face higher rates of mistaken identity, unfounded suspicion and false accusations.
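Disparities of this kind are typically surfaced by measuring identification accuracy separately for each demographic group and comparing the results. The sketch below shows that auditing step on invented figures; the numbers are purely illustrative and do not reproduce the MIT/Microsoft study’s actual findings:

```python
from collections import defaultdict

# Illustrative-only match outcomes: (demographic group, correctly identified?).
RESULTS = (
    [("white men", True)] * 98 + [("white men", False)] * 2 +
    [("women of colour", True)] * 65 + [("women of colour", False)] * 35
)

def accuracy_by_group(results):
    """Per-group identification accuracy and the worst-to-best gap."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        hits[group] += correct
    accuracy = {group: hits[group] / totals[group] for group in totals}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

accuracy, gap = accuracy_by_group(RESULTS)
print(accuracy)       # per-group accuracy rates
print(f"{gap:.2f}")   # the disparity an auditor would flag
```

A large gap between groups is exactly the signal that prompts concerns about mistaken identity falling disproportionately on some populations.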
Regulation is needed
In order to safeguard human rights and for AI to contribute to the cause of social justice worldwide, regulation is needed.
Kate Jones said: “AI, its systems and its processes have the potential to alter the human experience fundamentally. But many sets of AI governance principles produced by companies, governments, civil society and international organisations do not mention human rights at all. This is an error that requires urgent correction.”
The Council of Europe has also echoed this concern, previously stating: “Ensuring that human rights are strengthened and not undermined by artificial intelligence (AI) is one of the key factors that will define the world we live in. AI-driven technology is entering more aspects of every individual’s life, from smart home appliances to social media applications, and it is increasingly being utilised by public authorities to evaluate people’s personality or skills, allocate resources, and otherwise make decisions that can have real and serious consequences for the human rights of individuals.”
From spyware and its implications for the right to privacy to self-driving vehicles and their implications for the right to life, systems that deploy AI raise many difficult questions when it comes to safeguarding our human rights. The European Parliament will vote on the AI Act in the coming weeks.