
How the European Union’s AI Act Provides Insufficient Protection Against Police Discrimination

May 14, 2024


Lukas Arnold

Lukas Arnold (LL.M., MSc) is a graduate student at Columbia University, Department of Computer Science, and at University of Bern, Department of Public Law. His current research focuses on racial discrimination and technology.

 


 

On December 9, 2023, the Council of the European Union (EU) and the European Parliament, the EU’s legislative body, announced an agreement on the Artificial Intelligence Act (“the Act”),[1] which the Parliament subsequently approved on March 13, 2024.[2] It is the first comprehensive legislation on artificial intelligence in the world and is expected to have an impact on AI regulation far beyond the borders of the EU. One of the Act’s stated main objectives is to prevent discrimination: “Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.”[3] The preamble of the Act repeatedly emphasizes the principle of non-discrimination and acknowledges the risks of AI.[4] Naturally, a prohibition of discrimination should include adequate protection against discrimination by AI systems used by law enforcement. However, the Act’s provisions fail to account for this.

Existing Bias in Facial Recognition Technology

AI systems used by law enforcement agencies, such as police and border patrols, carry a particular risk of stark interference with individual rights. Take the example of real-time facial recognition technology (“FRT”). This AI might be run on footage from live CCTV systems, such as those in the public transport networks of major European metropolises; the Paris metro, for instance, has installed nearly 10,000 live cameras.[5] Misidentification in this footage may have devastating consequences: a person wrongfully identified as a criminal suspect may face arrest, significant time in custody and pretrial detention, and an increased risk of wrongful conviction. FRTs do not operate with perfect accuracy, and misidentifications will undoubtedly happen. In fact, the accuracy of FRT is known to be unequally distributed among demographic groups and biased against already marginalized populations. Several studies have shown that FRTs carry racial bias and that overcoming such bias would take significant effort.[6] One study found that false positive rates differed between countries of origin by factors of up to 100.[7] Another found that error rates for dark-skinned women were up to 34 percentage points higher than those for light-skinned men.[8] FRTs can thus compound existing police bias and institutional racial discrimination. Beyond misidentification, AI systems also carry the risk of being abused by a government to silence dissent and persecute opponents. In fact, this has already happened: the Moscow city government used metro CCTV footage to track down a protester.[9] FRTs could likewise be misused for religious, ethnic, or racial persecution.
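To make the scale of such disparities concrete, the following back-of-the-envelope calculation is a minimal sketch using hypothetical numbers: it assumes a watchlist-matching FRT screens 100,000 passengers from each of two demographic groups per day, with false positive rates differing by a factor of 100, in line with the magnitude reported in the NIST study cited above. The group labels, scan volume, and exact rates are illustrative assumptions, not figures drawn from the Act or the cited studies.

```python
# Illustrative sketch with hypothetical numbers: how a 100-fold gap in false
# positive rates plays out when live FRT screens large numbers of passengers
# against a watchlist. The rates and scan volume are assumptions for
# illustration, not figures taken from the cited studies.

DAILY_SCANS_PER_GROUP = 100_000   # hypothetical passengers scanned per group per day
FPR_GROUP_A = 0.0001              # assumed false positive rate for the favored group (0.01%)
FPR_GROUP_B = 0.01                # assumed rate for the disfavored group (1%, i.e., 100x higher)

false_alerts_a = DAILY_SCANS_PER_GROUP * FPR_GROUP_A
false_alerts_b = DAILY_SCANS_PER_GROUP * FPR_GROUP_B

print(f"Group A: ~{false_alerts_a:.0f} wrongful police alerts per day")
print(f"Group B: ~{false_alerts_b:.0f} wrongful police alerts per day")

# With identical behavior and identical scan volumes, members of group B are
# wrongly flagged about 100 times as often, purely because of model bias.
```

Under these assumptions, even modest absolute error rates translate into a heavy and unevenly distributed burden of wrongful police attention.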

Inadequate Regulation of Bias

The potential of AI systems to carry bias and to be misused by law enforcement poses immense and imminent risks to human rights. However, the Act does not do enough to address these risks. The Act classifies biometric identification systems used by law enforcement (whose real-time use is permitted only for specific purposes and with proper authorization) as “high-risk” systems.[10] High-risk systems are subject to higher regulatory standards. One requirement is that they perform with a sufficient level of accuracy.[11] As shown above, however, the “accuracy” of such systems is not distributed equally among demographic groups, due to bias in the system. So how does the Act address bias in AI systems? Granted, the Act requires data governance and management practices under which high-risk AI systems must be examined for possible biases, as well as risk management practices to mitigate likely risks to fundamental rights.[12] This mandate is arguably progress, but it is not sufficient. A high-risk AI system may be deployed even if such risks persist, as long as commensurate human oversight minimizes them.[13] In other words, discriminatory AI systems may be deployed as long as they are subject to human oversight. The Act itself does not require an AI system to be unbiased, nor does it require human oversight to eliminate discrimination; it only requires that oversight minimize it.

Challenges in Human Oversight

The Act generally requires that an identification produced by a remote biometric identification system, such as FRT, be independently verified by at least two humans before a final decision is taken.[14] This requirement for post hoc verification provides some protection against risk. However, there is an exception. The EU and its member states can exempt human verification “for the purposes of law enforcement, migration, border control or asylum” in cases where the oversight requirement is considered disproportionate by “Union or national law.” Since the disproportionality clause is not tied to any specific criteria, it effectively allows the EU and its member states to grant their law enforcement and migration agencies a general exemption from the requirement.[15] The exception thus applies precisely in those areas where people are particularly vulnerable to misidentification. Without a strict requirement that an AI system be unbiased, its outcomes are likely to be biased even under human oversight. Even when cases are presented for post hoc human verification, human identification is not perfectly accurate, and verifiers may exhibit their own biases.[16] Moreover, the human verification stage operates only on the preselection made by the biased AI algorithm; its output is statistically dependent on that preselection and is therefore likely to be biased as well. And even assuming that sufficiently diligent post hoc human verification could adequately reduce misidentification by deployed AI systems, the Act contains no requirement for such diligence. Combined with the absence of any requirement that an AI system be unbiased (i.e., the scarcity of regulatory countermeasures to AI bias), this can lead to highly undesirable, discriminatory outcomes.
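The point about statistical dependence can be illustrated with a second minimal sketch, again using hypothetical numbers: it assumes the same 100-fold disparity in false positive rates as above and posits that human reviewers uniformly reject 90 percent of the machine’s false matches. Both figures are assumptions for illustration only.

```python
# Minimal sketch (hypothetical rates): even if human reviewers screen out a
# uniform 90% of the machine's false matches, the final outcomes stay skewed,
# because reviewers only ever see the cases the biased algorithm preselected.

SCANS_PER_GROUP = 100_000                       # hypothetical daily scans per group
FALSE_POSITIVE_RATE = {"group_a": 0.0001,       # assumed 100x disparity between groups
                       "group_b": 0.01}
HUMAN_CATCH_RATE = 0.90                         # assumed share of false matches reviewers reject

for group, fpr in FALSE_POSITIVE_RATE.items():
    flagged = SCANS_PER_GROUP * fpr                   # biased machine preselection
    surviving = flagged * (1 - HUMAN_CATCH_RATE)      # false matches that pass human review
    print(f"{group}: {flagged:.0f} wrongful flags -> {surviving:.0f} survive review")

# The 100:1 ratio between the groups is unchanged by the review stage: human
# oversight lowers the absolute number of errors but not the relative bias.
```

However diligent the reviewers, the disparity baked into the preselection carries through to the final decisions, which is why oversight that merely minimizes risk cannot substitute for a requirement that the system itself be unbiased.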

Limited Transparency

To prevent bias and discrimination, transparency is essential, with respect both to the technical accuracy of the algorithm and to its deployment, particularly toward the people affected. The Act does not require sufficient transparency. The transparency requirements for high-risk AI systems such as FRT oblige providers to be transparent toward deployers, such as the police, but not explicitly toward those who interact with such a system, such as people (potentially wrongfully) targeted by the police.[17] In fact, while there is a general obligation to inform natural persons that they are interacting with an AI system, the Act makes an explicit exception for law enforcement “to detect, prevent, investigate and prosecute criminal offences.”[18] This exception also applies to generative and manipulative AI.[19] Without information about their interactions with FRT systems, it is difficult, and in certain circumstances impossible, for affected persons to seek a legal remedy against the discriminatory use of such systems by law enforcement. Once an investigation is closed or a risk has ceased, there is no reason not to inform a person about the interaction; without that information, an affected person cannot challenge the use of FRT. It is regrettable that the Act does not, at the very least, oblige law enforcement to notify people post hoc about their interactions with FRT systems.

Insufficient Protection Against Discrimination

The Act fails to impose requirements that limit the bias of AI systems used by law enforcement and to mandate sufficient transparency about their deployment. In fact, the Act exempts law enforcement from many of the relevant general requirements, even though law enforcement agencies are among the most likely to intrude on the fundamental rights of persons, including the right to liberty and security. Marginalized communities, such as people of color, are frequently exposed to bias and discriminatory practices by the police. Biometric identification systems carry a strong risk of compounding existing racial and other biases within law enforcement, with potentially devastating consequences for individuals. Clear and effective protections against discrimination by AI systems used by law enforcement are therefore warranted, yet the AI Act on its own does not provide them. Such protection could still be implemented through additional regulation by the European Union or by individual member states; it is regrettable, however, that the AI Act in its enacted form omits it.

 

[1] Artificial Intelligence Act, Eur. Parl. Doc. P9_TA(2024)0138 (2024); Press Release, Eur. Comm’n, Commission welcomes political agreement on Artificial Intelligence Act (Dec. 9, 2023), https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473.

[2] Press Release, Eur. Parl., Artificial Intelligence Act: MEPs adopt landmark law (Mar. 13, 2024), https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law.

[3] EU AI Act: First Regulation on Artificial Intelligence, Eur. Parl. (updated Dec. 19, 2023), https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.

[4] See, e.g., Eur. Parl., supra note 1, at recitals (7), (21), (27), (28), (29) & (30).

[5] Assurer la protection de nos voyageurs, RATP (Sept. 14, 2018), https://www.ratp.fr/groupe-ratp/pour-nos-voyageurs/assurer-la-protection-de-nos-voyageurs.

[6] See, e.g., Abdallah Hussein Sham et al., Ethical AI in facial expression analysis: Racial bias, 17 Signal, Image and Video Processing 399, 403-05 (2022).

[7] Patrick Grother, Mei Ngan & Kayee Hanaoka, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, at 2, Nat’l Institute of Standards and Tech. (Dec. 2019).

[8] Joy Buolamwini & Timnit Gebru, Gender shades: Intersectional accuracy disparities in commercial gender classification, 81 Proc. of Machine Learning Research 2, 8 (2018).

[9] Glukhin v. Russia, No. 11519/20, Eur. Ct. H.R. (2023).

[10] Eur. Parl., supra note 1, at Art. 5, para. 2, 3; Annex III, para. 1.

[11] Id. at Art. 15, para. 1.

[12] Id. at Art. 10, para. 2(f); Art. 9, para. 2(a).

[13] Id. at Art. 14, para. 1, 2.

[14] Id. at Art. 14, para. 5.

[15] Id.

[16] See generally Leslie R. Knuycky, Heather M. Kleider, & Sarah E. Cavrak, Line-up misidentifications: When being ‘prototypically black’ is perceived as criminal, 28 Applied Cognitive Psych. 39 (2014).

[17] Eur. Parl., supra note 1, at Art. 13, para. 1.

[18] Id. at Art. 50, para. 1.

[19] Id. at Art. 50, para. 4.