To Protect and Serve? Framing the impact of police technology on human rights.


Technology creates many legal challenges and opportunities. Reflections on these have been brought together in a recent book. In this blogpost, Thomas Marquenie discusses how novel law enforcement technologies like predictive analytics and algorithmic profiling pose new challenges to human rights and police integrity.

Law enforcement has long relied on technology in the fight against crime and the protection of public safety. In contemporary police practice, powerful analytical tools built on complex algorithms and machine learning methods are increasingly used to allocate resources more efficiently, draw inferences from large volumes of data, and assist in various decision-making processes. Yet the efficacy of these applications remains a point of contention, and their use raises pressing ethical concerns for the public interest.

Setting the scene for law enforcement AI

Computer systems with advanced analytical capabilities are shaping modern society. Frequently referred to as Artificial Intelligence (AI), these increasingly autonomous systems mimic aspects of human cognition and independently draw inferences from data, and they are being deployed to support critical decision-making and to automate various tasks. In the context of law enforcement and criminal justice, such tools are used to predict future crimes and trends, to assess the risks associated with certain individuals or locations, and to support the creation of evidence by unearthing hidden patterns in data.

Pressing legal and ethical challenges

These advancements are often presented as capable of streamlining police operations and expanding law enforcement’s capacity to manage the growing amounts of data produced by a digital world. Nevertheless, their effectiveness remains a topic of debate, while their use poses various legal and ethical challenges for civil liberties and algorithmic fairness.

As a comprehensive assessment of these issues lies beyond the scope of this blogpost, two of the main problems are highlighted here. First, advanced analytical systems are at risk of producing biased and discriminatory results. Because they rely on machine learning techniques that use various types of data to train the system and teach it to identify patterns in crime, society and police practice, these applications are prone to replicating and exacerbating human biases and preconceptions. For example, using data from historically over-policed areas with high concentrations of disenfranchised and vulnerable minority groups could create dangerous feedback loops through which harmful and disproportionately invasive practices are perpetuated. Similarly, relying on facial recognition systems prone to misidentifying members of certain minority groups could lead to wrongful arrests.
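To make this feedback loop concrete, consider the following simplified simulation (all figures are hypothetical and purely illustrative). Two districts have an identical underlying crime level, but one starts with more recorded incidents because it was historically patrolled more heavily. When patrols are allocated in proportion to recorded crime, and recorded crime in turn depends on where patrols are sent, the historical imbalance reproduces itself and appears to be validated by the system’s own output:

```python
# A minimal sketch of the feedback loop described above. All numbers are
# hypothetical: both districts have the same true crime level, but district A
# starts with more recorded incidents due to historical over-policing.

TOTAL_PATROLS = 100
INCIDENTS_RECORDED_PER_PATROL = 2.0  # assumed constant detection rate

# Only the records differ between the districts, not the underlying crime.
recorded = {"A": 120.0, "B": 60.0}

for year in range(1, 6):
    total_recorded = sum(recorded.values())
    # Patrols are assigned in proportion to last year's recorded crime.
    patrols = {
        district: TOTAL_PATROLS * count / total_recorded
        for district, count in recorded.items()
    }
    # What gets recorded next year depends on where officers were sent,
    # not on any difference in actual offending.
    recorded = {
        district: patrols[district] * INCIDENTS_RECORDED_PER_PATROL
        for district in patrols
    }
    print(
        f"year {year}: district A gets {patrols['A']:.0f} patrols, "
        f"district B gets {patrols['B']:.0f}"
    )
```

Even in this toy model, the initial 2:1 disparity is locked in indefinitely: the data the system generates never reveals that both districts were equally crime-prone to begin with.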

Second, these systems are often plagued by a lack of transparency and accountability. As a result of technical complexity, deliberate secrecy and/or intellectual property interests, the exact workings of these programmes are rarely known to the public and sometimes not even to the police analysts using them. For this reason, they are often compared to “black boxes”: mysterious processes that convert inputs into recommendations, analyses and decisions without sufficient clarity on how they arrived at their conclusions. This presents serious issues in the context of policing, as law enforcement personnel might place undue trust in flawed machines that they do not fully understand and cannot hold accountable.
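The following sketch, a hypothetical and deliberately simplified stand-in for a proprietary vendor system, illustrates what this opacity looks like in practice: the analyst receives a single risk score, while the features and weights that produced it remain hidden, leaving nothing to review or contest.

```python
# A minimal sketch of the "black box" problem. The model, its features and
# its weights are all hypothetical; the point is what the analyst can see.

class ProprietaryRiskModel:
    """Stand-in for a vendor system whose internals are not disclosed."""

    def __init__(self) -> None:
        # Hypothetical internal weights, hidden from the analyst (trade secret).
        self._weights = {"prior_arrests": 0.6, "neighbourhood_score": 0.4}

    def risk_score(self, subject: dict) -> float:
        # Only the final number ever leaves the box; no explanation is exposed.
        return sum(
            weight * subject.get(feature, 0.0)
            for feature, weight in self._weights.items()
        )

model = ProprietaryRiskModel()
score = model.risk_score({"prior_arrests": 2.0, "neighbourhood_score": 8.0})
print(f"risk score: {score:.1f}")  # prints 4.4 -- but the analyst cannot tell why
```

In a real deployment the hidden logic would be far more complex, which only widens the gap between what the system recommends and what its users can explain or justify.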

As a result, law enforcement AI risks producing unfair and inaccurate results that are then used to make important decisions about police operations and ongoing investigations. Without adequate oversight and mitigation measures in place, this could exacerbate harmful and discriminatory practices that are difficult to detect or rectify due to the lack of transparency.

Risks to the rights of privacy, fair trial and equal treatment

These factors could thus have a significant negative impact on various human rights. While numerous fundamental freedoms could be affected by these technologies, three deserve particular mention. First are the rights to privacy and data protection, which shield one’s private life and personal data from excessive and undue interference. Since interferences with these rights must be proportionate and subject to stringent safeguards, they risk being disproportionately affected by large-scale data collection and analytics in law enforcement. The use of criminal profiling, big data and tools like facial recognition technology could blur the line between policing and (mass) surveillance without adequate justification.

Second is the right to a fair trial or due process. The criminal limb of this right entitles all individuals to fair and equitable court proceedings. Under the presumption of innocence, everyone is treated as innocent until proven otherwise. This fundamental principle, however, could be undermined by predictive policing tools intended to identify likely suspects and high-risk individuals before any crime has been committed. Similarly, the abovementioned problem of opacity could restrict a defendant’s ability to review the reliability of incriminating evidence and the process by which it was obtained.

Third is the right to equal treatment, which requires all individuals to be treated in a non-discriminatory manner before the law. As described above, police agencies that use potentially unfair and biased tools to guide their operations risk subjecting vulnerable groups and individuals to unequal and prejudiced treatment.

What now?

The growing use of advanced analytical tools in law enforcement appears inevitable. But while these technologies could provide significant benefits for police personnel, they clearly also introduce novel legal issues and significant risks to human rights. As of yet, little case law and few concrete legal instruments exist to guide the fair and reliable deployment of these tools. There is thus a need for new guidelines or legislation like the recently published proposal for an AI Regulation. Until these provide further clarity on the limits and permissible uses of such technologies, law enforcement should closely adhere to the relevant data protection legislation and proactively adopt mitigation measures to promote algorithmic fairness and ensure human oversight of system outcomes. Moreover, police agencies should exercise great caution when using novel technologies in their investigations and operations.

Thomas Marquenie (@tmarquenie) is a researcher at CiTiP (KU Leuven). He is working on the European Commission’s VALCRI Project, which seeks to create a Visual Analytics-based sense-making capability for criminal intelligence analysis by developing and integrating a number of technologies into a coherent working environment.


Thomas MARQUENIE, "To Protect and Serve? Framing the impact of police technology on human rights.", Leuven Blog for Public Law, 12 November 2021, https://www.leuvenpubliclaw.com/to-protect-and-serve-framing-the-impact-of-police-technology-on-human-rights (accessed 27 November 2021)

