Artificial intelligence is rapidly being deployed in legal systems all over the world. While some software offers to replace the lawyer for relatively simple matters, other tools aim to help the lawyer conduct research faster than any human ever could. Globally, AI has already proven to be a valuable ally in the administration of justice. It is therefore no surprise that voices are calling to replace, or at least support, human judges with artificial intelligence. Is this the future of humanity or the start of the apocalypse?
Arguments in favour of replacement by AI are its speed, objectivity and pattern-detection capabilities. Opponents, on the other hand, draw attention to the difficulties AI has with identification and to the dangers of bias, as demonstrated by several software applications used for profiling and crime-predictive purposes. If AI is to serve as a worthy replacement for human judges, it must at least address these concerns. All the more so if it is to meet the requirements set out by the European Convention on Human Rights (ECHR). In Article 6, the right to a fair trial, the ECHR sets out several minimum standards that a court must meet in order to guarantee that right. Whether courts powered entirely by AI meet these standards depends on a flexible and modern interpretation of the right by the European Court of Human Rights (ECtHR).
Generally, the right to a fair trial is understood, for the civil as well as the criminal limb, as encompassing two types of requirements: institutional and procedural. It is the latter that poses the biggest challenge for artificial intelligence. Granted, it is generally accepted that, under certain circumstances, procedural violations can be rectified by that same court (Helle v. Finland) or by a higher court (Schuler-Zgraggen v. Switzerland), provided the higher court is competent to judge the merits of the case. Consequently, procedural shortcomings of AI could be found acceptable if they can be remedied at a later stage. However, for the sake of legitimacy, we should aim to have AI meet as many of these conditions as possible, as hard as that may be. In my research, I identified two main challenges for AI in judicial decision-making.
Two challenges for AI
First of all, the right to a fair trial is interpreted as meaning that a court should duly consider and thoroughly examine all observations brought before it. As we experience daily, however, computer software is only capable of processing the inputs for which it was trained. Consequently, AI will inherently be unable to ‘duly consider and thoroughly examine all observations’, simply because it is incapable of understanding ‘all observations’. The question then becomes whether a requirement of standardised inputs, or the services of ‘translators’, would suffice to fulfil this requirement.
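What a requirement of standardised inputs might look like in practice can be sketched in a few lines of code. The field names and rules below are purely hypothetical, not drawn from any real system: a submission is checked against a fixed schema before the AI is allowed to process it.

```python
# Hypothetical sketch of a standardised-input check. The required fields are
# illustrative placeholders, not a real filing standard.
REQUIRED_FIELDS = {"claim_type": str, "facts": str, "relief_sought": str}

def validate_submission(submission: dict) -> list:
    """Return a list of problems; an empty list means the input is standard-compliant."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in submission:
            problems.append(f"missing field: {field}")
        elif not isinstance(submission[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

print(validate_submission({"claim_type": "contract", "facts": "..."}))
# → ['missing field: relief_sought']
```

A gatekeeping step of this kind would guarantee that the AI only ever receives observations it can actually process, at the cost of forcing parties (or their ‘translators’) into a fixed format.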
Secondly, courts are obligated to motivate their decisions. In particular, they must address at least the deciding arguments of a case, especially pleas regarding Convention rights. Once again, this seemingly simple task proves terribly difficult for artificial intelligence software. While AI is incredibly efficient at making decisions, the complete opposite is true when it comes to explaining why it has made a certain decision. This is all the more challenging because the AI not only has to provide an explanatory output, but also has to do so in a format comprehensible to humans. Some experts in the field propose that the AI provide a list of the most influential inputs in its decision-making process. Others propose the technique of counterfactual questioning to increase understanding of the AI’s decision: by changing or removing certain inputs, one can determine how influential they were in reaching the final decision.
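Counterfactual questioning can be illustrated with a minimal sketch. The decision function and factor names below are purely hypothetical stand-ins for an opaque model; the point is only the probing technique: remove each input in turn and record whether the outcome flips.

```python
def decide(facts: dict) -> str:
    """Stand-in for an opaque AI decision (here: a trivial, made-up rule)."""
    score = 2 * facts.get("prior_breach", 0) + facts.get("damages_proven", 0)
    return "liable" if score >= 2 else "not liable"

def counterfactual_influence(facts: dict) -> dict:
    """For each input, remove it counterfactually and record whether the outcome changes."""
    baseline = decide(facts)
    influence = {}
    for key in facts:
        altered = dict(facts)
        altered[key] = 0  # counterfactual: this factor is absent
        influence[key] = decide(altered) != baseline
    return influence

facts = {"prior_breach": 1, "damages_proven": 1}
print(decide(facts))                    # → liable
print(counterfactual_influence(facts))  # → {'prior_breach': True, 'damages_proven': False}
```

Here the probe reveals that ‘prior_breach’ was decisive (removing it flips the outcome) while ‘damages_proven’ was not, which is exactly the kind of human-readable explanation the motivation requirement calls for.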
Both of these issues relate to a highly debated aspect of AI, namely transparency. Since AI is usually self-learning, humans lose control once it is set up and are unable to figure out exactly how it came to a decision. There are methods to increase transparency, but those also leave the AI vulnerable to exploitation. However, there is another way to maximise legitimacy without exposing the software. AI can only operate within the limits set by its creators, and it is precisely through those limits that humans can steer the decision-making process.
Decisions on the decision-making process
To facilitate the introduction of artificial intelligence software that is beneficial to all stakeholders of the European judicial systems, I believe we must go beyond Member State action and towards EU-wide consensus. Standards must be established that any AI software must meet before it can be deployed in the field. More specifically, consensus has to be found on the rules governing the decision-making process, as it is the integrity of that process that lies at the core of legitimacy. Admittedly, the principles proposed here go further than guaranteeing the minimum standards set forth in Article 6 ECHR. However, I believe Union-wide consensus on these principles would be immensely beneficial for the development of our judicial systems, as we can only avoid further (digital) fragmentation of the Union’s judiciary through harmonisation.
In my opinion, there are a few principles that should be adhered to. Naturally, the decision should stay within the legal limits. It should be supported by at least as many legal principles as any other possible decision. Moreover, the consequences of every possible decision must be taken into account, based insofar as possible on scientific and statistical data. Through an overall assessment, the consequences of each decision should be evaluated to determine the best overall outcome. In this regard, attention should be given to both systemic consequences (e.g. relating to the power of precedent, the influence of higher courts on lower courts, judicial certainty, etc.) and case-specific consequences.
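The overall assessment described above could, in its simplest form, be a weighted scoring of each candidate decision’s consequences. The candidate decisions, consequence scores and weights below are all hypothetical placeholders; a real system would derive them from the scientific and statistical data the text mentions.

```python
# Illustrative sketch: weighing systemic and case-specific consequences of
# each candidate decision. All numbers are made-up placeholders.
candidates = {
    "uphold_claim":  {"systemic": 0.7, "case_specific": 0.4},
    "dismiss_claim": {"systemic": 0.5, "case_specific": 0.8},
}
weights = {"systemic": 0.6, "case_specific": 0.4}  # assumed policy weighting

def overall_score(consequences: dict) -> float:
    """Weighted sum over consequence categories."""
    return sum(weights[category] * value for category, value in consequences.items())

best = max(candidates, key=lambda decision: overall_score(candidates[decision]))
print(best)  # → dismiss_claim (0.62 vs 0.58)
```

Crucially, the weights themselves are a policy choice made by humans, which is one concrete way the ‘limits set by its creators’ can steer the decision-making process.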
Finally, to reduce biases and simulate the interaction between different perspectives (cf. here at 216), multiple datasets could be used to train different ‘AI judges’. That way, the same issue is evaluated from hundreds, if not thousands, of different perspectives. Combined, they might find the solution that most closely aligns with the reality of the case.
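Such a panel of differently trained ‘AI judges’ amounts, technically, to an ensemble with majority voting. In the sketch below, differing training data is simulated by giving each judge a slightly different decision threshold; every name and number is a hypothetical illustration.

```python
import random

# Sketch of an ensemble of 'AI judges'. Each judge is 'trained' differently
# (here: seeded with a slightly different threshold); the panel's outcome is
# the majority vote. All parameters are illustrative.
def make_judge(seed: int):
    rng = random.Random(seed)
    threshold = 0.5 + rng.uniform(-0.1, 0.1)  # each judge differs slightly
    return lambda evidence_strength: "grant" if evidence_strength >= threshold else "deny"

judges = [make_judge(seed) for seed in range(101)]   # an odd-sized panel
votes = [judge(0.55) for judge in judges]            # all evaluate the same case
majority = max(set(votes), key=votes.count)
print(majority, votes.count(majority))
```

The appeal of the design is that no single judge’s bias dominates: an outlier threshold is simply outvoted by the rest of the panel.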
Artificial intelligence forces us to re-evaluate and reinterpret the right to a fair trial. The extent to which AI will take over our courtrooms depends entirely on a contemporary interpretation by the ECtHR. Can the Court strike the balance between modernisation and the protection of our rights? Moreover, to maximise transparency without leaving AI vulnerable, Union-wide standards will be required. A uniform decision-making process understood by all parties is the only way towards secure transparency. Can consensus be reached on such a highly debated aspect of the judiciary? Only time will tell…
Arkadi De Proft is a law graduate from KU Leuven. His master’s thesis, supervised by Frank Fleerackers, discussed the possibilities and challenges of introducing artificial intelligence in judicial systems and how AI must be trained to make its decisions.
Arkadi DE PROFT, "¡A.I. Caramba! – The reconciliation of AI judges with the ECHR", Leuven Blog for Public Law, 16 October 2020, https://www.leuvenpubliclaw.com/a-i-caramba-the-reconciliation-of-ai-judges-with-the-echr (accessed 23 April 2021)