How to contain the dangers of Artificial Intelligence


Legal scholarship cannot ignore the rapid evolution of Artificial Intelligence (AI). Algorithms can already be used to predict how a judge will rule. To prevent malpractice in this AI application, a transparent legal framework is needed. This blog post explores what such a framework should look like.

AI technology is evolving at lightning speed, including in the field of legal studies. Algorithms can now predict a judge’s ruling based on the language and outcomes of their previous judgments. Whilst this technology opens up many opportunities (e.g. reducing the duration and cost of legal proceedings), it could also lead to malpractice and abuses such as forum shopping, where parties choose the court that will hear their case based on a favourable prediction of the likely judgment. The development and use of such algorithms therefore need to take place within a transparent legal framework, which could help prevent such malpractices, for example by forbidding the use of AI to predict judgments or, as France has done, by forbidding the use of judges’ personal information in the creation of such predictive algorithms.
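For readers curious how such prediction works, a minimal sketch is shown below: past decisions are treated as text, and a classifier learns which wording correlates with which outcome. The data, labels, and library choice (Python with scikit-learn) are purely illustrative assumptions on my part, not a description of any deployed system.

```python
# A minimal, illustrative sketch of judgment prediction as text classification.
# All decisions and labels here are invented; a real system would train on a
# large corpus of published rulings. Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: excerpts of (hypothetical) past decisions by one judge,
# each labelled with the outcome for the claimant.
past_decisions = [
    "the claim is well founded and the damages are awarded in full",
    "the applicant has not substantiated the alleged breach; claim dismissed",
    "the court finds the contract valid and orders specific performance",
    "the appeal is inadmissible for lack of interest",
]
outcomes = ["granted", "dismissed", "granted", "dismissed"]

# TF-IDF turns each decision's language into a feature vector; logistic
# regression then learns which wording correlates with which outcome.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_decisions, outcomes)

# Predict the likely outcome of a new, pending case from its written pleadings.
new_case = "the applicant alleges a breach of contract and claims damages"
print(model.predict([new_case]))        # predicted outcome label
print(model.predict_proba([new_case]))  # probability per outcome
```

Note that training such a model on decisions linked to a named judge is precisely the practice that the French legislation discussed below targets.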

As set out in this post, such a legal framework requires a united, cross-border approach. The creation of safeguards should be led by the European Union (EU), as this will produce one uniform strategy rather than 27 different national action plans. However, we should not look only at a robust, strictly legal ‘hard law’ approach to developing safeguards: some soft law methods could also prove useful in this area.

In 2020, the European Parliament conducted an assessment of the need for a European framework on the use of Artificial Intelligence. The assessment discusses both the appropriate legal basis for EU lawmakers to use when developing such regulation and the benefits of an EU-wide approach over a different approach per Member State. It has since led to a European AI package, which includes a proposal for an AI Regulation.

Legal Basis

In the assessment, the European Parliament’s researchers suggest Article 114 TFEU as the legal basis for creating the needed legal framework. This article allows the EU to take action where the objective is the proper functioning of the internal market. If the 27 Member States each developed their own legal framework, the result would be a fragmented internal market and regulatory competition between Member States. Tech companies could engage in ‘cherry-picking’, choosing the most lenient legislation and avoiding countries with stricter rules and more thorough fundamental rights protection.

In addition to this new legal framework, other safeguards relevant to AI-related challenges already exist under EU law, for example the GDPR, which regulates the processing of personal data, and the Charter of Fundamental Rights of the European Union. The Charter contains fundamental rights such as the protection of personal data (Article 8) and the freedom to conduct a business (Article 16), which includes the freedom to exercise an economic or commercial activity. These two rights reveal a difficult balance: between the freedom to create what you want (e.g. through artificial intelligence) and the protection of your personal life against invasive technologies, which are themselves sometimes built on artificial intelligence.

An AI-specific legal framework would help the EU strike the right balance between these two fundamental rights in the development and use of AI. Without such a framework, Member States could offer different levels of protection for one or both of these rights, depending on how much value each Member State attaches to them. One Member State might, for instance, favour an approach that enhances the freedom to conduct a business and pays less attention to the protection of privacy, to the advantage of businesses. As mentioned earlier, this would allow companies to ‘cherry-pick’ and opt for the Member State where privacy protection is least stringent.

Why the EU?

The researchers’ main rationale for an EU approach is that the EU as a whole is a far more powerful player on the global market when it comes to enforcing rules than each country acting separately, which also benefits the protection of human rights (pp. 19-20). The EU as a whole acting against a big tech company has considerably more enforcement power than, for instance, Luxembourg alone.

Another reason is equality within the EU: fragmented regulation could disadvantage certain countries whose national rules are less attractive (European added value assessment, p. 18). An example can be found in the legislation France adopted in 2019, amending its code of administrative justice. That legislation prohibits the use of judges’ and magistrates’ personal information (article L10), information often used to predict a judge’s ruling in a given case. If France prohibits the use of such personal information while another Member State allows it, tech companies will be more likely to establish themselves in the latter, more lenient Member State.

Lastly, the EU itself would benefit from clear regulation because of the certainty it gives businesses: it would finally become clear what is and is not allowed in this currently uncertain area (pp. 18-19).

Other possibilities

When looking for possible safeguards, we should also consider ‘softer’ approaches than the traditional legal framework. For example, researchers at the University of Florida evaluated the fairness of certain algorithms. To enhance this fairness, they suggested four techniques covering both the creation of the algorithm itself and its outcomes, each applied at a specific stage of the process, from the algorithm’s development to its actual results.

Developers could implement these techniques, yet the procedure is neither binding nor enforceable. Creators are not obliged to run through a set procedure during the development and use of the algorithm; rather, they may apply one, several, or all four of the techniques, or none at all. Whilst this leaves a lot of freedom during development and use, it is, I think, a great way of encouraging compliance by tech companies. That compliance could result from peer pressure among tech companies, as the need for fairness in machine learning is widely recognised.
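By way of illustration only, a check at the outcome stage could look like the sketch below, which computes a simple demographic parity gap: the difference in favourable-outcome rates between groups. The metric, data, and interpretation are my own assumptions for the sake of the example and do not reproduce the Florida researchers’ four techniques.

```python
# Illustrative sketch of one outcome-stage fairness check: demographic parity.
# The predictions and group labels below are invented; this does not reproduce
# the four techniques referred to above.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the gap between the highest and lowest rates of favourable
    outcomes across groups (0.0 means perfectly equal rates), plus the rates."""
    favourable = defaultdict(int)
    total = defaultdict(int)
    for pred, group in zip(predictions, groups):
        total[group] += 1
        favourable[group] += int(pred == 1)
    rates = {g: favourable[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (1 = favourable outcome) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)                     # {'A': 0.6, 'B': 0.4}
print(f"parity gap: {gap:.2f}")  # a large gap flags a potential fairness problem
```

A developer could voluntarily run such a check after training and, if the gap is large, revisit the data or the model; this stage-by-stage, non-binding practice is exactly the kind of behaviour soft law can encourage.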

Conclusion

Considering the risks and challenges outlined above, AI should not be left unregulated; it should be subject to a legal framework that limits the risk of malpractice. France has already put part of such a framework in place. The European Parliament has conducted an assessment of possible EU action in this area, which makes clear that EU action would benefit both the enforcement of rules and equality between Member States. This has led to a European AI package that includes a proposal for an AI Regulation.

Hard law, however, is not the only approach that should be used when creating safeguards. Applying fairness-enhancing techniques during the development and use of AI, as the University of Florida research shows, can be a positive motivator leading to greater compliance.

Florence De Houwer is a Research Master student in Law at KU Leuven.


Florence DE HOUWER, "How to contain the dangers of Artificial Intelligence", Leuven Blog for Public Law, 24 November 2021, https://www.leuvenpubliclaw.com/how-to-contain-the-dangers-of-artificial-intelligence (accessed 27 November 2021)

