Technology creates many legal challenges and opportunities. Reflections on these have been brought together in a recent book. In this post, Anne Oloo considers the right to information on social media.
The last two decades have affirmed the central role that social media platforms play in our lives, not least in the dissemination and imparting of information. Yet this great power has so far been wielded in opaque ways, as platforms provide insufficient detail about the algorithms they employ. Although much has been written about the need to regulate online platforms, current proposals remain insufficiently connected to human rights such as the right to information (RTI).
Concerns about the workings of social media platforms raise pertinent questions, as recent events such as COVID-19 disinformation and threats to democracy have shown, highlighting the power of these platforms beyond the online realm. A first phenomenon many people have identified on social platforms is that of ‘filter bubbles’, in which people only see content that they ideologically agree with. Second, there is microtargeting, which allows political campaigns to target individuals based on the data that platforms hold. Lastly, there has been considerable outcry about disinformation and how social media platforms provide fertile ground for its quick spread.
These concerns, however, often focus on aspirational claims about what these platforms should (not) do, rather than on which legal norms are violated and which legal remedies are available. It is therefore guaranteed neither that platforms will act in accordance with democratic principles such as human rights, nor that proposed solutions to the problems associated with these platforms will be assessed on their compatibility with such principles. A human rights framework could help identify which rights are implicated and how to remedy harms caused by the use of algorithmic systems.
Right to information on social media
One right impacted on social media is the right to information (RTI), which includes the “freedom […] to seek, receive and impart information and ideas through any media and regardless of frontiers” (Article 19 Universal Declaration of Human Rights). RTI allows us to focus on the two uses of platforms: receiving and imparting information.
At the core of the problem is the need to filter the content that we see. Since thousands of posts could be shown at any given time, platforms use algorithms to determine the (in)visibility of posts. While content is tailored to match users’ perceived interests and the ranking of posts differs between sponsored ads and user-generated content, platforms provide few details about the algorithms they employ to these ends.
Filtering thus raises questions about RTI. One such concern is the impact on pluralism, which ensures diversity in media supply (internal pluralism) and in the control and ownership of media (external pluralism). Pluralism as a normative criterion arose out of the need to foster RTI in traditional (i.e. broadcast and print) media, and thus seems to provide a yardstick for assessing social media platforms’ interference with RTI.
What regulation is therefore desirable and feasible for social media platforms?
Yet the requirement of external pluralism, which ensures access to a wide variety of viewpoints, would not work for social media, as one can always get information elsewhere: most television and newspaper companies maintain an online media presence. In practice, however, users often do not actively seek out alternative sources of information, or are unaware that such alternatives exist. Secondly, requiring internal pluralism of online platforms would be somewhat redundant, since it is in a way already ensured: one can find very different content on these platforms. However, there is no guarantee that each individual feed will be pluralistic, given the extreme personalisation of content.
Many of the proposals to regulate platforms are encompassed in the concept of Algorithmic Accountability, understood as the “obligation to report, explain, or justify algorithmic decision-making as well as mitigate any negative social impacts or potential harms” (FAT/ML Principles for Accountable Algorithms). There are two aspects to Algorithmic Accountability. The first is the what, which includes principles such as Transparency (allowing users to gain insight into algorithmic processes), Loyalty (ensuring that platforms work in the interest of their users), Compliance with relevant laws (including human rights law), and Responsibility and liability (redress in case of algorithmic harm). The second aspect is the how, i.e. the methods to achieve accountability. These broadly encompass ex ante efforts to avoid the realisation of algorithmic harms, such as auditing algorithms before deploying them, as well as ex post measures providing redress in case of harm, such as oversight mechanisms.
Applying Algorithmic Accountability to online platforms, it becomes clear that platforms are often neither transparent with nor loyal to their users, as they microtarget them and make content (in)visible with no concrete explanation of how this is done. Algorithmic Accountability can make platforms broadly accountable by requiring greater transparency through, for instance, making the description of ranking algorithms available.
Beyond aspirational claims
RTI helps us see what we want to protect when it comes to the use of platforms, namely receiving and imparting information without interference from third parties (or the government). On its own, though, it does not tell us how to move beyond aspirational claims such as demanding more diversity of information (pluralism), and there is always a risk of focusing on one aspect (the reception of information) at the expense of another (the imparting of information). By providing more content (on both procedure and substance), Algorithmic Accountability thus helps show how this desirable goal can be achieved using general legal and ethical principles that can also be applied to all types of algorithmic harms.
Anne Oloo is a PhD researcher in the Law and Development Group-University of Antwerp. Her PhD research is on algorithmic human rights accountability and focuses on inclusive regulation of online global media platforms.
This post is partly based on the chapter ‘Algorithmic Accountability and the Right to Information: Towards Better Regulation of Social Media Platforms’ in the upcoming book ‘Technology and Society: The Evolution of the Legal Landscape’, Marie Bourguignon, Tom Hick, Sofie Royer & Ward Yperman (eds.).
Anne OLOO, "Responsibilising online platforms: Right to information on social media", Leuven Blog for Public Law, 19 October 2021, https://www.leuvenpubliclaw.com/responsibilising-online-platforms-right-to-information-on-social-media (accessed on 19 May 2022)