Opinion & analysis

Social media, artificial intelligence, and the challenges for democracy: How Brazil is preparing for local elections in 2024

By Juliana Câmara

In a year when half the world will head to the polls, protecting electoral integrity and democracy in the face of challenges like disinformation, polarisation, and political violence on social media is a crucial discussion. While this concern is not new, it is now intensified by the impact of artificial intelligence (AI) on information ecosystems. Most countries still lack regulations for AI and social media platforms.

In Brazil, the attacks on the National Congress, the Supreme Federal Court, and the Presidential Palace on January 8, 2023, highlighted the dangers that disinformation and online violence pose to democracy. In response, authorities attempted to pass social media platform regulation in Congress, but the proposal has stalled.

In October 2024, Brazil will hold municipal legislative and executive elections. In preparation, the Superior Electoral Court issued a resolution in February on the use of AI and the responsibility of platforms during the electoral process. The official campaign season begins in August.

To discuss the Court's decision, the potential for election violence against historically underrepresented groups, and what this moment means for democracy, Luminate's Juliana Câmara spoke with Heloisa Massaro, Director of Research and Operations at InternetLab. A Luminate partner in Brazil, this independent research center fosters academic debate and generates knowledge in the fields of law and technology, particularly focusing on internet-related issues. Read the interview below:

In the current Brazilian context, how can disinformation, hate speech, and information manipulation affect municipal elections?

This question is crucial for understanding how election-related concerns and risks evolve over time and where we should focus our attention. As new election cycles unfold, it is important to recognise that municipal elections have very different dynamics compared to state and federal elections.

We are talking about elections in thousands of municipalities, all happening simultaneously, each with its own unique dynamics. While major capitals in the country might resemble state and federal elections, smaller towns operate very differently. Therefore, two key points must be highlighted:

First, it is quite common for smaller municipalities to experience what we call "news deserts": areas without significant independent journalism to verify content and distribute trustworthy information.

And second, political violence, a long-standing issue in Brazil, takes on new forms as political communication shifts to digital platforms. In municipal elections, this factor is particularly relevant because candidates and voters interact daily, making relationships very localised.

For example, women are more frequently attacked during election periods. Not only women, but candidates from historically underrepresented groups face increased hostility. Thus, at the municipal level, the already porous boundary between online and offline becomes even more blurred and permeable, raising the risk of physical violence against individuals.

The Superior Electoral Court recently issued a resolution on the use of artificial intelligence and the responsibility of platforms in the upcoming October elections. What are the main points, and how do you view this resolution?

Issuing resolutions is a common practice of the Superior Electoral Court (TSE, its Portuguese acronym). With administrative and regulatory authority, the TSE updates its electoral resolutions as elections approach in order to implement election law.

It is important to highlight some specific points of the resolution on electoral propaganda. Using its regulatory authority over internet-related electoral propaganda and staying attuned to technological changes, the TSE introduced some new provisions related to the use of AI.

First, content created with AI tools must be clearly labeled and identified. This does not mean that the use of these tools is prohibited, but they must be marked to prevent public confusion. Campaigns can use chatbots, but the resolution prohibits them from simulating real conversations with candidates or humans. Additionally, the resolution bans the use of deepfakes—videos, voices, or photos artificially created to depict events that never occurred or statements that were never made.

Another point involves new data protection rules. Over the last two resolutions, the TSE has been incorporating data protection regulations, establishing a dialogue between this legislation and electoral law. This year, the court advanced these regulations, introducing rules such as obligations for recording data processing activities and regulating the use of micro-targeting tools for voter profiling.

Finally, the resolution established new rules concerning disinformation, including a ban on using AI to create disinformation and general guidelines outlining the responsibilities of platforms. According to the resolution, platforms must have terms of use and content moderation policies, especially during the electoral period. They are also required to use their best efforts to ensure electoral integrity and must remove content that is knowingly false or severely out of context, under penalty of being held accountable—this last point being particularly contentious.

How is the resolution perceived by civil society in Brazil, and what are the international precedents?

The dialogue between electoral law and digital law still has much room to progress on the international stage. The TSE is pioneering in entering this discussion, having engaged with platforms over several election cycles and established collaborations. Meanwhile, we observe experiences from other countries where electoral legislation still falls short in addressing numerous issues surrounding online propaganda.

The resolution comes at a time marked by two major international movements. The first is the "super-elections" of 2024, encompassing approximately 50% of the world's population and including major democracies such as the United States and the European Parliament elections. This fuels international debate on electoral integrity and platform accountability, expanding concerns about platforms' actions and the measures needed to protect elections.

The second movement, which has been ongoing for some time, aims to enact legislation to regulate platforms. A primary example is the Digital Services Act (DSA), which came into effect this year in the EU. This legislation introduces a series of rules, responsibilities, obligations, and parameters for platforms, especially the largest ones.

What are the limitations on each side: both the resolution and the platforms' response? Are there points of concern in each of them?

There are positive aspects, particularly regarding the advancement of topics such as AI, data protection, and the implementation of transparency rules for platforms. A critical rule, stemming from a longstanding demand from civil society, requires platforms to maintain repositories for contracted ads.

On the other hand, we saw the addition of Article 9E, which directly conflicts with the intermediary liability rule provided for in the Brazilian Civil Rights Framework for the Internet (Marco Civil da Internet, in Portuguese) and in electoral legislation. Article 9E establishes joint civil and administrative liability for platforms for various types of content, including antidemocratic acts and content that is knowingly false or severely out of context and poses a threat to electoral integrity. This rule was not in the draft resolution but was added later, causing considerable surprise and concern because of its conflict with existing legislation.

According to the intermediary liability regime, provided for in Article 19 of the Marco Civil da Internet and in electoral legislation, platforms are only obligated or can be held liable for third-party content if they do not remove it after a judicial decision. This does not mean that platforms do not remove harmful content; they have their own content moderation policies to ensure the integrity and security of their digital environments. This arrangement is crucial for guaranteeing freedom of expression.

However, when responsibility shifts to platforms, which can be penalised for third-party content, an economic incentive is created for them to remove any potentially risky content. Platforms then end up deciding what can or cannot be said based on an economic analysis.

This change in the liability regime is concerning for freedom of expression, as it fosters excessive content removal. This applies especially to false information, where the distinction between what should and should not be removed is complicated, creating a large gray area that can result in the excessive removal of legitimate content that is part of the democratic process and political discussion.

At the same time, this type of arrangement creates incentives for platforms to direct resources towards compliance, rather than investing in improving content moderation policies and structures.

When we think about regulatory structures, the ideal is to create incentives for better content moderation systems, with rules that make the digital environment safer and more trustworthy. When we only create incentives for compliance with a regime of liability for specific content, it is unlikely that we will foster spaces where people feel secure to exercise their freedom of expression.

Given the assaults on the Brazilian democratic system culminating on January 8th, can we say that Brazilian democracy is entering this year's elections more protected?

Predicting potential risks is very challenging. However, I believe that this year's elections will be different and particularly significant, especially considering the upcoming presidential electoral cycle in two years: local dynamics will shape the national elections, and efforts to ensure democratic and electoral integrity are ongoing. It is therefore difficult to assert that we are in a more protected situation. We continue to navigate a scenario of constant struggle for the construction of democracy.

Despite the social and political mobilisation around the issue, Brazil has not yet succeeded in advancing the regulation of platform and AI use. Is this debate mature enough to become legislation? Why hasn't it progressed?

The debates on AI and platform regulation are distinct. AI regulation involved a commission of jurists, and bills are currently under review in the Federal Senate, with a wide range of economic actors engaged in the process. On the other hand, there is a movement focused on platform regulation that will encompass issues such as content moderation and transparency. This is a complex issue, mainly because of the sensitivity of freedom of expression: it would be a new, unprecedented structure. Designing regulatory models and incentives suited to the local context is not easy, and it involves various interests, understandings, and positions in determining the regulatory framework.

The search for a structure that makes sense in the Brazilian context has recently made significant progress. However, this is a process that depends on political dynamics and the challenges of discussing and crafting alternatives for such a complex, non-consensual topic. Developing the optimal solution takes time. When the bill reached the Federal Senate in 2020, it moved very quickly, leaving little time for the topic to mature or for comprehensive deliberation and refinement. In Europe, the law was only recently passed, so it is not as if Brazil is lagging behind in the field. Some countries are only now initiating discussions, so the pace of the debate is natural.