
TSE, AI, AND ELECTORAL PROPAGANDA: ADVANCES AND REGULATORY OBSTACLES

by Gustavo Borges | Mar 11, 2024 | Publications


On March 1, the Superior Electoral Court (TSE) published 12 electoral resolutions amending Resolution 23.610/2019, reported by Vice-President Justice Cármen Lúcia, establishing the guidelines for the 2024 municipal elections.

These changes were made, as provided for in the Elections Law (Law 9.504/97), so that the TSE, “according to the scenario and technological tools existing at each electoral moment,” can formulate and disseminate “rules of good practice relating to electoral campaigns” (Art. 57-J).

Thus, the TSE, assessing the current technological context, through Resolution 23.732, presented some rules on the use of Artificial Intelligence (AI) in elections, with a focus on electoral propaganda.

(I) Main innovations of the TSE Resolution on AI

The fundamental novelties related to AI include:

(i) a requirement to disclose the use of AI in electoral advertising (art. 9-B);
(ii) a ban on chatbots, avatars, and synthetic content to mediate contact with voters (art. 9-B, § 3);
(iii) a prohibition of deepfakes (art. 9-C); and
(iv) joint civil and administrative liability for platforms that do not immediately remove content containing disinformation, hate speech, or anti-democratic, racist, homophobic, Nazi, or fascist ideology (art. 9-E).

(II) Advances in the TSE Resolution

Significant progress has been made with the TSE resolution.

The first essential advance of the Resolution is the requirement that AI-generated content in electoral advertising be accompanied by an indication that the material has been fabricated or manipulated, along with details of the technology used (art. 9-B).

This provision makes electoral propaganda more transparent, since voters know in advance which technology was used.

In addition, another improvement for the current context is the ban on the use of “chatbots, avatars and synthetic content as an artifice to mediate campaign communication with natural persons” and any simulation of interlocution with the candidate or another real person (art. 9-B, § 3).

This provision is designed to prevent the manipulation of individuals through the misleading and erroneous use of audio, images, videos, and texts, thereby safeguarding the electoral process. Similarly, the ban on deepfakes prevents the distorted use of sounds and images that would mislead voters.

The prohibition of devices and systems that can produce distorted and massively spread content in bad faith strengthens the democratic process.

Below, we highlight some of the challenges and obstacles created by the resolution, which already point to the urgency of discussing and reconsidering its implementation during the election period.

(III) Obstacles to the TSE Resolution

(III.1) More severe civil and administrative liability regime

The Resolution implements a worsening of the liability regime for platforms, which are now held jointly and severally liable if they do not “immediately” remove content containing disinformation, hate speech, anti-democratic, racist, homophobic, Nazi, and fascist ideology (Art. 9-E).

This innovation represents a significant transformation in the way platforms are held accountable under Brazilian law, which significantly reconfigures their role.

However, this change raises concerns about its incompatibility with art. 19 of the Marco Civil da Internet (MCI), a federal law that already regulates the issue. Article 19 stipulates that the final definition of the legal nature of content is a matter for the Judiciary, and providers can only be held civilly liable if they fail to comply with a court order to remove content.

The new regulation transfers to the platforms the responsibility for determining whether suspicious content is lawful during the elections.

The current lack of legal provisions on the legal contours of this content, i.e., on how platforms should identify the content to be removed, represents a risk to freedom of expression online and significant challenges for the digital ecosystem.

Therefore, it raises crucial concerns about its conformity with the established legal system and merits in-depth debate.

(III.2) Possibility of excessive content removal and prior censorship

The imposition of this new liability regime for platforms in the electoral context could also result in excessive content removal, and even prior censorship, due to the fear of direct accountability.

In this new aggravated context, platforms tend to remove questionable content excessively and precautionarily, fearing penalties. This can lead to risks of violating freedom of expression and potential censorship.

(III.3) Open clauses and the creation of technical-operational obstacles: limits to the TSE’s atypical normative power

The resolution provides for the joint civil and administrative liability of platforms “when they do not promote the immediate unavailability of content and accounts” in the listed “risk cases” (Art. 9-E).

Open-ended clauses such as “immediate removal of content” (Art. 9-B, § 4), “immediate unavailability of content and accounts,” and “cases of risk” (Art. 9-E) raise questions about their practical operability: they create technical-operational obstacles because they provide no interpretive guidance for proper content moderation.

In addition, there are other undetermined legal concepts of a subjective nature, such as the terms: “anti-democratic content” (Art. 9-E, I), “notoriously untrue or seriously decontextualized facts that undermine the integrity of the electoral process” (Art. 9-E, II); “hateful behavior or speech, including promotion of racism, homophobia, Nazi, fascist or hateful ideologies” (Art. 9-E, IV).

These subjective concepts were included in the resolution without a prior law defining them. They are difficult to apply at scale, creating practical technical-operational obstacles: without objective criteria, platforms cannot implement the rapid removal of thousands or millions of pieces of content.

The wording does not offer clear and objective guidelines on how platforms should identify the content mentioned, creating doubts about how they should act independently.

The absence of federal laws regulating issues such as disinformation and hate speech, exemplified by Bill 2630 (the “Fake News Bill”), which is still pending before the legislature, highlights legislative inertia on these urgent issues. Since regulating these matters is the legislature’s responsibility, that inertia does not authorize regulation through the TSE’s atypical normative power.

In these terms, the legislature’s regulatory vacuum cannot be filled by the Electoral Court since its regulatory power is limited by the Constitution, federal laws, and electoral legislation. A more in-depth debate in the legislature is needed to clearly establish the concepts by law. As an act subordinate to the law, the resolution can neither expand its content nor restrict it.

If the limits of the rules are not determined, there is a concrete risk of prior censorship on the part of the platforms through excessive content moderation to avoid penalties, which has significant implications for digital democracy.

Therefore, these open clauses invite questioning and deserve debate. They make the responsibility of application providers ambiguous, and they may violate freedom of expression and incur unconstitutionalities and illegalities by infringing the Federal Constitution, the MCI, and the Electoral Code itself.

Conclusion

Civil society’s role in sparking debates on TSE regulations is to provide a variety of perspectives and opinions that enrich the democratic process by raising concerns and proposing improvements.

When it comes to electoral propaganda on the internet, art. 57-D of the Elections Law (Law 9.504/97) already states that “the expression of thought is free, with anonymity forbidden during the electoral campaign, through the World Wide Web.” Thus, given the constitutional role of the Electoral Justice, which is to reconcile legal provisions and determine their interpretation to guarantee the normality and legitimacy of elections, regulation cannot result in a potential restriction of freedom of expression.

Thus, it is undeniable that there have been significant advances in the TSE resolution. However, some of the points highlighted deserve deep reflection to conform them to the Constitution and to federal laws; otherwise, the open clauses and indeterminate legal concepts will create technical and operational obstacles to proper content moderation, generating legal uncertainty and the possibility of prior censorship of all actors participating in the electoral process.
