Deepfake videos and synthetic content targeting elections and public figures prompt calls for urgent regulation ahead of 2026 polls
06/09/2025
The growing use of artificial intelligence to create highly realistic fake content has set off alarm bells within the Brazilian government, especially as the country approaches the 2026 general elections. Officials are particularly concerned about the manipulation of real data to produce convincing fake videos—spread widely on social media—that can mislead the public or facilitate digital scams. Authorities now frame the issue not just as an electoral or regulatory matter but as one of public safety and national defense.
Three weeks ago, Valor reported on a case in Argentina where an AI-generated “deepfake” video may have influenced the results of Buenos Aires’ city council elections on May 18. The video, widely circulated the night before the vote, appeared to show former President Mauricio Macri endorsing Javier Milei’s party, even though Mr. Macri was in fact a political rival. The video used AI to reproduce Mr. Macri’s appearance and voice with striking accuracy.
The capital is a traditional stronghold of Argentina’s right wing. But when results came in, Mr. Milei’s party led with 30% of the vote, followed closely by the left-leaning Peronists with 27%. Mr. Macri’s party finished third, with just 16%—a surprising defeat on its home turf.
In Brazil, concerns have intensified after similar cases surfaced involving President Lula and Finance Minister Fernando Haddad. In one instance, a fake video circulated showing Mr. Lula supposedly announcing that the government would grant Bolsa Família cash-transfer benefits to women caring for reborn dolls—lifelike replicas of newborn babies. The video used AI-driven lip-syncing to manipulate a real interview in which Mr. Lula was actually talking about soccer.
In another case, a fake video showed Mr. Haddad saying the government would tax pregnant women and pet dogs. The manipulated footage was based on a real speech where he had discussed taxing online betting platforms. These videos gain traction precisely because they combine authentic content with manipulated audio or visual elements, lending them false credibility.
Attorney General Jorge Messias, who leads the government’s effort to hold digital platforms accountable, told Valor that neither Brazil nor other countries are prepared for the chaos caused by AI, which he described as a “dystopian and unregulated technology.” He noted that electoral technology has evolved before, pointing to the mass messaging systems used illegally in 2018 and since regulated, but said AI will play a far more prominent role in the 2026 elections.
In the meantime, AI is also being used in digital scams, many of which target vulnerable groups such as children, teenagers, and the elderly. A notable example was the so-called “Pix crisis,” when a false rumor spread that the government would tax the country’s instant payment system. Scammers exploited the confusion, charging bogus fees on money transfers.
According to Mr. Messias, the lack of regulation has turned the debate over AI and digital platforms into an issue of public safety. “Street crime has moved onto the platforms,” he said. “These are the digital pickpockets.”
While the Supreme Court is currently reviewing platform liability, the issue is also being addressed in Congress. The Attorney General’s Office (AGU), in partnership with the President’s Chief of Staff and the Presidential Communication Secretariat (SECOM), is drafting a bill that will be submitted for President Lula’s approval before being sent to Congress.
In the absence of stronger legislation, the AGU, through its National Office for the Defense of Democracy (PNDD), has been working to remove or flag false and criminal content on social media platforms. In December, the office secured the removal of 12 fake videos about Mr. Lula’s health from YouTube. At the time, the president was recovering from brain surgery following a fall, but some videos falsely claimed he had died. YouTube removed seven of the 12 flagged videos within 24 hours via an out-of-court process.
YouTube, which sees around 20 million videos uploaded daily, stated that it uses AI to identify potentially harmful or manipulated content, which is then reviewed by humans. Users can also report misleading videos, which may be removed or labeled as disinformation. The company emphasized that the use of AI alone does not necessarily constitute a policy violation and that each case is reviewed individually under its guidelines and user rights.
Between January 2023 and May 2025, the PNDD submitted 39 out-of-court takedown requests. Of those, 23 were granted in full, two in part, six were rejected, and eight are still pending. In court, five requests were granted, two were rejected, and five await a ruling.
Over time, the PNDD has found that applying a “disinformation” label is often more effective than removing content entirely. Labeling allows users to access the content while being explicitly warned about its misleading nature—a solution the platforms may find less invasive.
Meta, which owns Facebook and Instagram, did not comment when contacted.
By Andrea Jubé — Brasília
Source: Valor International
https://valorinternational.globo.com/