The year 2024 constitutes an important test for politics in the era of artificial intelligence. Elections are being held in countries representing half the world's population, while modern technology contributes to the massive spread of misinformation.
An Agence France-Presse report described the year 2024 as “crucial” for democracy, as elections will be held in 60 countries, including India, South Africa, Britain, Indonesia, and the United States, in addition to the European Union.
The first test of how to withstand the storm of AI-driven disinformation has already taken place: Taiwanese voters elected Lai Ching-te as president last week despite a widespread disinformation campaign against him, for which experts point the finger at China.
Beijing considers new President Lai a dangerous separatist because of his repeated emphasis on Taiwan's independence, and the TikTok platform was filled with conspiracy theories and insulting expressions about him in the period leading up to the elections.
The fact-checking team at Agence France-Presse found several video recordings of this type that originated on Douyin, the Chinese version of the TikTok application.
But it remains to be seen how events will unfold in other countries, where generative AI threatens to exacerbate already deep polarization and distrust of traditional media.
Fake photos of Donald Trump during his arrest last year, and others of Joe Biden announcing a general mobilization to support Ukraine, revealed how advanced this technology has become.
The few telltale details that once made it possible to detect fakes, such as fingers, which artificial intelligence finds difficult to render convincingly, are rapidly disappearing. This leaves monitoring mechanisms less able to perform their task, and the risks increase accordingly.
Disinformation: the foremost threat
The World Economic Forum ranked disinformation as the biggest threat over the next two years, warning that undermining the legitimacy of elections could lead to internal conflicts, terrorist acts and even “state collapse” in the worst cases.
Groups linked especially to Russia, China, and Iran are resorting to AI-based disinformation, seeking to "shape and obstruct" elections in adversary countries, according to the analysis group Recorded Future, and this will not be the first time.
A similar effort, "Operation Doppelganger," launched in early 2022, reproduced the accounts of media outlets and public institutions to disseminate positions supportive of Russia, especially with regard to Ukraine.
Conversely, repressive regimes can also invoke the threat of disinformation to justify enhanced censorship and other human rights violations, according to the World Economic Forum.
Countries hope to respond with regulatory legislation, but legislative work moves very slowly compared with the rapid development of artificial intelligence.
The Digital India Act and the European Union's Digital Services Act will require platforms to target misinformation and delete any illegal content, but experts doubt they can be enforced effectively.
China and the European Union are working on comprehensive AI laws, but these will take time: the European Union's law is unlikely to be completed before 2026, according to the report.
In October, US President Joe Biden issued an executive order on safety standards for artificial intelligence. However, some point out that these standards cannot be enforced, while some lawmakers fear that over-regulating the sector may harm it and play into the hands of rival powers.
Technology companies, facing pressure to act, have launched their own initiatives. Meta says advertisers will have to disclose whether their content used generative artificial intelligence, while Microsoft has proposed a tool that enables political candidates to verify the authenticity of their content using a digital watermark.
At the same time, platforms themselves are increasingly relying on artificial intelligence for content verification.