How Social Media Could Sabotage 2024 Elections Around the World

This is a global election year: nearly 2 billion people will vote in more than 50 elections across the world. As technology becomes more influential across domains, the 2024 elections will be shaped by communications technology, such as social media and messaging apps, and by artificial intelligence (AI). The consequences of unregulated technology use during these elections could be severe, further blurring the line between truth and lies and eroding confidence in democratic institutions.

Online hate speech, misinformation and lies are not new, but the ‘misinformation pandemic’ is now worsening at scale, thanks to AI and AI-driven bots, or automated social media accounts. Large language models have made it easier to run convincing propaganda campaigns through bots, and some researchers worry that bots will soon become even more persuasive and deceptive. A new study in PNAS Nexus argues that generative AI will increasingly be used by disinformation campaigns, trolls and other “bad actors” to create election lies in 2024.

Major world powers have expressed ‘deep concern’ about foreign information manipulation and other actions aimed at undermining democracies and human rights globally, the UK, the United States and Canada said in a joint statement on Friday (February 16).

"The time is now for a collective approach to the  foreign information manipulation threat that builds a coalition of like-minded countries committed to strengthening resilience and response to information manipulation," said the statement released by the British government, urging all the like-minded nations to work together to identify and counter this threat. "Securing the integrity of the global information ecosystem is central to popular confidence in governance institutions and processes, trust in elected leaders, and the preservation of democracy,” the statement said.

The Role of AI in Information Manipulation

Studies of disinformation in previous elections have shown how bots at scale can spread disinformation and propaganda across social media and the wider information landscape, manipulating public trust and the popular mandate. Bots are programs that repeat messages created by humans or other programs. Now, however, bots can also use large language models (LLMs) to generate their own human-sounding text, which makes them more dangerous and harder to detect. "It's not just generative AI, it's generative AI plus bots," says Kathleen Carley, a computational social scientist at Carnegie Mellon University.

LLMs can also help programmers write software faster and more easily, which means more bots can be created with less effort. Some bots can write long and realistic comments using generative AI, says Yilun Du, a Ph.D. student at MIT. Unlike images or videos, text is very difficult to identify as AI-generated. "We lack tools that can reliably spot LLM-generated texts," says Zeve Sanderson of New York University's Center for Social Media and Politics.
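Most detection research so far leans on statistical fingerprints of machine-written text. Below is a minimal sketch of one such heuristic, perplexity scoring with an open language model; the choice of GPT-2 and the idea of thresholding are illustrative assumptions, and light paraphrasing or newer models defeat this kind of check, which is exactly the gap Sanderson describes.

    # Minimal sketch of perplexity-based screening for AI-generated text.
    # Low perplexity (the model finds the text unsurprising) only *hints*
    # at machine generation; no reliable universal threshold exists.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # Passing labels=ids makes the model return the mean
            # cross-entropy over tokens; exponentiating gives perplexity.
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    print(round(perplexity("Make your voice heard: vote on election day."), 1))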

Political candidates use AI to reach more voters, create more messages and speak more languages. For example, India's Prime Minister Narendra Modi uses AI tools to speak to voters in both Hindi and Tamil. But AI can also be used for mischief, such as making memes, songs and deepfakes of politicians.

AI can also affect how voters get information. Online chatbots powered by generative AI can provide helpful or harmful information, depending on the source and the intention. Some chatbots can mislead voters about voting sites, candidates’ views and reliable sources. This can increase election misinformation, disinformation and conspiracy theories.

AI can also make it easier and cheaper to create and spread deepfakes: fabricated images, audio and video. Anyone with basic coding skills and the right software can make a deepfake in minutes. Deepfakes can damage politicians' reputations and mental health, including through fabricated pornographic imagery. This harms democracy and discourages people from running for office.

Deepfakes can also worsen racism and prejudice in society. They can target or misrepresent groups or individuals based on their identity or beliefs. This can increase social conflicts and challenges for democracy defenders, such as civil society, peacebuilders, technology companies and governments.

Social Media Shapes Political Campaigns in Southeast Asia

Social media is a key tool for political campaigns in Southeast Asia. Candidates and parties use platforms like Facebook, Twitter, and TikTok to connect with voters, spread their message, and mobilize support. Here, we look at how social media has influenced recent elections in the region and what lessons it holds for future ones.

In Cambodia, Prime Minister Hun Sen, who has ruled since 1985, has used social media to reach the public. He deleted his Facebook account with more than 14 million followers (many fake) after Meta threatened to suspend it for inciting violence. He then switched to Telegram and TikTok, and also promoted a TV show about his life on YouTube.

In Thailand, the Move Forward Party (MFP) won 151 of 500 seats, thanks in part to the social media appeal of its leader, Pita Limjaroenrat. He opposed former army chief Prayut Chan-o-cha, who had ruled since a coup in 2014, and promised to end military interference in politics. Pita has 2.6 million followers on Instagram, where he posts friendly photos of his family. However, he was unable to form a government due to political deadlock.

In Malaysia, parties have used social media to boost their narratives. Last year, the conservative Parti Islam se-Malaysia (PAS) won the most seats, aided by a TikTok campaign that critics say trafficked in hate speech. PAS President Abdul Hadi Awang is also on Facebook, but the party focused on TikTok, which has more than 14.4 million users in Malaysia. The party plans to use the same strategy for the state elections on August 12, hoping to gain more support from Muslim Malays.

Myanmar’s National Unity Government (NUG) has used social media extensively to communicate with the public and to raise awareness of the country's ongoing civil war. The NUG runs official accounts on platforms such as Facebook and Twitter, and also shares interviews and other news on its YouTube channels. Some have praised the NUG's use of social media for reaching a wide audience and giving a platform to the voices of the Myanmar people since the 2021 coup. Others, however, have criticized it as too focused on propaganda and for allegedly spreading misinformation.

Southeast Asian politicians and parties have learned to use social media effectively to present themselves as relatable, to disseminate their messages, to build their brand, and to mobilize their supporters. They have also learned to target their messages at specific segments of voters, based on their preferences and interests. This can increase their political appeal, but it can also create online echo chambers and polarize society, as seen in the elections in Indonesia in 2019 and Malaysia in 2022.

Social media also poses challenges for political integrity and democracy. It makes it easier for politicians and political groups to spread false and malicious information against their rivals, which can sway public opinion and affect the election results. It also opens the door for foreign interference, which can undermine the democratic process.

To address these challenges, three actions are needed. First, the public needs to be educated about the dangers of misinformation and disinformation on social media, and how to verify the information they see online. They should only trust information from credible sources. Second, social media companies need to be more active in removing false and harmful content from their platforms, and to make it easier for users to report such content. Third, social media companies need to be more transparent about their moderation policies, so that users can trust them and hold them accountable.

Big Tech Scales Back Protections

In this global election year, many people around the world are worried about the spread of disinformation on social media platforms, according to a UNESCO survey. However, the efforts of Meta, YouTube and X (formerly Twitter) to curb harmful content have been inconsistent and insufficient, a report by the advocacy group Free Press found. These platforms have reduced or restructured the teams that monitor and remove dangerous or false information, and have introduced new features, such as one-way broadcasts, that are hard to oversee.

Free Press's senior counsel, Nora Benavidez, warned that these platforms have "little bandwidth, very little accountability in writing and billions of people around the world turning to them for information" - a risky situation for democracy.

Meanwhile, newer platforms, such as TikTok, are expected to play a bigger role in shaping political discourse. Substack, a newsletter service that refused to ban Nazi symbols and extremist language from its platform, declared that it wants the 2024 elections to be "the Substack Election". Politicians are also livestreaming their events on Twitch, which is set to host a virtual debate between A.I. versions of President Biden and former President Trump.

Meta, the owner of Facebook, Instagram and WhatsApp, claimed in a November 2023 blog post that it was "in a strong position to protect the integrity of next year's elections on our platforms". However, in December 2023 its own oversight board criticized its use of automated tools and its handling of two videos related to the Israel-Hamas conflict.

YouTube said that its "elections-focused teams have been working nonstop to make sure we have the right policies and systems in place". However, the platform also announced that it would stop removing false claims about voter fraud. (YouTube said it wanted to allow voters to hear all sides of a debate, but added that "this isn't a free pass to spread harmful misinformation or promote hateful rhetoric".)

X, which was acquired by the billionaire Elon Musk in late 2022, saw a surge of toxic content on its platform. Alexandra Popken, who was in charge of trust and safety operations for X, quit her job a few months later. She said that many social media companies rely too much on unreliable A.I. tools for content moderation, and leave a few human workers to deal with the constant crises. She joined WebPurify, a content moderation company, afterwards.

She said that "election integrity is such a behemoth effort that you really need a proactive strategy, a lot of people and brains and war rooms".

How Foreign Information Manipulation Threatens Democracy

Information manipulation is a set of tactics involving the collection and dissemination of information in order to influence or disrupt democratic decision-making. Foreign governments use information manipulation as a tool of statecraft and geopolitics, aiming not just to alter the result of elections but to delegitimize the entire electoral process by sowing doubt, uncertainty and mistrust.

Information manipulation can take various forms, such as:

  • Disinformation: the deliberate creation and dissemination of false or misleading information with the intent to deceive or harm.
  • Misinformation: the inadvertent or unintentional sharing of false or misleading information without the intent to deceive or harm.
  • Malinformation: the deliberate disclosure of private or sensitive information with the intent to harm or exploit.

Information manipulation can also involve different actors, such as:

  • State actors: foreign governments or their proxies that use information manipulation to advance their interests, undermine their adversaries or shape public perception of their actions.
  • Non-state actors: individuals, groups or organizations that use information manipulation for ideological, political, economic or social reasons, such as extremists, activists, hackers or trolls.
  • Domestic actors: citizens, politicians or media outlets that use information manipulation for personal, partisan or nationalistic reasons, such as supporters, opponents or influencers.

Information manipulation can also employ different platforms, such as:

  • Social media: online platforms that enable users to create and share content, such as Facebook, Twitter, YouTube or TikTok.
  • Traditional media: offline or online platforms that provide news and information, such as newspapers, television, radio or websites.
  • Messaging apps: online platforms that enable users to communicate privately or in groups, such as WhatsApp, Telegram or Signal.

Information manipulation can pose serious challenges to democracy, such as:

  • Eroding trust: information manipulation can undermine the credibility and legitimacy of democratic institutions, processes and actors, such as election management bodies, political parties or candidates.
  • Polarizing society: information manipulation can exacerbate existing divisions and conflicts within and between communities, such as ethnic, religious or ideological groups.
  • Manipulating behavior: information manipulation can influence the opinions, attitudes and actions of voters, such as their participation, preferences or choices.

To combat information manipulation, democratic actors need to adopt a comprehensive and coordinated approach that involves:

  • Identifying: detecting and analyzing information manipulation campaigns, including their sources, methods, targets and impacts (a toy sketch of one detection signal follows this list).
  • Responding: countering and mitigating campaigns once found, including their narratives, messages, reach and effects.
  • Building resilience: preventing and reducing future campaigns by closing off their opportunities, incentives, vulnerabilities and risks.
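
To make the identifying step concrete, here is a toy sketch, not any EU or platform system, of one signal analysts commonly look for: many accounts posting near-identical text. The sample posts and the five-account threshold are illustrative assumptions.

    # Toy sketch: flag possible coordinated posting by grouping near-identical
    # texts across accounts. Real investigations combine many more signals.
    import re
    from collections import defaultdict

    def normalize(text: str) -> str:
        # Strip URLs, punctuation and case so trivial edits don't hide copies.
        text = re.sub(r"https?://\S+", "", text.lower())
        return re.sub(r"[^a-z0-9 ]+", " ", text).strip()

    def flag_coordinated(posts, min_accounts=5):
        groups = defaultdict(set)  # normalized text -> accounts that posted it
        for account, text in posts:
            groups[normalize(text)].add(account)
        return {t: a for t, a in groups.items() if len(a) >= min_accounts}

    posts = [(f"acct{i}", "The vote is RIGGED! See https://example.com/proof")
             for i in range(8)]
    print(flag_coordinated(posts))  # one cluster shared by eight accounts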

The European Union (EU) has been at the forefront of addressing information manipulation, especially in the context of elections. The EU has developed a range of initiatives and tools, such as:

  • The EU Action Plan against Disinformation: a strategic framework that outlines the EU’s objectives, principles and measures to tackle information manipulation, such as enhancing capabilities, strengthening cooperation and raising awareness.
  • The EU Code of Practice on Disinformation: a self-regulatory instrument that sets out commitments and obligations for online platforms, advertisers and industry associations to combat information manipulation, such as improving transparency, accountability and oversight.
  • The EU Rapid Alert System: a network of contact points that facilitates the exchange of information and best practices among EU institutions and member states to respond to information manipulation, such as alerting, monitoring and assessing.

The EU has also supported various projects and partners that contribute to the fight against information manipulation, such as:

  • The EUvsDisinfo: a website that exposes and debunks information manipulation campaigns, especially those originating from Russia, such as by providing fact-checks, analyses and reports.
  • The EU DisinfoLab: a non-governmental organization that investigates and exposes information manipulation campaigns, especially those affecting the EU and its member states, such as by conducting research, advocacy and training.
  • The European Digital Media Observatory: a platform that connects and supports researchers, fact-checkers, media outlets and civil society organizations that work on information manipulation, especially in the EU and its neighborhood, such as by providing data, tools and resources.

Information manipulation is a complex and evolving phenomenon that requires constant vigilance and adaptation. The EU has shown leadership and commitment in addressing this challenge, but it cannot do it alone. It needs the collaboration and cooperation of all democratic actors, both within and outside the EU, to defend and promote the values and principles of democracy.

How to Address Disinformation and Political Propaganda 

Disinformation and political propaganda are not new phenomena, but they have become more pervasive and impactful in the digital age. With the rise of social media platforms, online news sources, and mobile devices, people are exposed to a vast amount of information that may be inaccurate, misleading, or harmful. Disinformation and political propaganda can undermine democracy, erode public trust, and fuel polarization and conflict.

How can we combat disinformation and political propaganda in new media, while protecting free speech and promoting media literacy? Here are some possible solutions.

Fight misinformation with information

One of the most effective ways to counter disinformation and political propaganda is to provide accurate, reliable, and timely information to the public. This requires a strong and independent journalism sector that adheres to professional standards and ethics, and that can hold power to account. Journalism can also help expose and debunk false or harmful narratives, and provide context and analysis to complex issues.

However, journalism alone is not enough. The public also needs to be equipped with the skills and tools to critically evaluate the information they encounter online, and to distinguish between facts and opinions, evidence and speculation, and sources and platforms. This is where media literacy education comes in. Media literacy education can help people develop the ability to access, analyze, evaluate, and create media content, and to recognize and resist manipulation and persuasion.

Media literacy education can be integrated into formal and informal learning settings, such as schools, libraries, community centers, and online platforms. It can also target different groups and audiences, such as children, youth, adults, seniors, and marginalized communities. Media literacy education can foster a culture of informed and engaged citizenship, and empower people to participate in democratic processes.

Collaborate and coordinate across sectors and stakeholders

Disinformation and political propaganda are complex and multifaceted problems that require collective and coordinated action from various sectors and stakeholders. These include governments, technology companies, civil society organizations, academia, and international institutions.

Governments have a role to play in creating and enforcing laws and regulations that protect the integrity of information and elections, and that prevent the spread of hate speech, incitement, and violence. Governments can also support and fund public service media, independent journalism, and media literacy initiatives, and promote transparency and accountability in their own communication.

Technology companies have a responsibility to design and operate their platforms in ways that minimize the risks of disinformation and political propaganda, and that respect human rights and democratic values. Technology companies can also invest in innovative solutions that detect and flag false or harmful content, reduce the incentives and rewards for those who produce or disseminate such content, and enhance the visibility and diversity of quality information.

Civil society organizations have a vital role in monitoring and exposing disinformation and political propaganda, and in advocating for the rights and interests of the public. Civil society organizations can also provide fact-checking services, media literacy programs, and alternative narratives that challenge and counter false or harmful content.

Academia can contribute to the understanding and awareness of disinformation and political propaganda, and their causes and consequences, through research and education. Academia can also collaborate with other sectors and stakeholders to generate and share evidence-based knowledge and best practices, and to evaluate and improve existing policies and interventions.

International institutions can facilitate and support the cooperation and coordination among different sectors and stakeholders, and across different countries and regions. International institutions can also set and promote global standards and norms, and provide guidance and assistance to address the challenges and opportunities of disinformation and political propaganda in new media.

Fact-Checking Tools

Fact-checking tools can be a great help in verifying suspect information and propaganda. Here are some useful fact-checking tools; a short sketch of querying one of them programmatically follows the list.

  • InVID: This is a browser extension that can help you analyze and verify videos. It can extract thumbnails, metadata, and keyframes from videos, and let you perform reverse image searches on them.
  • Duplichecker: This is a website that can help you check the originality of images. It can perform reverse image searches on multiple search engines, and show you if the image has been used before in a different context or time.
  • FactCheck.org: This is a project of the Annenberg Public Policy Center that monitors the factual accuracy of statements made by U.S. politicians and public figures. It also provides analysis and context to political claims and issues.
  • Google Fact Check Explorer: This is a tool that can help you find fact checks from various sources on different topics. It can show you the claim, the verdict, and the source of the fact check, and let you filter by region, language, and publisher.
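
For readers who want to automate such lookups, the Fact Check Explorer is backed by Google's public Fact Check Tools API. Below is a hedged sketch, not an official client: the endpoint and field names follow that API's documented claims:search method, and YOUR_API_KEY is a placeholder you must replace with your own key.

    # Sketch of querying the Google Fact Check Tools API (the service
    # behind Fact Check Explorer). YOUR_API_KEY is a placeholder.
    import requests

    API_KEY = "YOUR_API_KEY"  # obtain one from the Google Cloud console
    url = "https://factchecktools.googleapis.com/v1alpha1/claims:search"
    params = {"query": "election fraud", "languageCode": "en", "key": API_KEY}

    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    for claim in resp.json().get("claims", []):
        # Each claim may carry reviews from one or more fact-checking outlets.
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown")
            print(f"{claim.get('text')!r} -> {publisher}: {review.get('textualRating')}")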

Ultimately, to address disinformation in elections, civil society, governments, technology companies and peacebuilders can collaborate to monitor and counter false or harmful content, educate citizens in digital literacy, and hold social media platforms and governments accountable. Fact-checking organizations and coalitions of civil society and peacebuilding groups can verify and debunk claims and rumors on social and traditional media. Democracy faces many challenges from polarization, authoritarianism and misinformation, and it needs to demonstrate its value through fair and inclusive elections.

Md Motasim Billa
Author
