
The battle for trust on the Internet

Ahead of the European Parliament elections in June, the EU used its unique market size and regulatory tools to push social media platforms and search engines to increase transparency and mitigate electoral risks. The Digital Services Act (DSA), which came into full force in February 2024, requires major platforms and search engines to, among other things, provide detailed transparency reports, conduct risk assessments, and grant researchers access to platform data. In April 2024, the European Commission issued electoral guidelines outlining the measures these companies should take under the DSA, such as labeling political advertising and AI-generated content and ensuring that internal electoral teams are adequately resourced. Citing the DSA, the Commission initiated formal proceedings against Meta and other major platforms over suspected noncompliance.

The EU’s nonbinding Code of Practice on Disinformation served as a separate mechanism to strengthen information integrity. The code commits signatories, which include major platforms and advertising companies, to preemptively counter false narratives, clearly label “digitally altered” content, establish transparency centers, and debunk false and misleading information. These steps can help provide voters with the reliable information they need to make informed decisions and participate fully in the vote. However, because the code is voluntary, its effectiveness is unclear and difficult to track.

With strong oversight and protections for free expression, information sharing between democratic governments and technology companies can improve users’ access to reliable information. For example, government agencies may have information about foreign actors that could provide context to companies combating cyberattacks or coordinated inauthentic behavior. In the United States, federal authorities’ collaboration with platforms was scaled back at a critical time before the November 2024 elections, as agencies faced legal challenges from state officials in Louisiana and Missouri. The two states, along with private plaintiffs, had sued the federal government in 2022, claiming that its interactions with tech companies during the 2020 election cycle and the COVID-19 pandemic amounted to “censorship.” The Supreme Court dismissed the case in June 2024, finding that the plaintiffs had failed to show harm and noting that a lower court’s ruling in their favor rested on “clearly erroneous” factual findings. The Court did not issue more detailed guidance on how agencies can communicate with platforms in a manner consistent with constitutional protections for free speech. In response to the litigation, the Federal Bureau of Investigation announced plans to increase transparency and establish clearer guidelines for its cooperation with platforms.

Support for fact checking and digital literacy

During the reporting period, several positive initiatives sought to make it easier for voters to access reliable information, for example through fact-checking programs, centralized resource centers, and digital literacy training.

Civil society in Taiwan has established a transparent, decentralized, and collaborative approach to fact-checking and disinformation research that is considered a global model. In the lead-up to and during the country’s January 2024 elections, these fact-checking programs helped build trust in online information across the political spectrum and among diverse constituencies. The Cofacts platform allowed people to submit claims they encountered on social media or messaging platforms to be verified by Cofacts contributors, who include both professional fact-checkers and nonprofessional community members. During the election period, Cofacts found that false narratives about Taiwan’s foreign relations, particularly with the United States, were prevalent on the messaging platform Line. Other local civil society organizations, such as IORG and Fake News Cleaner, also cultivated resistance to disinformation campaigns by conducting direct outreach and programming in their communities.

Ahead of India’s elections, more than 50 fact-checking groups and news publishers formed the Shakti Collective, the largest coalition of its kind in the country’s history. The consortium worked to identify false information and deepfakes, translate fact-checks into India’s many languages, and build broader capacity for fact-checking and detecting AI-generated content. The diversity of the Shakti Collective’s members allowed it to reach different communities of voters and identify emerging trends, such as an increase in false claims in regional languages that electronic voting machines were rigged.

Indonesian fact-checkers worked to debunk false posts about the February 2024 elections. (Image credit: Bay Ismoyo/AFP via Getty Images)

In some countries, governments supported the implementation of such programs. The independently run European Digital Media Observatory (EDMO), established by the EU in 2020, conducted research and collaborated with fact-checking and media literacy organizations during the European Parliament elections. EDMO uncovered a Russia-linked influence network operating fake websites in multiple EU languages and found that generative AI was used in only about 4 percent of the false and misleading narratives identified in June. Mexico’s National Electoral Institute (INE) launched Certeza INE 2024, a multidisciplinary project to combat electoral disinformation, ahead of the June elections. The program allowed voters to ask questions about voting and report articles, images, and audio clips to “Ines,” a virtual assistant on WhatsApp. Voter-reported content was then fact-checked by a partnership that included Meedan, Agence France-Presse, Animal Político, and Telemundo.

Fact-checkers are often among the first to identify trends in false narratives, the actors responsible for them, and the technology they use. Their insights can inform effective policy, programmatic and technological interventions that promote internet freedom. Although scientific research has shown that fact-checking is effective in certain contexts, it may not always result in broader behavioral changes among users. Furthermore, there remains a fundamental structural imbalance between fact-checkers and purveyors of disinformation campaigns: Proving that a claim is false takes far more time and effort than creating and disseminating it. These initiatives may face particular difficulties in highly polarized environments, as voters who already lack trust in independent media groups are unlikely to believe their fact-checking work.

Regulations on generative AI in political campaigns

Prompted by concerns that generative AI would blur the line between fact and fiction in upcoming elections, regulators in at least 11 of the 41 FOTN countries that held or prepared for national elections during the reporting period issued new rules or official guidance restricting how the technology could be used in the context of elections. Banning problematic applications of generative AI, such as impersonation, can push political campaigns and candidates to behave more responsibly. Regulations requiring labeling provide voters with the transparency they need to distinguish between real and fabricated content.
