Fighting disinformation is getting harder, just when it matters most

In February 2024 the United States State Department revealed that it had exposed a Russian operation to discredit Western health programs in Africa. The operation included spreading rumors that dengue, a mosquito-borne disease, had been created by a U.S. NGO, and that U.S. Army researchers were using Africans as test subjects. The campaign, centered on a Russian-funded news site, aimed to sow distrust and damage the reputation of the United States. Discouraging Africans from seeking health services was collateral damage along the way.

The campaign was uncovered through the work of the Global Engagement Center, a U.S. State Department agency. Once a false story is detected, the agency works with local partners, including academics, journalists and civil-society groups, to publicize its source: a strategy known as “psychological inoculation” or “pre-bunking.” The idea is that if people are told that a particular false narrative is circulating, they are more likely to view it with skepticism when they encounter it, whether in social-media posts, news articles or in person.

Pre-bunking is just one of many countermeasures that have been proposed and deployed against disinformation. But how effective are they? In a study published last year, the International Panel on the Information Environment (IPIE), a nonprofit group, compiled a list of 11 categories of proposed countermeasures, based on a meta-analysis of 588 peer-reviewed studies, and evaluated the evidence for their effectiveness. The measures include blocking or labeling particular users or posts on digital platforms; providing media-literacy training (such as pre-bunking) to help people identify misinformation and disinformation; verification requirements on digital platforms; supporting fact-checking organizations and other publishers of corrective information; and so on.

The IPIE's researchers found that only 4 of the 11 countermeasures were well supported in the research literature: content labeling (adding tags to accounts or pieces of content to signal that they are disputed); corrective information (i.e., fact-checking and debunking); content moderation (downgrading or removing content, and suspending or blocking accounts); and media literacy (teaching people to identify misleading content, for example through pre-bunking). Of these approaches, the strongest evidence supports content labeling and corrective information.

These countermeasures are, of course, already being deployed in various ways around the world. On social platforms, users can report posts as “false information” on Facebook and Instagram, and as “misinformation” on TikTok, so that warning labels can be applied. X has no such category, but it does let users add “Community Notes” to problematic posts to provide corrections or context.

Lies, damned lies and social media

In many countries, academics, civil-society groups, governments and intelligence agencies report offending posts to technology platforms, which also have their own internal efforts. Meta, for example, works with around a hundred independent fact-checking organizations covering more than 60 languages, all members of the International Fact-Checking Network, created by the Poynter Institute, a U.S. nonprofit group. A number of organizations and governments are pursuing media-literacy training; Finland is notable for its national education initiative, introduced in 2014 in response to Russian disinformation. Media literacy can also be taught through games: Tilt Studio, from the Netherlands, has worked with the British government, the European Commission and NATO to create games that teach players to identify misleading content.

To combat misinformation, academics, platforms and governments first need to understand it. But research on disinformation is limited in several key respects: studies tend to focus on campaigns in a single language or on a single topic, for example. Most glaring of all, there is still no consensus on the true effect of exposure to misleading content. Some studies find little evidence linking disinformation to election and referendum results. But others find that the Kremlin's talking points resonate with right-wing politicians in the United States and Europe. Opinion polls also show that enough European citizens tend to agree with Russian disinformation lines to suggest that Russia's campaign to sow doubt may be working.

A major impediment for researchers is lack of access to data. The best data are not in public hands but “in the private networks of Silicon Valley,” says Phil Howard, a professor at Oxford University who studies democracy and technology, and a co-founder of the IPIE. After Elon Musk bought Twitter (now X) in 2022, the company shut down the free system that allowed researchers to download information about posts and accounts, and began charging thousands of dollars per month for such access. Meta announced in March that it would shut down CrowdTangle, a platform-monitoring tool that lets researchers, journalists and civil-society groups track content, though the company says academics can still request access to certain data sets.

These changes have seriously hampered researchers' ability to detect misinformation and understand how it spreads. “Most of our foundational understanding of misinformation comes from massive amounts of Twitter data,” says Rachel Moran of the University of Washington. With that source gone, researchers worry about losing sight of how new campaigns work, which has broader implications. “The academic community is very, very important in this area,” says a U.S. official.

Regulators are stepping in to try to close the gap, at least in Europe. The EU's Digital Services Act (DSA), which came into force in February, requires platforms to provide data to researchers who apply to study “systemic risk” to society (the United Kingdom's equivalent, the Online Safety Act, contains no such provision). Under the new EU rules, researchers can submit proposals to platforms for consideration. But so far few have heard back about the outcome of their applications: of roughly 21 researchers known to have submitted proposals, only four have obtained data. A spokesperson for the European Commission says the platforms have been asked to provide data demonstrating that they are currently complying with the law, and two of them are under investigation over whether they have failed to provide data to researchers in a timely manner (both companies say they comply, or are committed to complying, with the DSA; X withdrew from the EU's voluntary code of practice on disinformation last year).

In the United States, however, efforts to fight disinformation have become entangled in the country's political dysfunction. Fighting disinformation requires a coordinated effort from technology platforms, academics, government agencies, civil-society groups and the media. But in the United States any such coordination is now seen, particularly by those on the right, as evidence of a conspiracy among all those groups to suppress particular voices and viewpoints. When false claims about the election and covid-19, posted by Donald Trump and Marjorie Taylor Greene, were removed from some tech platforms, they and other Republican politicians complained of censorship. A coalition of big companies that declined to advertise on right-wing platforms where misinformation abounds has been threatened with antitrust investigations.

Researchers studying disinformation have faced lawsuits, attacks from political groups and even death threats. Funding has also declined. In the face of these challenges, some researchers say they have stopped alerting platforms to suspicious accounts or posts. An ongoing lawsuit, Murthy v. Missouri, has led U.S. federal agencies, including the Department of Homeland Security, to suspend the sharing of reports about allegedly false content with tech platforms, though the FBI has reportedly resumed sending reports to social-media companies in recent weeks.

All of this has had a chilling effect on the field, just as fears are growing that disinformation could influence elections around the world. “It is hard to avoid the conclusion that one side of politics, primarily in the United States but elsewhere as well, is more threatened by research on disinformation than by the dangers to democracy from disinformation itself,” a group of researchers recently wrote in Current Opinion in Psychology.

There are signs, however, that the tide may be turning. In recent oral arguments in Murthy v. Missouri, several justices of the United States Supreme Court expressed support for efforts by governments, researchers and social-media platforms to work together to fight disinformation. The United States has also announced a collaboration with the intelligence agencies of Canada and Britain to counter foreign influence on social media “by going beyond ‘watch and report’ approaches,” though the details of the new methods were not disclosed. And the EU's DSA rules may yet pave the way for tech companies to share data with researchers in Europe, from which researchers elsewhere could also benefit.

Whereas the United States has recently provided an example of how not to deal with disinformation in the run-up to an election, another country, Taiwan, offers a more inspiring one. “Taiwan is the gold standard,” says Renée DiResta, who studies information flows at the Stanford Internet Observatory. Its model rests on close collaboration between civil-society groups, technology platforms, government and the media. When fact-checking organizations detect misinformation, they notify the technology platforms and, where appropriate, government departments, which publish swift rebuttals or corrections. The government also promotes media literacy, for example by adding it to the school curriculum. But although this approach can be effective in a small country with an obvious adversary (Finland and Sweden are other examples), it may be tricky to make it work elsewhere.

Other countries have adopted different approaches. Brazil was applauded by some observers for its tough handling of disinformation in the run-up to its October 2022 elections, which involved cooperation between civil-society groups and tech platforms, as well as oversight by a Supreme Court judge who ordered the suspension of the social-media accounts of politicians and influencers whose posts, he said, threatened the electoral process. But critics, in Brazil and abroad, felt his approach was too heavy-handed (he is now embroiled in a dispute with Elon Musk, the owner of X). Sweden, for its part, created a government agency responsible for “psychological defense” in 2022.

Global warming

Disinformation is a growing problem that requires coordinated action from many sectors of society. Unfortunately, research tends to be siloed, and there is a lack of agreement on basics such as terminology. This makes it difficult to connect the dots and draw lessons that apply more broadly. The IPIE's Dr Howard compares the situation to the early days of climate science: many people are trying to tackle the same problem from different angles, but it is hard to see the full picture. He notes that it took decades to bring together atmospheric scientists, geologists and oceanographers to reach a consensus on what was happening, and there remains strong political opposition from those with an interest in maintaining the status quo. But the UN's Intergovernmental Panel on Climate Change now provides governments with solid evidence on which to base their policy decisions. The IPIE aims to do the same for the changing global information environment, says Dr Howard. The current lack of a concerted response to disinformation is a challenge, but also an opportunity: coordinated research and action should lead to better detection and mitigation of misleading content, because modern disinformation campaigns work in much the same way everywhere. But, as with climate change, cleaning up the global information environment will be a significant long-term undertaking.

© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. The original content can be found at www.economist.com.
