Twitter has lifted its ban on COVID misinformation: Research shows this is a grave risk to public health
(The Conversation is an independent, nonprofit source of news, analysis and commentary from academic experts.)
(THE CONVERSATION) Twitter’s decision to stop enforcing its COVID-19 misinformation policy, quietly posted on the site’s rules page and listed as effective Nov. 23, 2022, is raising serious concerns among researchers and public health experts about the possible repercussions.
Health misinformation is not new. A notable case is misinformation about an alleged, and since refuted, link between autism and the MMR vaccine, promoted by a debunked study published in 1998. Such misinformation has serious consequences for public health. For example, misinformation about diphtheria, tetanus and pertussis (DTP) vaccines contributed to a higher incidence of pertussis in the late 20th century.
As a researcher who studies social media, I believe that reducing content moderation is a significant step in the wrong direction, especially in light of the uphill battle social media platforms face in combating misinformation and disinformation, particularly medical misinformation.
There are three key differences between misinformation and disinformation on social media and their earlier forms.
First, misinformation on social media spreads at far greater scale, speed and reach.
Second, sensational and emotionally charged content is more likely to go viral on social media, making falsehoods easier to spread than the truth.
Third, digital platforms like Twitter play a gatekeeping role in how they aggregate, curate and amplify content. This means that misinformation about emotionally triggering topics, such as vaccines, can easily capture attention.
The World Health Organization has called the spread of misinformation about the pandemic an infodemic. There is abundant evidence that COVID-19 misinformation on social media reduces vaccine acceptance. Health experts have warned that misinformation on social media seriously hampers progress toward herd immunity, weakening society’s ability to cope with new COVID-19 variants.
Misinformation on social media fuels public doubts about vaccine safety. Research shows that hesitancy about the COVID-19 vaccine stems from misunderstanding of herd immunity and belief in conspiracy theories.
Social media platforms’ content moderation policies and stances against disinformation are key tools for combating misinformation. In the absence of robust content moderation policies on Twitter, algorithmic content curation and recommendation is likely to accelerate the spread of misinformation by amplifying echo chamber effects, for example by exacerbating partisan differences in exposure to content. Algorithmic bias in recommendation systems can also further deepen global disparities in health care and racial disparities in vaccine uptake.
There is evidence that less regulated platforms, such as Gab, can amplify the influence of untrustworthy sources and boost COVID-19 misinformation. There is also evidence that the misinformation ecosystem can lead people who use social media platforms that do invest in content moderation to accept misinformation originating on less moderated platforms.
So the danger is not only that there will be more anti-vaccine rhetoric on Twitter, but that such toxic communication may spill over to other online platforms that do invest in combating medical misinformation.
The Kaiser Family Foundation’s COVID-19 vaccine monitor shows that public trust in COVID-19 information from authoritative sources such as government agencies has declined significantly, with serious public health consequences. For example, the percentage of Republicans who said they trust the Food and Drug Administration fell from 62% to 43% between December 2020 and October 2022.
In 2021, an advisory from the U.S. Surgeon General noted that content moderation policies on social media platforms should:
– pay attention to the design of recommendation algorithms.
– prioritize early detection of misinformation.
– amplify information from credible sources of health information.
These priorities call for partnerships between health organizations and social media platforms to develop best practices for combating health misinformation. Developing and enforcing effective content moderation policies requires planning and resources.
In light of what researchers know about COVID-19 misinformation on Twitter, the announcement that the company will no longer ban misinformation related to COVID-19 is troubling, to say the least.
This article is republished from The Conversation under a Creative Commons license. Read the original article here: https://theconversation.com/twitter-lifted-its-ban-on-covid-misinformation-research-shows-this-is-a-grave-risk-to-public-health-195695.