In the weeks leading up to Twitter’s launch of a new fact-checking program to combat disinformation, company experts warned executives that the project could easily be exploited by conspiracy theorists.
Those warnings, which have not previously been reported, nearly came true. On the eve of the 2021 launch of the invitation-only project, called Birdwatch, engineers and managers learned that they had inadvertently admitted a supporter of the violent QAnon conspiracy theory into the program, which would have allowed him to publicly annotate news-related tweets to help other users judge their veracity.
Details of Twitter’s near miss with Birdwatch were revealed as part of an explosive complaint filed in July by the platform’s former security chief, Peiter Zatko. Zatko had commissioned an external audit of Twitter’s ability to combat disinformation, and it was included in his complaint. The Post obtained the audit and the complaint from congressional staff.
While Zatko’s allegations about Twitter’s security flaws, first reported last month by The Post and CNN, have drawn widespread attention, the audit on disinformation has gone virtually unreported. Yet it underscores a basic conundrum for the 16-year-old social media service: Despite its role in hosting pronouncements from some of the world’s most prominent politicians, business leaders and journalists, Twitter has not been able to put in place safeguards commensurate with the platform’s enormous social impact. It has never generated the profits needed to do so, and its leaders have never shown the will.
Twitter’s top executives once called the platform “the free speech wing of the free speech party.” While that philosophy has softened over the years, as the company confronted threats from Russian operatives and incessant tweets from former President Donald Trump, Twitter’s first ban on any kind of disinformation came only in 2020, when it prohibited deepfakes and falsehoods about covid-19.
Former employees said users’ privacy, security and protection from harmful content were long treated as an afterthought by the company’s leadership. Then-CEO Jack Dorsey even questioned the decision of his top aides to permanently suspend Trump’s account after the Jan. 6, 2021, attack on the U.S. Capitol, calling it a mistake to silence the president.
The audit report by the Alethea Group, a company that fights disinformation threats, confirms this sentiment, describing a company besieged by well-orchestrated disinformation campaigns, its teams lacking the staff and engineering firepower to confront threats comparable to those facing the far better funded Google and Facebook.
The report outlines serious staffing problems, including a large number of vacancies on its Site Integrity team, one of three groups within the company tasked with combating disinformation. It also highlighted language gaps so severe that many content moderators turned to Google Translate to fill them. In one of the report’s most striking details, a staffing table indicates that in 2021 Site Integrity had only two full-time people working on misinformation and four working full time to counter foreign influence operations run by operatives based in places such as Iran, Russia and China.
The report validates the frustrations of outside disinformation experts who have worked to help Twitter identify and curb campaigns that have poisoned political conversations in India, Brazil, the United States and elsewhere, fueling violence.
“It has this outsized role in public discourse, but it’s still staffed like a midsize platform,” said Graham Brookie, who tracks influence operations as head of the Atlantic Council’s Digital Forensic Research Lab. “They struggle to do more than one thing at a time.”
The result of Twitter’s chaotic organizational structure, according to the Alethea report, was that disinformation specialists had to “beg” other groups for engineering help because they largely lacked their own, and had little assurance that their safety recommendations would be implemented in new products such as Birdwatch.
The report also exposed failed technology systems that forced specialists to use five different software programs to label a single tweet as misinformation.
“Twitter does not have enough staff to do anything other than respond to the immediate crisis,” the 24-page report concludes, noting that Twitter was consistently “slow” to respond to disinformation threats.
“Organizational silos, lack of investment in critical resources, and reactive policies and processes have led Twitter to operate in a constant state of crisis that detracts from the company’s broader mission of protecting authentic conversation,” the report found.
Alethea declined to comment on the report.
Twitter disputes many of the details of the 2021 report, arguing that it described a period when the company had fewer staff and that, by focusing on a single team, it paints a misleading picture of the company’s broader efforts to combat misinformation.
A senior company official, who spoke on the condition of anonymity because of ongoing litigation with billionaire Elon Musk, told The Post that the report, which was based on interviews with just 12 Twitter employees, tended to exaggerate those workers’ concerns, including worries about Birdwatch’s launch. He said the report’s staffing figures referred only to high-level policy experts, the people who set the rules, while the company now has more than 2,200 other people, including dozens of full-time specialists and thousands of contractors, to enforce them.
“To moderate content well at scale, companies, including Twitter, can’t just invest in head count,” Yoel Roth, Twitter’s head of safety and integrity, said in an interview. “It takes a combination of people and technology to address these complex challenges and to mitigate and prevent harm, and that’s where we’ve invested.”
Still, at the time when Twitter had just six full-time policy experts fighting foreign influence operations and disinformation, according to the report, Facebook had hundreds, according to people familiar with the internal operations of Meta, Facebook’s parent company.
Twitter is much smaller, in terms of revenue, users and head count, than the other social networks it is compared with, and its capacity to combat threats is proportionally smaller as well. Meta, which owns Facebook, Instagram and WhatsApp, for example, has 2.8 billion users logging in daily, more than 12 times the size of Twitter’s user base. Meta has 83,000 employees; Twitter has 7,000. Meta brought in $28 billion in revenue last quarter; Twitter brought in $1.2 billion.
But some of the problems Twitter faces are worse than those of Facebook and YouTube because of how public the platform is and because Twitter users can face mass attacks from crowds of strangers, said Leigh Honeywell, chief executive of Tall Poppy, a company that works with businesses to mitigate online abuse of their employees. She added that Twitter users cannot remove negative comments about themselves, while YouTube video creators and Facebook and Instagram page administrators can delete such remarks.
“We see the highest volume of harassment on Twitter,” Honeywell said.
“It’s not a good defense to say that we’re really small and we don’t make that much money,” said Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights. “The question is what your effect on society is, and you had this obligation, since you’ve become so influential, to protect against the side effects of being so influential.”
To be sure, richer companies, including Facebook and YouTube, face similar problems and have made halting progress in combating them. And Twitter’s size, according to experts, has also given it a certain agility that lets it punch above its weight. Twitter was the first company to slap labels on politicians for breaking its rules, including placing a warning label on a May 2020 tweet from Trump about the George Floyd protests.
Twitter was also the first company to ban so-called deepfakes, the first to ban all political ads and, at the start of the war in Ukraine, the first to put warning labels on content misrepresenting the conflict as it unfolded on the ground.
The company was also the first to launch features that slowed the spread of information on its service to keep misinformation from going viral, such as a prompt asking people whether they had read an article before retweeting it. And it published a first-of-its-kind archive of state-backed disinformation campaigns on its platform, a move that researchers praised for its transparency.
Frances Haugen, a Facebook whistleblower who sounded the alarm about the shortcomings of Meta’s investments in content moderation and has been sharply critical of tech companies, said other companies have copied some of Twitter’s efforts.
“Because Twitter had a lot less staff and made a lot less money, they were willing [to be more experimental],” Haugen said in an interview.
But state-backed adversaries, such as Russia’s Internet Research Agency, could quickly adapt to such changes, while Twitter lacked the tools to keep up.
“There’s an incredibly vulnerable landscape that’s infinitely manipulable because it’s easy to evolve and iterate as events happen,” Brookie said.
Twitter employees said much the same, according to Alethea’s report, complaining that the company was too slow to respond to crises and other threats and did not have the organizational design in place to respond to them.
For example, the report says Twitter delayed its response to the QAnon and Pizzagate conspiracy theories, which falsely alleged that a Democrat-run pedophile ring operated out of a Washington pizzeria, because “the company couldn’t figure out how to categorize it.”
Leaders felt that QAnon did not fall within the purview of the disinformation team because the movement was not initiated by a foreign actor, and decided that the conspiracy theory was not a child exploitation issue because its claims of child trafficking were false. Nor did they consider it a spam issue, despite the theory’s aggressive, spam-like promotion by its proponents, according to the report. Many companies, including Facebook, have faced similar challenges in dealing with QAnon, The Post has previously reported.
It wasn’t until events forced the company’s hand, such as when the celebrity Chrissy Teigen threatened to leave Twitter over harassment from QAnon adherents, that executives took QAnon more seriously, according to the report.
“Twitter is managed by the crisis. It’s not managing the crisis,” one former executive told The Post. The executive was not interviewed by Alethea for its report and spoke on the condition of anonymity to describe sensitive internal matters.
Twitter’s lack of language capabilities figures prominently in Alethea’s report. The report states that the company was unprepared for elections in Japan in 2020 because “there were no Japanese speakers on the Site Integrity team, only one [Trust and Safety] staff member located in Tokyo, and very limited Japanese-language capability among the [Twitter Services] strategic response staff.”
In Thailand, according to the report, Twitter moderators could only “search for trending hashtags . . . because they do not have the linguistic or country expertise on staff” to conduct real investigations.
The Twitter executive who spoke on the company’s behalf said the report painted a misleading picture of its response to foreign threats. He said Twitter had a large office in Japan, which is a huge market for the company, and that it had employees who consulted on disinformation issues during the elections there. He pointed to the company’s track record against influence operations in Thailand, including the 2020 suspension of thousands of obscure accounts that appeared to be linked to a campaign attacking opponents of the Thai monarchy.
Some former insiders told The Post that aspects of their Twitter experience echoed the report. Edwin Chen, a data scientist who formerly led Twitter’s health and spam metrics and is now chief executive of the content moderation startup Surge AI, said the company’s artificial intelligence technology for fighting hate speech was sometimes six months out of date. He said it was difficult to get resources for projects related to fostering healthier conversations on the platform.
“You have to convince another team to do this work for you, because of a stark lack of leadership,” he said.
He also noted persistent tensions between those who work on safety and security and those responsible for other facets of the business. “There is an inevitable trade-off between growth and safety, and something will lose out,” he said.
Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University, said in an interview that the public and political nature of the Twitter platform makes it an ideal place for bad actors to sow disinformation campaigns.
“Although Twitter has a small number of users compared to YouTube, Facebook and TikTok, because it is a public platform, those looking to spread false information and undermine democracy know that Twitter is one of the best places to increase the likelihood that their messages spread widely,” she said. “The people they hire are smart and serious, and they really want to make a difference, but Twitter is simply an under-resourced company compared to the outsized effect it has on the broader information ecosystem.”