Twitter Whistleblower Revelations Excerpts

From Mudge's written statement to Senate Judiciary Committee:

When a responsible practitioner finds a vulnerability that bad actors can exploit, the person first makes a quiet disclosure directly to the institution, giving the affected company or government the information and the opportunity needed to fix the vulnerability. If the vulnerable institution does not want to hear the truth or fix the problem, the person reporting the problem must determine if public disclosure of the unaddressed security vulnerability is necessary to protect the public. If the benefit of public disclosure outweighs the risk to the recalcitrant institution, then the responsible practitioner makes the public disclosure necessary to alert the public to the risk and to encourage the institution to address the vulnerability.

I continue to follow this ethical disclosure philosophy and am here today because I believe that Twitter’s unsafe handling of the data of its users and its inability or unwillingness to truthfully represent issues to its board of directors and regulators have created real risk to tens of millions of Americans, the American democratic process, and America’s national security. Further, I believe that Twitter’s willingness to purposely mislead regulatory agencies violates Twitter’s legal obligations and cannot be ethically condoned.

Mudge: If you are a foreign agent you have access to all data because Twitter doesn't have a testing environment and all engineers work on live systems.

Mudge says there was at least one agent of China's security agency inside the engineering team at Twitter.

"It is very valuable to a foreign agent to be inside there."

Mudge says that a colleague at one point told him: "We don't log the activity of the systems."

"There were thousands of failed attempts to access internal systems and nobody was noticing," Mudge says.

“What if, next time, it isn't two teenagers trying to pull off a crypto scam? Imagine if it's a malicious hacker or a hostile foreign government breaking into the president's Twitter account" or falsely alleging a terror attack?

All disclosures:

Issues and Objections Regarding Twitter InfoSec Information and the Q4 2021 Twitter Risk Committee

In December 2021 the Risk Committee received information about Twitter’s information security posture that is inaccurate and misleading. It appears that Twitter’s information security environment has not been accurately characterized to the Board of Directors and Risk Committees dating back to before my tenure.

There was also a disconnect between Twitter’s stated Privacy posture and the reality of Twitter’s privacy issues. However, by removing Privacy Engineering from Information Security and through the work of myself, XXXX, and the new team, this disconnect has been significantly reduced.

When I brought to Mr. Agrawal’s attention the fact that YYY's report was misleading, inaccurate, and intentionally wrong, he overruled me and overruled my recommendation that YYY's report be rewritten to make it accurate.

Twitter is grossly negligent in several areas of information security.

Regulators, when evaluating Twitter, will identify these as systemic issues.

Out-of-date software and the lack of basic security configuration in existing software

Gross problems around access control to systems and data

Lack of basic processes and compliance, such as software development lifecycles; line managers being allowed to unilaterally overrule security and privacy findings; and a prioritization of running products with known violations over compliance with regulatory requirements

A volume and frequency of security incidents impacting a large number of users’ data that is frankly stunning

Twitter is very far behind the industry in key areas of Access Control, Software and Security Patches/Configuration/Versions, and Processes and Compliance. This is evidenced in the volume and frequency of Incidents. In more than one of these areas Twitter is a decade behind peers such as Google and Facebook.

Some newsworthy highlights are that more than half of Twitter’s 500,000 servers are running out-of-date Operating Systems: many are so out of date that they do not support basic privacy and security features and lack vendor support.

More than a quarter of the ~10,000 employee computers have software updates disabled!

More than half of Twitter employees have access to Twitter's production environment

At Twitter, engineers build and test software against live data because the company lacks testing and staging environments; work is instead conducted in production.

With this understanding, it is somewhat less surprising that security incidents are commonplace at Twitter: in 2021 they occurred at a rate of more than one per week, on average, and were determined to involve millions of people’s accounts and data.

To get to Twitter’s current state of insecurity required repeated downplaying of problems, selective reporting, and leadership ignorance around basic security expectations and practices.

Twitter lacks the visibility into its networks and systems that it needs to state confidently whether identified security problems have been remediated to the extent necessary.

At the beginning of 2021, 46% of all FTEs had privileged access to production systems and data. By Q4 2021 this number was 51% of employees. Twitter has grown meaningfully in its number of employees. The percentage of employees with privileged access has increased on top of this.
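The compounding effect described here is easy to see with simple arithmetic. The excerpt gives only percentages, so the headcounts below are assumed figures purely for illustration:

```python
# Illustrative arithmetic only: the report gives percentages, so the FTE
# headcounts below are assumed figures for the sake of the example.
start_fte, end_fte = 7_000, 9_000        # hypothetical headcount growth over 2021
start_priv = round(start_fte * 0.46)     # 46% with privileged access in early 2021
end_priv = round(end_fte * 0.51)         # 51% with privileged access by Q4 2021
added_priv = end_priv - start_priv       # absolute growth in privileged accounts
```

A rising percentage applied to a growing base means the absolute number of privileged accounts grows faster than either figure alone suggests.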

There are several known insider threats (KNITs) at Twitter. Because of the ubiquitous access to production systems and/or data and the lack of isolation environments and logging, this risk is significant. Combine this with ~30 offboardings per week, each of which represents a period of enhanced concern for insider threat, and the lack of access control and ubiquitous access grants become critical problems.

Almost 40% of these ~10,000 employee computers (aka endpoint systems aka the client fleet) are not in compliance with basic security settings.

30% of the total endpoint systems report that they do not have automatic updates enabled.

Of the approximately 500,000 servers in Twitter data centers, ~60% of them are running outdated Operating Systems and, therefore, are non-compliant even with Twitter’s own Engineering standards. In addition to security concerns around outdated software components, many of these outdated OSes are not supported by the vendor. They are also not capable of supporting encryption at rest, a critical compliance and Privacy obligation.

On the same engineering dashboards as above it is revealed that ~30% of the software packages running on the ~500,000 data center systems are non-compliant (out of date or need patching).

Both of these situations have been in the same state for the past 12 months.

Making things more challenging, Twitter lacks the ability to provide a count of total software projects (denominator).

Twitter has an unacceptable, and near continuous, number of security and privacy incidents. I estimate there were more than 50 Incidents in 2021; approximately an incident per week.

The Incidents were predominantly related to areas where Twitter has systemic, long-lived problems: ‘Access Control’ and ‘Security Configuration and Bugs’. Together these problems account for more than 80% of the Incidents.

I identified numerous issues in the materials created by EM and put forward to the Q4 2021 Risk Committee. I suggested to Mr. Agrawal that I create a corrected replacement deck, including data points. Mr. Agrawal, as CEO, directed that I not create a corrective document and that I send the objectionable deck forward to the Committee.

Any Twitter engineer in any country is presently provided direct access to production systems. The accesses to these production systems are not audited.

In any country where Twitter has an engineer, there is access to production systems and data.

This brings up an important question: why are engineers performing software builds locally on their laptops? Presently every engineer has a full copy of Twitter’s proprietary source code on their laptop. Ideally software builds would be performed on servers in the data centers, or in the cloud, and in an isolated testing environment. The fact that engineers are performing software builds on their laptops (endpoints), and that these systems are in such poor security configuration, is indeed very disturbing.

Re: Protected Disclosures of Federal Trade Commission Act Violations, Material Misrepresentations and Omissions, and Fraud by Twitter, Inc. (NASDAQ: TWTR) and CEO Parag Agrawal

In or around February 2021, after Mudge had prepared comprehensive written materials to educate the Board on his findings about the company’s extensive security, privacy and integrity problems, Mudge was instructed not to send them to the Board of Directors.

On multiple occasions during 2021, described in greater detail below, Mudge witnessed senior executives engaging in deceitful and/or misleading communications affecting Board members, users and shareholders. In contrast, Mudge spent 2021 designing and implementing a long-term strategy to reform and address Twitter’s privacy, security and integrity vulnerabilities. On December 14, 2021, against Mudge’s recommendation, CEO Agrawal explicitly instructed Mudge to provide documents which both of them knew to be false and misleading, regarding vital information security matters, to the Risk Committee of Twitter’s Board of Directors.

In January 2022, Mudge began working to document evidence of fraud. Twitter’s Chief Compliance Officer opened a fraud investigation based on Mudge’s allegations. On January 18, CEO Agrawal lied about Mudge’s efforts to rectify the previous month’s fraud.

Agrawal terminated Mudge the next day, January 19.

Astonishingly, hours after Twitter terminated Mudge’s employment, including immediately denying him access to corporate systems, Twitter’s Chief Compliance Officer began emailing Mudge at his personal Gmail account, seeking to obtain his latest disclosures of fraud. The Compliance Officer’s reference to “your conversation this morning” was the video call in which Mudge had been terminated, and the “matters already under investigation” were Agrawal’s instructions to knowingly present inaccurate materials to the Board.

Until 2019, Twitter reported total monthly users, but stopped because the number was subject to negative swings for a variety of reasons, including situations such as the removal of large numbers of inappropriate accounts and botnets.

In fact, Mudge learned deliberate ignorance was the norm amongst the executive leadership team. In early 2021, as a new executive, Mudge asked the Head of Site Integrity (responsible for addressing platform manipulation including spam and botnets), what the underlying spam bot numbers were. Their response was “we don’t really know.” The company could not even provide an accurate upper bound on the total number of spam bots on the platform. The site integrity team gave three reasons for this failure: (1) they did not know how to measure; (2) they were buried under constant firefighting and could not keep up with reacting to bots and other platform abuse; and, most troubling, (3) senior management had no appetite to properly measure the prevalence of bot accounts— because as Mudge later learned from a different sensitive source, they were concerned that if accurate measurements ever became public, it would harm the image and valuation of the company.

Repeated Efforts to Disable ROPO: “ROPO,” which stands for “Read-Only Phone Only,” is probably Twitter’s most volumetrically-effective mechanism for identifying and blocking spam bots. If a script identifies an account as possibly spam and triggers ROPO, the account is placed into a “Read Only” mode and is unable to post content to the platform. Twitter sends a text message to the associated phone number, with a one-time code that the recipient needs to manually enter to regain account access. Shortly into Mudge’s time at Twitter, a senior executive (with primary responsibility for growing mDAU) proposed disabling ROPO worldwide, based on an anecdote of a small number of unsolicited DMs (text messages) he had personally received in which users claimed they were incorrectly denied access by ROPO. The Lead of Site Integrity told Mudge that executives responsible for growing mDAU had proposed disabling ROPO several times before. The Site Integrity Lead pleaded with Mudge, as a senior executive, to prevent the other executives from disabling ROPO. Research later performed at Mudge’s direction showed ROPO was effectively blocking more than 10-12 million bots each month with a surprisingly low rate (<1%) of false positives.
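The ROPO flow described above can be sketched roughly as follows. All class and function names here are hypothetical stand-ins for illustration, not Twitter internals:

```python
import secrets

class Account:
    """Toy account model for the sketch."""
    def __init__(self, phone):
        self.phone = phone
        self.read_only = False
        self._challenge = None

def trigger_ropo(account, send_sms):
    """A script has flagged the account as possible spam: place it in
    read-only mode and text a one-time code to the associated number."""
    account.read_only = True
    account._challenge = f"{secrets.randbelow(10**6):06d}"  # 6-digit OTP
    send_sms(account.phone, f"Enter this code to restore access: {account._challenge}")

def redeem_code(account, code):
    """Restore full access only on a correct code; otherwise stay read-only."""
    if account._challenge is not None and secrets.compare_digest(code, account._challenge):
        account.read_only = False
        account._challenge = None
        return True
    return False
```

The key property is that a flagged account can do nothing but read until the challenge is answered, which is why the mechanism is cheap to run yet effective at scale against bots that lack working phone numbers.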

Therefore Musk’s suspicions are on target: senior executives earn bonuses not for cutting spam, but for growing mDAU. In fact, Twitter created the mDAU metric precisely to avoid having to honestly answer the very questions Mr. Musk raised.

More broadly, Agrawal’s tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots. Mudge discovered the reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams. The scripts were largely un-owned by any person or team, and their results were not tracked. Furthermore, no effort was made to weigh the scripts’ costs against their benefits, to compare approaches, or to validate their accuracy.

Tools available to Site Integrity to work on these issues are often outdated, “hacked together,” or difficult to use, limiting Twitter's ability to effectively enforce policies at scale. A lack of automation and sophisticated tooling means that Twitter relies on human capabilities, which are not adequately staffed or resourced, to address the misinformation and disinformation problem.

There are components of Twitter that are part of the disinformation and misinformation detection or response that are outside of Site Integrity / Security, and Site Integrity / Security have no access or authority to use these tools absent the good will of other teams.

Twitter does not have aligned incentives across the organization and, as a result, lacks aligned priorities with regard to Product Safety.

SI relies on functions that have no accountability to SI in order to piece together solutions.

SI does not have dedicated engineering support for their tools, so even minor upgrades or changes to existing tools can take months or years to complete.

SI lacks sufficient dedicated data science support and staff with technical skills.

Policies to address misinformation/disinformation often do not address repeat offenders and are applied on a case-by-case basis, leading to a lack of scalability.

The process for labelling disinformation and misinformation content is largely manual, requires the use of multiple tools, and usually needs to be done on a case-by-case basis.

Hacked by Teenagers: In July 2020—following nine years of supposed fixes, investments, compliance policies, and reports to the FTC by Twitter—the company was hacked by a 17-year-old, then-recent high school graduate from Florida and his friends. The hackers managed to take over the accounts of former President Barack Obama, then-Presidential candidate Joseph Biden, and high-profile business leaders including, but not limited to, Jeff Bezos, Bill Gates, and Elon Musk. As part of the account takeovers, the hackers urged their tens of millions of followers to send Bitcoin cryptocurrency to an account they created.

The 2020 hack was then the largest hack of a social media platform in history, and triggered a global security incident. Moreover, the hack did not involve malware, zero-day exploits, supercomputers brute-forcing their way past encryption, or any other sophisticated approach. In fact, it was pretty simple: pretending to be Twitter IT support, the teenage hackers simply called some Twitter employees and asked them for their passwords. A few employees were duped and complied and—given systemic flaws in Twitter’s access controls—those credentials were enough to achieve “God Mode,” where the teenagers could imposter-tweet from any account they wanted. Twitter’s solution was to impose a system-wide shutdown of system access to all of its employees, lasting days.

Security experts agreed this extreme response demonstrated that Twitter did not have proper systems in place to understand what had happened, let alone remediate and reconstitute to a safe state.

Specifically, the draft FTC complaint charged that from 2013 to 2019, Twitter misused users’ phone number and/or email address data for targeted advertising when users had provided this information for safety and security purposes only. This implied Twitter still lacked a basic understanding of what data it held, where that data lived, and how to responsibly protect and handle it. On May 25, 2022, the FTC announced a $150 million fine against Twitter.

Mudge remembers early in his tenure hearing Mr. Agrawal stating to the executive team that “Twitter has 10 years of unpaid security bills.”

Mudge’s findings were dire. Nearly a decade after the FTC Consent Order, with total users growing to almost 400 million and daily users totaling 206 million, Twitter had made little meaningful progress on basic security, integrity, and privacy systems. Years of regulatory filings in multiple countries were misleading, at best.

Mudge found serious deficiencies in:

a. Privacy, including:
i. Ignorance and misuse of vast internal data sets, with only about 20% of Twitter’s huge data sets registered and managed;
ii. Mishandling of Personally Identifiable Information (PII), including repeated marketing campaigns improperly based on user email addresses and phone numbers designated for security purposes only;
iii. Misuse of security cookies for functionality and marketing;
iv. Misrepresentations to the FTC on these matters.

b. Information Security (InfoSec), including: (...)
iv. Insider Threats were virtually unmonitored, and when found the company did not take corrective actions.

c. Fundamental architecture, including: (...)
iii. Insufficient data center redundancy, without a plan to cold-boot or recover from even minor overlapping data center failures, raising the risk of a brief outage to that of a catastrophic and existential risk for Twitter’s survival.

January 6 Capitol Attack: When a violent mob attacked and invaded the U.S. Capitol Building in an attempt to prevent Congress from certifying the election, Mudge quickly went to the executive in charge of engineering and asked “how do we seal the production environment?” Not knowing if there would be acts of internal protest aligned with the rioters, Mudge did not want any employees accessing, or potentially damaging the production environment. It was at this point when he learned that it was impossible to protect the production environment. All engineers had access. There was no logging of who went into the environment or what they did. When Mudge asked what could be done to protect the integrity and stability of the service from a rogue or disgruntled engineer during this heightened period of risk he learned it was basically nothing. There were no logs, nobody knew where data lived or whether it was critical, and all engineers had some form of critical access to the production environment. (Later on January 6 after the Capitol attack, the incoming administration offered Mudge a day-one appointed position as Chief Information Security Officer for the United States; Mudge turned the position down on the grounds that he thought he could have more positive impact fixing Twitter.)

X data centers gracefully go down and come back up

After the executive team meeting, Mudge was instructed not to send a detailed written report to the Board of Directors, but instead convey his findings orally, at a high level only.

Disengaged CEO: CEO Jack Dorsey had recruited Mudge personally. They got along well, and Mudge has never suspected Dorsey of harboring bad intent. But Dorsey, the high-profile CEO of one of the most prominent companies on earth, was experiencing a drastic loss of focus in 2021. Dorsey attended meetings sporadically, and when he did, he was extremely disengaged. In some meetings—even after he was briefed on complex corporate issues—Dorsey did not speak a word. Mudge heard from his colleagues that Dorsey would remain silent for days or weeks. Worried about Dorsey’s health, the senior team mostly tried to cover up for him, but even mid- and lower-level staff could tell that the ship was rudderless.

Perverse bonus structure: In or around July 2021, Twitter announced the “Value Creation Award,” a new bonus structure in which top executives could individually earn over $10 million for generating short-term growth of mDAU (“monetizable daily active users,” see description above Section II). No bonus was provided for improving platform privacy, security or integrity. Mudge came to believe that short-sighted incentives like this were an important cause of Twitter’s egregious ongoing deficiencies.

Failed “stemming” for hateful ad targeting: Twitter maintains a list of hateful terms and slurs that cannot be used for ad targeting. But Mudge learned that the list was not “stemming” properly, meaning that even minor variations on slurs could be used for targeting for an unknown period (Twitter SIM 154).
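The failure mode is easy to illustrate: an exact-match blocklist misses trivial inflections of a banned term, while comparing stems catches them. The placeholder term and the toy suffix-stripping stemmer below are illustrative assumptions, not Twitter's actual list or algorithm:

```python
BLOCKLIST = {"slurword"}  # placeholder for a real list of banned targeting terms

def toy_stem(word):
    """Crude stemmer: lowercase, then strip one common inflectional suffix."""
    w = word.lower()
    for suffix in ("ings", "ing", "ers", "er", "es", "s"):
        if w.endswith(suffix) and len(w) - len(suffix) >= 4:
            return w[: -len(suffix)]
    return w

def blocked_exact(term):
    """What a non-stemming check does: misses 'slurwords', 'slurwording', etc."""
    return term.lower() in BLOCKLIST

def blocked_stemmed(term):
    """Stem both sides so minor variants map onto the banned root."""
    stems = {toy_stem(t) for t in BLOCKLIST}
    return toy_stem(term) in stems
```

A production system would use a real stemmer (e.g. a Porter-style algorithm) plus normalization for leetspeak and spacing tricks, but the gap described in the excerpt is exactly the difference between `blocked_exact` and `blocked_stemmed`.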

Failed logins: In or around August 2021, Mudge notified then-CTO Agrawal and others that the login system for Twitter’s engineers was registering, on average, between 1500 and 3000 failed logins every day, a huge red flag. Agrawal acknowledged that no one knew that, and never assigned anyone to diagnose why this was happening or how to fix it.
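Even a trivial daily check would have forced someone to notice this pattern. The threshold below is an assumed baseline for what a normal day might look like on an internal engineering login system, not a Twitter figure:

```python
def flag_failed_login_days(daily_counts, expected_max=100):
    """Return the indices of days whose failed-login count exceeds the
    expected ceiling, so the anomaly must be investigated rather than
    silently accumulate. expected_max is an assumed baseline."""
    return [day for day, count in enumerate(daily_counts) if count > expected_max]

# A week resembling the figures in the excerpt (1500-3000 failures/day):
week = [1500, 2200, 3000, 1800, 2500, 90, 40]
```

Against such a baseline, five of the seven days in the sample week would be flagged, which is the "huge red flag" the excerpt describes going undiagnosed.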

No employee computer backups: In or around Q3 or Q4 2021, Mudge learned that no Twitter employee computers were being backed up at all.

Every new employee has access to data they do not need to have access to for the purpose of their role. Until we have implemented a mature centrally owned and operated system to manage access to data (e.g., entitlements and review, Role Based Access Controls, audits, etc) we are at risk of inappropriate access or use of data. Our inability to delete data compounds that risk, as we retain data that we should not have and which is therefore accessible by people who do not need to have access to this data.
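A centrally owned system of the kind described would, at minimum, deny access by default and record every decision for audit. The roles and permission strings below are illustrative, not Twitter's actual entitlements:

```python
# Illustrative role-to-permission mapping; names are hypothetical.
ROLE_PERMISSIONS = {
    "ads-engineer": {"read:ads-metrics"},
    "sre": {"read:prod-logs", "deploy:prod"},
}

AUDIT_LOG = []  # every decision is recorded so access can be reviewed later

def check_access(user, roles, permission):
    """Deny by default: grant only if some assigned role carries the permission."""
    allowed = any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)
    AUDIT_LOG.append((user, tuple(roles), permission, allowed))
    return allowed
```

The point of the sketch is the inversion the excerpt calls for: instead of every new employee receiving broad access, access flows only through explicit, reviewable role grants, and the audit trail makes inappropriate use detectable.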

Deficient moderation for “Spaces”: In December 2021, an executive incorrectly told staff and Board members that Twitter’s “Spaces” product was being appropriately moderated. But Mudge researched and discovered that about half of “Spaces” content flagged for review was in a language that the moderators did not speak, and that there was little to no moderation happening.

Unlicensed machine learning materials for core algorithms: In January 2022, in the days before he was terminated, Mudge learned that Twitter had never acquired proper legal rights to the training material used to build Twitter’s key Machine Learning models. The models at issue were some of the core models running the company’s most basic products, such as selecting which Tweets to show each user. Mudge also learned that Twitter executives had been informed of this glaring deficiency several times over the past years, yet they never took remedial action.

Unless circumstances have changed since Mudge was fired in January, then Twitter’s continued operation of many of its basic products is most likely unlawful and could be subject to an injunction, which could take down most or all of the Twitter platform. Before Mudge could dig deeper into this issue he was terminated.

In January 2022, Mudge determined and reported to the executive team that (because of poor engineering architecture decisions that preceded Mudge’s employment) Twitter had over 300 corporate systems and upwards of 10,000 services that might still be affected, but Twitter was unable to thoroughly assess its exposure to Log4j, and did not have capacity, if pressed in a formal investigation, to show to the FTC that the company had properly remediated the problem.

Penetration by Foreign Intelligence & Threats to Democracy: Over the course of 2021, Mudge became aware of multiple episodes suggesting that Twitter had been penetrated by foreign intelligence agencies and/or was complicit in threats to democratic governance, including:

a. The Indian government forced Twitter to hire specific individual(s) who were government agents, who (because of Twitter’s basic architectural flaws) would have access to vast amounts of Twitter sensitive data.

b. Twitter executives opted to allow Twitter to become more dependent upon revenue coming from Chinese entities even though the Twitter service is blocked in China. After Chinese entities paid money to Twitter, there were concerns within Twitter that the information the Chinese entities could receive would allow them to identify and learn sensitive information about Chinese users who successfully circumvented the block, and other users around the world. Twitter executives knew that accepting Chinese money risked endangering users in China (where employing VPNs or other circumvention technologies to access the platform is prohibited) and elsewhere. Twitter executives understood this constituted a major ethical compromise. Mr. Zatko was told that Twitter was too dependent upon the revenue stream at this point to do anything other than attempt to increase it.

d. A few months before CTO Parag Agrawal was promoted to CEO, Agrawal suggested to Mudge that Twitter should consider ceding to the Russian Federation’s censorship and surveillance demands as a way to grow users in Russia.

e. Shortly before Mudge was XXXX terminated, Twitter received specific information from a U.S. government source that one or more particular company employees were working on behalf of another particular foreign intelligence agency.

Twitter senior leadership have known for years that the company has never held proper licenses to the data sets and/or software used to build some of the key Machine Learning models used to run the service. Litigation by the true owners of the relevant IP could force Twitter to pay massive monetary damages, and/or obtain an injunction putting an end to Twitter’s entire Responsible Machine Learning program and all products derived from it. Either of these scenarios would constitute a “Material Adverse Effect” on the company.

Misrepresenting the 2020 Hack: Following the July 2020 hack by teenagers, Twitter provided updates via unsigned blog entries. Broadly speaking, Twitter drastically overstated the sophistication of the hack, and misrepresented the sophistication of its own defenses.

False assurances on security: A September 24, 2020 blog post by Parag Agrawal and Damien Kiernan also included multiple false assertions.

Current State Assessment

was engaged by Twitter to evaluate the state and structure of Twitter's capabilities in countering misinformation and disinformation, with the goal of identifying gaps in its processes, policies, and approach, as well as opportunities to build the organization’s ability to safeguard the platform and its users. This report details the current state of Twitter’s misinformation and disinformation capabilities as identified by A, based upon internal documents reviewed, stakeholder interviews, and other information gathered as needed.

Broadly, our assessment found that organizational siloing, a lack of investment in critical resources, and reactive policies and processes have driven Twitter to operate in a constant state of crisis that does not support the company's broader mission of protecting authentic conversation. As a result, Twitter is behind the curve in taking action against disinformation and misinformation threats.

Twitter does not have a traditional threat intelligence capability that would better position the company to be proactive on misinformation and disinformation and to protect authentic conversation.

These gaps illustrate the extent to which product and growth are prioritized over online user and platform safety. Twitter further lacks sufficient mechanisms to measure progress and impact.

Our assessment found that Site Integrity teams lack diversity, especially gender diversity, across the analyst and managerial levels. Additionally, the lack of diverse backgrounds among employees contributed to gaps in foreign-language and on-the-ground contextual capabilities, hindering Twitter’s ability to execute its mission and remove harmful content worldwide.

The organizational structure within Twitter that responds to disinformation and misinformation is siloed and not clearly defined. The capabilities were built in an ad hoc manner largely in response to crises. This has contributed to organizational silos, capabilities gaps, and created a culture in which employees must rely on informal relationships across the organization to accomplish work.

Efforts to combat misinformation and disinformation on the platform have evolved in an ad hoc manner as a result of external factors, such as the 2016 elections, coronavirus pandemic, and other pressing threats.

Within SI, the organization’s leadership does not appear to have set clear priorities for ranking threats, making it impossible to prioritize resources, goals, and KPIs.

Twitter is not poised to deliver on its mission globally, especially in non-English speaking countries.

For example, the misinformation team currently has only two individuals and lacks sufficient tools to adequately address the threat on a global scale due to a lack of on-the-ground context. This is especially true in priority growth markets, including Africa, Latin America, and Asia.

The lack of context and understanding has significant implications for the ability to implement policies globally. For example, historically marginalized groups experiencing online threats and harms may not be recognized without an understanding of each country's context; in some countries it is the government or military that violates policies, and Twitter is too understaffed to do much other than respond to the immediate crisis.

Twitter expresses a strong preference for fact-checking and labeling content versus removing the content. However, Twitter teams report not having the capacity to fact check in languages other than English.

Teams in priority growth markets are not sufficiently resourced.

Teams have been persistently understaffed.

Twitter has not sufficiently invested in developing internal tools to address misinformation and disinformation. As a result, employees must use multiple outdated and manual tools to do parts and pieces of their investigations, analysis, and enforcement.

For misinformation, SI must manually annotate each new instance of misinformation identified, and moderators then manually tag tweets they see with this annotation to apply a warning label. This manual process is especially challenging for large events, such as key elections.

SI has access to many data sources, but they are spread across several different systems and require largely manual processes to access and analyze.

There are existing internal tools in other parts of Twitter that would be useful for the misinformation and disinformation use case, but SI analysts do not have access to them. Analysts also lack access to externally available tools or datastreams that would allow them to do more proactive cross-platform analysis.

SI does not have a knowledge management system to track and store findings and data. As a result, SI does not have the ability to monitor threat actors or identify changes in their tactics, techniques, and procedures (TTPs) over time, or to measure the impact of SI's work.

Twitter does not have traditional threat intelligence capabilities to identify, analyze, and warn about current and future threats, or ingest inputs and intelligence from partnerships.

Twitter does not have the capability to add cost to an adversary attempting to exploit the platform.

SI and TWS teams lack staff with geographic expertise and foreign language capabilities.

Twitter's fact-checking and debunking capabilities are mostly limited to English-language content. One interviewee said that they relied heavily on Google Translate for language capabilities.

SI teams lack diversity, especially gender diversity, across both the analyst and management levels.

SI staff are burned out and do not believe Twitter leadership is aware of it.

Staffing in SI is top-heavy, except on the Piper Team. Managers are expected to wear multiple hats, including conducting investigations and creating policies, but they spend most of their time with managerial responsibilities, and report spending their days in back-to-back meetings.

Content moderators in TWS are not adequately resourced, especially to make determinations on misinformation.

Policies are often implemented in response to “fires,” rather than being informed by analysis of current or emerging threats to the platform, and lack an effective enforcement mechanism.

Policies are written for a sophisticated audience, making it difficult for agents on the ground to enforce them.

Twitter’s US-centric approach to policy decisions makes it difficult to detect and mitigate disinformation and misinformation around the world.

While processes exist to elicit feedback from necessary stakeholders, there are no processes to actually incorporate that feedback.

There is currently no unified system for tracking misinformation and disinformation, from identification to remediation, according to staff interviews and the US 2020 retrospective document.

Twitter lacks sufficient processes to measure progress and impact, and therefore fails to implement lessons learned from the past.