If Musk Starts Firing Twitter's Security Team, Run
Elon Musk has bought Twitter for $44 billion, capping the least sexy will-they-won't-they saga of all time. And while Musk attempted to reassure advertisers yesterday that “Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences,” the acquisition raises practical questions about what the social network's nearly 240 million active users can expect from the platform going forward.
Chief among these concerns are questions about how Twitter's stances on user security and privacy may change in the Musk era. A number of top Twitter executives were fired last night, including CEO Parag Agrawal; general counsel Sean Edgett; and Vijaya Gadde, the company's head of legal policy, trust, and safety, who was known for working to protect user data from law enforcement requests and court orders. Gadde ran the committee that ousted Donald Trump from Twitter in January 2021 following the Capitol riot. Musk, meanwhile, said in May that he would reinstate Trump on the platform and called the former US president's removal “morally bad.”
This afternoon, Musk wrote that “Twitter will be forming a content moderation council with widely diverse viewpoints. No major content decisions or account reinstatements will happen before that council convenes.” Content moderation has real implications for user security on any platform, particularly when it involves hate speech and violent misinformation. But other topics, including the privacy of Twitter direct messages, protection from unlawful government data requests, and the overall quality of Twitter's security protections, will loom large in the coming weeks. That's especially true in light of recent accusations from former Twitter chief security officer Peiter “Mudge” Zatko, who in an August whistleblower report described Twitter's digital security defenses as grossly inadequate.