Russia Leads in Threats as X Drops 800M Accounts
By Jim Thomas From Newsmax

Social media platform X reported to British lawmakers that it suspended about 800 million accounts in 2024 under its spam and platform-manipulation rules, underscoring what the company described as a daily fight against state-backed influence operations and other inauthentic activity on its platform.
Wifredo Fernandez, a government affairs executive at parent company X Corp., told members of Parliament, “There are efforts every single day to create inauthentic networks of accounts” and said Russia remained the most prolific state actor on the platform, followed by Iran and China.
He said Russian activity around the 2024 U.S. presidential election aimed to undermine institutions and “stoke division,” often by trying to “flood the zone” with a particular narrative.
The company did not break down how many of the suspensions were specifically tied to foreign interference.
Fernandez also said he was “quite confident” that the remaining accounts on X were authentic.
A House of Commons transcript of the March 9 hearing shows Fernandez gave a more precise figure of “799 million and change” account suspensions in 2024 under X’s spam and platform-manipulation policy, and said Russia, China, and Iran were the top three state sources X tracks, with Russia typically first.
X describes manipulative behavior as coordinated or otherwise inauthentic activity that artificially influences conversations or disrupts the service. Its help pages define spam more broadly as unsolicited, repeated actions that negatively affect other accounts.
X’s disclosure that it suspended about 800 million accounts in 2024 comes as the platform continues to face criticism over content moderation after tech billionaire Elon Musk bought the company, formerly known as Twitter, in 2022.
In Britain, the platform was criticized for helping spread inflammatory speculation after the Southport killings of three young girls in 2024.
Musk has long argued that bots and fake accounts were a central problem on the platform, making spam enforcement a recurring issue for the company.
X is not alone in facing this problem. Meta, which runs Facebook, Instagram, and Threads, said in February 2026 that it had removed 200 networks engaged in coordinated inauthentic behavior since 2017 and was again activating its Election Operations Center for major 2026 elections.
Meta says it publishes regular adversarial-threat reports and treats fake accounts and coordinated inauthentic behavior as policy violations.
Google’s platforms, especially YouTube, continue to appear in quarterly Threat Analysis Group bulletins as targets of coordinated influence campaigns.
In its Q4 2025 bulletin, published Jan. 29, 2026, Google said it terminated multiple Russia-linked influence operations, including a batch of 1,256 YouTube channels, and also blocked some domains from eligibility for Google News and Discover.
In earlier 2025 bulletins, Google reported additional takedowns tied to Russia, China, Azerbaijan, and other actors.
The pattern across these companies is similar: Each describes a mix of fake accounts, engagement manipulation, impersonation, and coordinated influence operations, then responds with bulk removals, account-creation blocks, election task forces, automated detection systems, and recurring transparency reports.
Jim Thomas
Jim Thomas is a writer based in Indiana. He holds a bachelor’s degree in Political Science, a law degree from U.I.C. Law School, and has practiced law for more than 20 years.