The X Paradox: An Analysis of Platform Governance, User Safety, and Inauthentic Activity Under Elon Musk

Executive Summary

This report analyzes the social media platform X under Elon Musk’s ownership. It examines the profound shifts in governance, user safety, and core architecture.

The central argument is that these changes have created the ‘X Paradox.’ The platform champions ‘free speech’ but creates a hostile environment that silences many users. It promotes ‘authenticity,’ yet its systems fail to stop inauthentic activity and often penalize genuine users.

The analysis details several key issues:

  • Inconsistent Policies: Rules for world leaders are applied inconsistently.
  • Eroding Trust: A monetized verification system has damaged user trust.
  • Bot Proliferation: Automated accounts persist, degrading the user experience and manipulating political discourse.
  • Declining Safety: Hate speech has measurably increased while content moderation has collapsed. This disproportionately impacts women and marginalized communities.
  • Opaque Appeals: The process for appealing suspensions is frustrating and lacks transparency.

The report concludes that this transformation is not an accident. It is the successful implementation of a new, permissive philosophy that externalizes the cost of safety onto its users.

Section 1: The World Leader Conundrum: Policy, Precedent, and Contradiction

The continued presence of controversial world leaders on X is a central point of contention. The platform allows figures like Iran’s Supreme Leader, Ayatollah Ali Khamenei, to maintain an active presence. At the same time, it suspends ordinary users for seemingly lesser infractions. This raises fundamental questions about the platform’s content moderation policies.

This analysis argues that the strategic shifts since 2022 are not failures of execution. They are the successful implementation of a new, non-interventionist philosophy. The result is a platform that fails to protect users from authentic harm while penalizing them for algorithmically misinterpreted inauthenticity. This has led to a profound erosion of trust and degraded the platform’s value as a global public square.

1.1 The “Public Interest” Doctrine (Pre-Musk Era)

Before its acquisition, Twitter developed a specific policy for world leaders. It addressed statements that could violate platform rules but were undeniably newsworthy. This framework was often called the ‘Trump Rule’ due to its development during Donald Trump’s presidency. It was a ‘public interest’ exception, not a grant of immunity, designed to balance free expression with public safety.¹

Under this 2019 policy, rule-violating tweets from major political figures could remain on the site if keeping them available was deemed to be in the public’s interest. However, the platform significantly curtailed their visibility.

Such tweets were placed behind a warning label that read:¹

“The Twitter Rules about abusive behavior apply to this Tweet. However, Twitter has determined that it may be in the public’s interest for the Tweet to remain available.”

Users had to actively click through this warning to view the content. Crucially, the platform algorithmically suppressed these labeled tweets. They were not eligible to appear in searches or be recommended.¹

This system was a deliberate attempt to navigate the complex intersection of news, speech, and harm. It acknowledged the value of world leaders’ pronouncements for public debate. But it also recognized that such speech could be harmful. By limiting the reach of rule-violating content, the policy aimed to inform the public without amplifying harassment or hate speech.¹

1.2 Musk’s “UN Rule” and Its Inconsistent Application

Following his acquisition, Elon Musk replaced this nuanced framework with a simpler principle. He articulated his policy as the “UN rule” in response to criticism over verifying Ayatollah Khamenei’s account. Musk stated, “if the UN recognizes someone, then we will” allow them on the platform. He framed this as a non-moral decision that “simply recognizes what the international community views to be accurate.”²

However, the application of this rule has been fraught with contradictions. This is best exemplified by the platform’s handling of Ayatollah Khamenei’s various accounts. His primary, verified English-language account remains active. Yet, X has taken decisive action against other accounts associated with him.

In January 2021, before the takeover, Twitter suspended a fake Persian-language account linked to Khamenei’s website. The account had posted a threatening image depicting a golfer resembling former U.S. President Donald Trump being targeted by a drone. The company cited its rules against platform manipulation and fake accounts for the suspension.³

More revealingly, in October 2024, Khamenei opened a new account in Hebrew amidst heightened military tensions. The account was suspended after just two posts. The second post contained a direct threat: “The Zionist regime made a mistake. It erred in its calculations on Iran. We will cause it to understand what kind of strength, ability, initiative, and will the Iranian nation has.”⁴ This message was substantively similar to one on his active English account.⁵

The suspension of the Hebrew account, while the English one remained, shows that enforcement is not based on a simple “UN rule.” This differential treatment suggests a “Zonal Policy” of moderation. Rules are applied differently depending on language and geopolitical context.

Musk’s own engagement further complicates this inconsistency. He has used the platform to make sarcastic jabs at the Supreme Leader.⁶ He has also adopted a more serious tone, pointing out Khamenei’s stated goal of eradicating Israel.⁷ This behavior creates a “Sovereign-as-Moderator” model. The platform’s owner acts as a powerful user and political commentator, not a neutral arbiter. This makes policy enforcement appear unpredictable and subject to personal whims.

1.3 The Platform as a Geopolitical Tool

The platform’s policies on world leaders underscore its role as an instrument of global statecraft. Heads of state and government officials use X for official communications, announcements, and negotiations.⁹, ¹⁰ This cements its status as a digital extension of the geopolitical landscape.

In this context, critics view Musk’s self-proclaimed “free speech absolutism” as a dangerously naive stance. They argue that authoritarian regimes exploit open platforms to flood democratic societies with disinformation and sow institutional distrust.² By providing a direct channel for figures like Khamenei, the platform becomes a conduit for their strategic narratives.

A clear example emerged during the 2024-2025 Iran-Israel conflict. Amidst Khamenei’s hardline threats, old, softer posts from his account resurfaced. These decade-old tweets included reflections on poetry and praise for Jawaharlal Nehru’s book Glimpses of World History.¹¹, ¹² This was a sophisticated public relations tactic. It served to humanize a hardline leader during a period of intense international scrutiny. This demonstrates the platform is not a passive “public square” but an active battleground for shaping global perceptions.

Section 2: The Architecture of Mistrust: Verification, Anonymity, and Inauthentic Activity

These high-level policy contradictions are mirrored by a series of fundamental changes to the platform’s core architecture, which have profoundly eroded user trust. The shift from a merit-based verification system to a paid subscription model, coupled with a persistent bot problem, has created an environment where authenticity is difficult to discern. This has led to a “Crisis of Heuristics,” where the cognitive shortcuts users once relied on to navigate the platform have been systematically destroyed.

2.1 From Verification to Subscription: The Collapse of Authority

Historically, Twitter’s blue checkmark was a powerful signal of authenticity. The company vetted and granted it to accounts of public interest like journalists and public officials. It served as a crucial heuristic, allowing users to quickly identify genuine sources.

In April 2023, this system was dismantled. The legacy blue checkmarks were removed. Verification became a feature available to any user willing to pay for an X Premium subscription.¹³, ¹⁴

The transition was chaotic. The initial rollout in November 2022 was swiftly paused after a wave of malicious impersonation. Users purchased verification to create fake accounts of high-profile individuals and corporations. For example, an account impersonating the pharmaceutical giant Eli Lilly and Company tweeted that insulin would be made free. The fake tweet caused a temporary dip in the company’s stock price and forced the real corporation to pull its advertising.¹³

This event crystallized the impact of the policy change. The meaning of the blue checkmark inverted overnight. It went from a marker of authenticity to a marker of a commercial transaction. This created widespread confusion and provided a new tool for scammers, who could now purchase an emblem of legitimacy for a nominal fee.¹³, ¹⁴

2.2 The Bot Epidemic: “Defeat the Spam Bots or Die Trying”

A central promise of Elon Musk’s acquisition was his vow to “defeat the spam bots or die trying”.¹⁵ This resonated with users frustrated by automated accounts. However, evidence suggests the problem has not been solved and may have worsened.

An AI-driven analysis in January 2024 estimated that as many as 64% of the 1.269 million accounts it analyzed were “potentially bots”.¹⁵ Another cybersecurity expert estimated the figure could be over 80%.¹⁶ Users report being inundated by spam, particularly pornographic bots and financial scams. Critically, many of these bot accounts now sport a blue checkmark, using their paid status to appear more legitimate.¹⁶

This situation reveals a fundamental contradiction. Musk has a stated goal of eliminating bots. However, the platform’s business model relies on selling X Premium subscriptions. Since many bots are now paying subscribers, an aggressive campaign to eliminate them would also eliminate paying customers and harm revenue. This conflict of interest helps explain why the bot problem persists.

2.3 The Anonymity Paradox: Joker Profiles and the Fear of the Unidentified

The platform’s struggles with authenticity intersect with the complex issue of user anonymity. The presence of anonymous accounts with menacing profiles, such as those with “Joker profile pictures,” raises valid questions about why individuals can hide their identity while engaging in harmful behavior.

However, anonymity online is dual-edged. It can empower harassers, but it has also been an essential tool for protecting dissidents, activists, and whistleblowers in authoritarian countries. The core issue is not anonymity itself. It is the platform’s failure to enforce its own rules against harmful behavior, regardless of the user’s identity status.

The suspension of an ID-verified user for a behavioral pattern, while an anonymous user engaging in threats remains, highlights this systemic failure. It shows that enforcement mechanisms are not calibrated to assess the content of speech. Instead, they detect simplistic behavioral patterns that are algorithmically flagged as suspicious. In this environment, an ID-verified user can be punished for appearing inauthentic, while an anonymous user can thrive by being authentically malicious.

Section 3: The Unwritten Rules of Engagement: Follows, Blocks, and the Logic of Suspension

A user’s suspension often stems from a collision between nuanced human behavior and rigid, automated systems. An examination of X’s policies on following and blocking reveals a system ill-equipped to understand user intent. It penalizes benign activities while weakening the tools users need to protect themselves from genuine harm.

3.1 “Follow Churn”: Why Your “Yu-Gi-Oh Deck” Strategy Triggered Alarms

A user’s suspension after following and unfollowing accounts for “intellectual sparring” can be attributed to policies against “follow churn.” X’s rules explicitly prohibit aggressive, bulk, or automated following and unfollowing. This behavior is a hallmark of spam and platform manipulation.¹⁸, ¹⁹

To combat this, the platform employs automated systems to monitor for such patterns.²⁰ While the exact thresholds are not public, community data suggests daily limits (e.g., 400 follows per day) and hourly guidelines (e.g., under 60 unfollows per hour).¹⁸, ²¹, ²²

A strategy of treating follows like a “Yu-Gi-Oh deck”—curating a dynamic list of interlocutors—mimics the signature of an automated bot from a data-driven perspective. The platform’s algorithms are not designed to comprehend the user’s intellectual intent; they are designed to recognize a mathematical pattern. Consequently, this authentic engagement style is miscategorized as inauthentic manipulation, leading to suspension.²², ²³ This outcome exemplifies a fundamental flaw in over-reliant algorithmic enforcement.
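The detection logic X actually runs is not public, but the pattern-matching described above can be sketched as a simple sliding-window counter. The class below is a hypothetical illustration only, built around the community-reported thresholds cited above (roughly 400 follows per day, under 60 unfollows per hour); the class name, methods, and thresholds are assumptions, not X’s implementation.

```python
from collections import deque
import time


class FollowChurnDetector:
    """Hypothetical sliding-window check modeled on the community-reported
    limits cited in the text (~400 follows/day, <60 unfollows/hour).
    X's real detection logic is not public; this only illustrates how a
    purely numerical pattern check ignores user intent."""

    def __init__(self, max_follows_per_day=400, max_unfollows_per_hour=60):
        self.max_follows_per_day = max_follows_per_day
        self.max_unfollows_per_hour = max_unfollows_per_hour
        self.follows = deque()    # timestamps of follow actions
        self.unfollows = deque()  # timestamps of unfollow actions

    def _prune(self, events, window, now):
        # Drop events that fell out of the sliding window.
        while events and now - events[0] > window:
            events.popleft()

    def record_follow(self, now=None):
        self.follows.append(time.time() if now is None else now)

    def record_unfollow(self, now=None):
        self.unfollows.append(time.time() if now is None else now)

    def is_suspicious(self, now=None):
        now = time.time() if now is None else now
        self._prune(self.follows, 86400, now)   # 24-hour window
        self._prune(self.unfollows, 3600, now)  # 1-hour window
        return (len(self.follows) > self.max_follows_per_day
                or len(self.unfollows) > self.max_unfollows_per_hour)
```

A user curating a dynamic “deck” of interlocutors would trip the unfollow counter exactly as a spam bot would: the counter sees only timestamps, never intent.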

3.2 The Weakened Shield: How Blocking Became Ineffective

While automated systems can be overzealous, a deliberate policy change has made a critical user safety tool less effective. Under a new policy, the block function has been fundamentally weakened. A blocked user can no longer reply to, like, or follow an account. However, they are no longer prevented from viewing that account’s public posts. A blocked harasser can now continue to monitor their target’s timeline, take screenshots, and use quote-tweets to continue their abuse.²⁴

This change represents a philosophical shift from a user-centric safety model to a content-centric visibility model. The old block function created a private boundary for public speech. The new function prioritizes the public nature of the content above all. This aligns with a “free speech absolutist” ideology but fails to recognize that a sense of safety is a prerequisite for free expression. For many vulnerable users, the inability to sever contact with a harasser creates a “chilling effect,” leading them to self-censor or abandon the platform.²⁴

This change can even transform blocking into an offensive weapon. A harasser can post an abusive comment and then immediately block their target. This prevents the target from replying directly, effectively giving the abuser the last word. Digital safety experts have criticized this power imbalance as a severe degradation of user safety.²⁴

Section 4: “Freedom of Speech, Not Reach”: Harassment, Hate, and User Experience

The operational and philosophical shifts on X have culminated in a demonstrably less safe and more hostile environment. Under the principle of “Freedom of Speech, Not Freedom of Reach,” the platform has systematically dismantled prior safety policies. This has led to a measurable spike in hate speech and a precipitous decline in enforcement actions, with a profound human cost.

4.1 Policy Disassembly: The Rollback of Protections

Since the 2022 acquisition, X has initiated a comprehensive rollback of key content moderation policies. This is a deliberate redesign of the platform’s rules.

  • Violent Speech: The platform softened its violent speech policy. The previous “zero tolerance policy” was replaced with language stating that X “may remove or reduce the visibility of violent speech.” Permanent suspension is now reserved only for “certain cases” of severe violations.²⁵, ²⁶
  • Hate Speech and Harassment: Protections for transgender users were reduced. The explicit prohibition against targeted misgendering and deadnaming was removed in April 2023. It was later partially reinstated in a much weaker form.²⁵, ²⁷, ²⁸
  • Misinformation: Entire policy categories were eliminated. Rules targeting harmful misinformation related to COVID-19 and elections were removed.²⁵
  • Privacy: X’s new privacy policy introduced provisions allowing it to collect users’ biometric data and metadata from encrypted messages, which critics warned could constitute a form of mass surveillance.²⁹

4.2 The Data of Decline: Measuring the Rise in Hostility

The consequences of these policy changes are quantifiable and stark.

  • Academic Research: A 2025 study in PLOS One found that after the takeover, the weekly rate of posts with homophobic, transphobic, and racist slurs increased by approximately 50%. “Likes” on these hateful posts doubled, indicating wider reach.³⁰
  • Internal Enforcement Data: X’s first transparency report under Musk showed suspensions for “hateful conduct” plummeted by 99.7% compared to the pre-Musk era.³¹ Out of 81 million user reports for abuse, only 1.35% of accounts were suspended. The enforcement rate for hate speech was even lower, at just 0.004%.³²
  • Third-Party Audits: A report from the Center for Countering Digital Hate (CCDH) found that 86% of 300 posts containing “extreme hate speech” remained active a week after being reported.³³

This data does not depict a system that is failing at content moderation. It shows a system successfully implementing a new, intentionally permissive model. The dramatic drop in suspensions is a feature, not a bug, of the “Freedom of Speech, Not Reach” philosophy.

Table 1: Comparative Analysis of X/Twitter’s Content Moderation Policies & Enforcement (Pre- vs. Post-Acquisition)

  • Hateful Conduct Suspensions: Pre-acquisition, routine enforcement; post-acquisition, suspensions down 99.7%, with a 0.004% enforcement rate for hate speech reports.
  • Policy on Violent Speech: Pre-acquisition, a “zero tolerance” policy; post-acquisition, X “may remove or reduce the visibility of violent speech,” with permanent suspension reserved for “certain cases.”
  • Policy on World Leaders: Pre-acquisition, a “public interest” exception with warning labels and algorithmic suppression; post-acquisition, an inconsistently applied “UN rule.”
  • Transgender User Protections: Pre-acquisition, an explicit prohibition on targeted misgendering and deadnaming; post-acquisition, removed in April 2023 and only partially reinstated in weaker form.
  • Blocking Functionality: Pre-acquisition, blocked users could not view an account’s posts; post-acquisition, blocked users can still view public posts.
  • Verification System: Pre-acquisition, a vetted, merit-based blue checkmark; post-acquisition, a paid X Premium subscription feature.

4.3 Voices from the Trenches: The Lived Experience of Harassment

The experience of user Khosro Raúl Soleimani provides a visceral case study. After a political debate, Soleimani became the target of a coordinated harassment campaign. The abuse included doxing, racist and anti-LGBTQ+ slurs, and baseless accusations sent to his employer. The harassment escalated to target his family, with abusers mocking his disabled nephew.

Despite repeatedly reporting these severe violations, Soleimani states that X “almost always left the post in place” and did “nothing in response.” The experience led to a diagnosis of post-traumatic stress disorder and forced him to cease active engagement for his own safety.³⁵, ³⁶ This account illustrates a “Safety Inversion,” where the platform’s mechanisms have been co-opted by abusers.

4.4 A Chilling Effect: The Documented Experience of Women on X

The degradation of safety on X has disproportionately impacted women. Research confirms women face a barrage of gendered abuse, including image-based abuse, cyberstalking, and threats of sexual violence.³⁷, ³⁸, ³⁹

Under the new regime, this problem has metastasized into a “trust and security crisis”.⁴¹ A 2025 survey by Uplevyl found that women are abandoning social media in response to the “hostile environment.” The survey revealed that 66% of women had taken breaks from social media, and 24% of women specifically quit X.⁴¹ Over 60% of women surveyed reported experiencing harassment online. Consequently, trust has collapsed: only 10% of women considered conversations on X to be “completely truthful.”⁴¹ This data shows a platform whose failure to ensure safety is actively driving away a key demographic.

Section 5: Manufacturing Consent: The Unreliability of Political Discourse on X

The perception of rampant inauthenticity on X extends to its function as a political forum. Wildly contradictory poll results between X and other platforms are not an anomaly. They are a direct consequence of severe demographic bias and systematic, deliberate manipulation. These factors render on-platform polls useless as a measure of public opinion.

5.1 Echo Chambers and Demographic Skews

Informal polls on social media are not scientific. They survey a platform’s active user base, not the general population. The demographic and political composition of these user bases can vary dramatically.

According to the Pew Research Center, the profiles of news consumers on X and Truth Social are vastly different. X’s news consumers skew younger and have a slight Republican lean. In contrast, Truth Social’s user base is smaller, older, and overwhelmingly Republican.⁴²

Given these baked-in skews, it is predictable that a poll on X would yield a different result than one on Truth Social. Each poll simply reflects the prevailing opinion of its own echo chamber.

5.2 The Invisible Thumb on the Scale: Bots and Troll Farms

Beyond demographic bias, a more insidious force is at play: deliberate manipulation. Research from the University of Massachusetts Amherst provides definitive evidence that political polls on X are systematically skewed by coordinated inauthentic activity.

The study analyzed over 100,000 polls during the 2020 U.S. presidential election. It found that results were significantly distorted by “questionable votes,” many likely purchased from commercial “troll farms”.⁴³, ⁴⁴ On average, these manipulated polls showed Donald Trump winning the 2020 election with 58% of the vote, a stark contrast to his actual 47% share.⁴³, ⁴⁴

This research confirms that the platform’s polls are not just flawed; they are actively compromised. The goal is “perception hacking.” By flooding a poll with fake votes, malicious actors can create an illusion of overwhelming grassroots support. This manufactured result can then be shared widely to build a narrative of momentum and sow distrust in legitimate polling.⁴⁴ In this context, the polling feature on X is not a broken tool for discourse; it is a highly effective weapon for information warfare.
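The leverage such manipulation provides can be shown with simple arithmetic. Assuming fake votes all go to one option, lifting a share from the true 47% to the observed 58% takes surprisingly few of them. The function below is an illustrative sketch, not drawn from the UMass study’s methodology:

```python
import math


def fake_votes_needed(organic_votes: int, true_share: float,
                      target_share: float) -> int:
    """Minimum number of fake votes, all cast for one option, needed to
    lift that option's share from true_share to target_share.

    From (true_share * n + b) / (n + b) >= target_share, solve for b:
        b >= n * (target_share - true_share) / (1 - target_share)
    """
    n = organic_votes
    b = n * (target_share - true_share) / (1 - target_share)
    return math.ceil(b)


# With the figures cited above (true share 47%, manipulated polls showing
# 58%): about 27 fake votes per 100 organic ones suffice.
print(fake_votes_needed(100, 0.47, 0.58))  # → 27
```

Because each fake vote both adds to the numerator and inflates the denominator only slightly, a manipulator needs to add only about a quarter again as many votes to move a poll by eleven points.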

Section 6: The Path of the Deplatformed: Navigating an Opaque Appeals Process

For users who fall afoul of the platform’s opaque enforcement systems, the path to reinstatement is a journey into a bureaucratic black box. The suspension appeals process on X is characterized by a lack of transparency, poor communication, and the perception of automated, indifferent decision-making.

6.1 The Black Box of Appeals

The experience of being suspended is often the beginning of a prolonged, frustrating ordeal. Users suspended in early 2025 report that filing an appeal frequently results in complete silence. Many receive no response at all for weeks or even months.⁴⁵

For those who do receive a response, the process often feels like a futile loop. Users describe submitting a detailed appeal only to receive an almost instantaneous rejection. This suggests an automated process with no meaningful human review.⁴⁶ The initial reason for suspension is frequently vague, leaving users confused about what rule they allegedly broke.⁴⁷

While some users report eventual success, it is often only after a campaign of persistent, repeated appeals over several months.⁴⁶ This reality is at odds with the platform’s official guidance, which suggests a response time of 24-48 hours.¹⁹

6.2 The Lack of Due Process

A functioning system of justice requires transparency, a meaningful appeal, and a clear verdict. The current appeals process on X fails on all three counts. The charges are vague, the appeal appears to be reviewed by an algorithm, and the final decision is often delivered without explanation.⁴⁷, ⁴⁸, ⁴⁹

This lack of due process is not merely an inconvenience. It represents a significant cost-saving measure that externalizes the burden of moderation failures onto the user base. Human content moderators are expensive. Following the acquisition, Elon Musk laid off a substantial portion of the company’s trust and safety teams.¹⁴, ³¹

An automated, non-responsive appeals system is significantly cheaper to operate. The broken nature of the process can be seen as an intentional feature of a broader cost-cutting strategy. The platform saves money by making the appeals process so difficult that many users simply give up.

Conclusion

The evidence presents a clear narrative. The social media platform X, under Elon Musk, has undergone a radical transformation. It has become a demonstrably less safe, less authentic, and less reliable forum for public conversation. This transformation is defined by the “X Paradox”: under the banner of “free speech,” the platform has become less free for those harassed into silence. Under the banner of “authenticity,” its automated systems punish authentic users while enabling inauthentic actors.

The platform’s approach to world leaders is not governed by a simple “UN rule” but by a contradictory “Zonal Policy.” The platform’s architecture has been reconfigured in ways that generate mistrust. The monetization of verification created a “Crisis of Heuristics,” empowering impersonators and bots. The persistent bot epidemic has degraded the user experience and rendered political polling a tool for “perception hacking.”

For the individual user, the environment is increasingly hostile. Core safety features have been weakened, and moderation policies have been dismantled. This has led to a measurable spike in hate speech and a near-total collapse in enforcement, creating a “chilling effect” that is driving users, particularly women, away.

The strategic shifts since 2022 are not failures of execution. They are the successful implementation of a new, non-interventionist philosophy. The result is a platform that fails to protect its users from authentic harm while penalizing them for algorithmically misinterpreted inauthenticity.

Looking forward, the trajectory of X raises critical questions about the future of large-scale social media. If current trends continue, the platform risks not only further commercial decline but also a permanent loss of its cultural and political relevance. The ‘X Paradox’ may ultimately serve as a cautionary tale. It demonstrates that a commitment to free expression, without a corresponding commitment to user safety and platform integrity, can lead to an environment where meaningful speech is drowned out by the noise of manipulation and abuse.


Works Cited

  1. Wong, Julia Carrie. “The Trump rule? World leaders that violate Twitter rules will get warning label.” The Guardian. June 27, 2019. https://www.theguardian.com/technology/2019/jun/27/twitter-warning-labels-tweets-violate-site-rules
  2. Stradner, Ivana. “Elon Musk, Hypocrisy, and Verifying the Iranian Supreme Leader.” Foundation for Defense of Democracies. October 24, 2023. https://www.fdd.org/analysis/2023/10/24/elon-musk-hypocrisy-and-verifying-the-iranian-supreme-leader/
  3. Reuters. “Twitter Suspends ‘Fake’ Account Linked To Iran Leader For Warning Trump.” NDTV. January 22, 2021. https://www.ndtv.com/world-news/twitter-suspends-account-of-irans-top-leader-ayatollah-ali-khamenei-after-tweet-warning-donald-trump-2356733
  4. The Economic Times. “X suspends Iran supreme leader Khamenei’s new Hebrew account as Iran-Israel tensions escalate following missile strikes.” The Economic Times. October 28, 2024. https://m.economictimes.com/news/international/global-trends/x-suspends-iran-supreme-leader-khameneis-new-hebrew-account-as-iran-israel-tensions-escalate-following-missile-strikes/articleshow/114671572.cms
  5. ToI Staff. “Khamenei’s day-old Hebrew account on X suspended after threats against Israel.” The Times of Israel. October 28, 2024. https://www.timesofisrael.com/khameneis-day-old-hebrew-account-on-x-suspended-after-threats-against-israel/
  6. Baku.ws. “Elon Musk addressed the supreme leader of Iran.” June 22, 2025. https://baku.ws/en/world/elon-musk-addressed-the-supreme-leader-of-iran
  7. Jerusalem Post Staff. “Elon Musk: ‘Khamenei’s position clear that eradication of Israel is the goal’.” The Jerusalem Post. October 9, 2023. https://www.jpost.com/middle-east/iran-news/article-765308
  8. Times of India. “Community Notes: Iran Supreme Leader Khameini’s ‘Hezbollah is the victor’ gets brutally fact checked by Elon Musk’s Community Notes.” Times of India. October 5, 2024. https://timesofindia.indiatimes.com/world/us/iran-supreme-leader-khameini-hezbollah-elon-musk-community-notes/articleshow/113848769.cms
  9. Military.com. “World Leaders Express Hope After Trump Says Israel and Hamas Agreed to First Phase of Peace Deal.” October 9, 2025. https://www.military.com/daily-news/2025/10/09/world-leaders-express-hope-after-trump-says-israel-and-hamas-agreed-first-phase-of-peace-deal.html
  10. Anadolu Agency. “Turkish President Erdogan to attend Sharm el-Sheikh peace summit on Gaza ceasefire.” aa.com.tr. October 21, 2024. https://www.aa.com.tr/en/middle-east/turkish-president-erdogan-to-attend-sharm-el-sheikh-peace-summit-on-gaza-ceasefire/3715306
  11. DH Web Desk. “‘Naughty and playful’: Supreme leader Khamenei trolled over decade-old posts amid Israel-Iran conflict.” Deccan Herald. June 21, 2025. https://www.deccanherald.com/world/naughty-and-playful-supreme-leader-khamenei-trolled-over-decade-old-posts-amid-israel-iran-conflict-3596485
  12. ET Online. “‘I Didn’t Know India…’: Ayatollah Khamenei’s old tweets go viral amid Israel-Iran conflict.” The Economic Times. June 21, 2025. https://m.economictimes.com/news/new-updates/i-didnt-know-india-ayatollah-khameneis-old-tweets-go-viral-amid-israel-iran-conflict/articleshow/121990266.cms
  13. Wikipedia. “Twitter Blue verification controversy.” Last modified April 20, 2023. https://en.wikipedia.org/wiki/Twitter_Blue_verification_controversy
  14. Britannica. “Twitter.” https://www.britannica.com/money/Twitter
  15. Internet 2.0. “Elon Was Right About Bots.” January 25, 2024. https://internet2-0.com/bots-on-x-com/
  16. Rothke, Ben. “Bots are spelling the demise of X.” Medium. September 2025. https://brothke.medium.com/bots-are-spelling-the-demise-of-x-604e83a9b76b
  17. Wikipedia. “Twitter bot.” Last modified November 2022. https://en.wikipedia.org/wiki/Twitter_bot
  18. SocialDog. “What Are the Twitter Follow/Unfollow Limits?” December 22, 2022. https://social-dog.net/en/trend/p25
  19. Qura.ai. “Twitter (X) Account Suspension: How to Get Your Account Back Fast.” https://www.qura.ai/blog/twitter-x-account-suspension-how-to-get-your-account-back-fast
  20. All About Cookies. “How to Recover a Suspended X Account.” https://allaboutcookies.org/twitter-vpn
  21. SocialDog. “What is the Twitter Unfollow Limit Per Hour?” December 22, 2022. https://social-dog.net/en/trend/p25
  22. Lifewire. “How to Unfollow People on X (Formerly Twitter).” https://www.lifewire.com/follow-twitter-user-tool-tips-3288838
  23. Hypefury. “Why your Twitter account got suspended & How to fix it.” https://hypefury.com/blog/en/why-your-twitter-account-got-suspended-how-to-fix-it/
  24. Montague Law. “X’s New Block Policy: A Double-Edged Sword for User Privacy and Platform Transparency.” https://montague.law/ai/xs-new-block-policy-a-double-edged-sword-for-user-privacy-and-platform-transparency/
  25. Schulz, Wolfgang, et al. “Four central policy developments of X under Musk.” HIIG. October 28, 2024. https://www.hiig.de/en/policy-changes-of-x-under-musk/
  26. Platform Governance Archive. “X (formerly Twitter) softens its violent speech policy.” platform-governance.org. October 30, 2023. https://platform-governance.org/2023/x-formerly-twitter-softens-its-violent-speech-policy/
  27. Binder, Matt. “Elon Musk’s X has a new policy that discourages — but doesn’t prohibit — anti-trans hate.” Mashable. March 2024. https://mashable.com/article/elon-musk-x-twitter-new-anti-trans-harassment-policy
  28. TechPolicy.Press. “X’s Updated Misgendering and Deadnaming Policy Should Concern All Social Media Users and Believers in Democracy.” March 4, 2024. https://www.techpolicy.press/x-s-updated-misgendering-and-deadnaming-policy-should-concern-all-social-media-users-and-believers-in-democracy/
  29. Amnesty International. “Global: X’s new policy risks violating right to privacy for millions.” September 2023. https://www.amnesty.org/en/latest/news/2023/09/global-xs-new-policy-risks-violating-right-to-privacy-for-millions/
  30. Manke, Kara. “Study finds persistent spike in hate speech on X.” Berkeley News. February 13, 2025. https://news.berkeley.edu/2025/02/13/study-finds-persistent-spike-in-hate-speech-on-x/
  31. Lazine, Mira. “X suspends ‘hateful’ users 99% less often than Twitter used to.” LGBTQ Nation. September 25, 2024. https://www.lgbtqnation.com/2024/09/x-is-suspending-users-for-hateful-conduct-99-less-often-than-twitter-used-to/
  32. Kirkland, Colin. “X Report: We Remove More Accounts, Suspend Fewer Users Than Twitter Did.” MediaPost. September 26, 2024. https://www.mediapost.com/publications/article/399766/x-report-we-remove-more-accounts-suspend-fewer-u.html
  33. Digital Watch Observatory. “Report reveals X’s persistent failure to remove ‘extreme hate speech’ posts.” September 13, 2023. https://dig.watch/updates/report-reveals-xs-persistent-failure-to-remove-extreme-hate-speech-posts
  34. Ortutay, Barbara. “Twitter announces ‘violent speech’ policy that’s similar to its old rules against violent threats.” Associated Press. March 1, 2023. https://apnews.com/article/twitter-elon-musk-violent-speech-hate-1912497f4e4f444f1ba123cbb2d1ad59
  35. Soleimani, Khosro Raúl. “Why X, formerly known as Twitter, is not worth it.” The Daily of the University of Washington. May 24, 2024. https://www.dailyuw.com/article/why-x-formerly-known-as-twitter-is-not-worth-it-20240524
  36. Soleimani, Khosro Raúl. “Why X, formerly known as Twitter, is not worth it.” The Daily of the University of Washington. May 24, 2024. https://www.dailyuw.com/article/why-x-formerly-known-as-twitter-is-not-worth-it-20240524
  37. eSafety Commissioner. “Online risks for women.” https://www.esafety.gov.au/women/online-risks-for-women
  38. Amnesty International. “Technology-Facilitated Gender-Based Violence.” https://www.amnesty.org/en/what-we-do/technology/online-violence/
  39. eSafety Commissioner. “Online risks for women.” https://www.esafety.gov.au/women/online-risks-for-women
  40. Amnesty International. “#ToxicTwitter: Violence and abuse against women online.” https://www.amnesty.nl/actueel/online-abuse-of-women-thrives-as-twitter-fails-to-respect-womens-rights
  41. Fair Play Talks. “Women abandoning social media in droves, citing a ‘hostile environment’.” March 13, 2025. https://www.fairplaytalks.com/2025/03/13/women-abandoning-social-media-in-droves-citing-a-hostile-environment/
  42. Pew Research Center. “Social Media and News Fact Sheet.” https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
  43. Manning College of Information & Computer Sciences. “Social Media Polls Deliberately Skew Political Realities of 2016, 2020 US Presidential Elections, Finds Research Team Led by UMass Amherst.” University of Massachusetts Amherst. July 16, 2024. https://www.cics.umass.edu/news/social-media-polls-skew-realities-election
  44. Grabowicz, Przemyslaw. “X Polls Skew Political Realities of US Presidential Elections.” TechPolicy.Press. September 20, 2024. https://www.techpolicy.press/x-polls-skew-political-realities-of-us-presidential-elections/
  45. Wikipedia. “Twitter suspensions.” Last modified early 2025. https://en.wikipedia.org/wiki/Twitter_suspensions
  46. Reddit user Genc007. Comment on “Has X changed the appeal form for suspended accounts?” Reddit. 2025. https://www.reddit.com/r/24hoursupport/comments/1hskqsv/has_x_changed_the_appeal_form_for_suspended/
  47. Lifewire. “How to Appeal a Suspended or Limited X Account.” https://www.wikihow.com/Recover-a-Suspended-Twitter-Account
  48. e-Cabilly. “How to Recover Suspended X Account: A 2024 Guide.” https://e-cabilly.com/blog/how-to-recover-suspended-x-account-guide/
  49. TweetDelete. “Twitter Suspension Appeal: How To Get Your X Account Back.” https://tweetdelete.net/resources/twitter-suspension-appeal/
