
Investigative Report: Bring Back the Old Internet.

Social Media Algorithms Are Ruining Society.

By Andrew Lehti · Published 21 days ago · 5 min read
The Legs of the YouTube Logo

The internet was once a space of boundless expression, where content thrived on human engagement rather than algorithmic manipulation. A simple, amusing video could amass millions of views—not because it adhered to a predetermined formula, but because it resonated with people. Creativity was vast, unfiltered, and diverse. Today, that spontaneity has been eroded, streamlined into a system that suppresses non-conforming content under the guise of community guidelines and corporate incentives. Platforms now prioritize palatability over authenticity, favoring predictability over ingenuity.

YouTube, once a hub of creative experimentation, has become rigid and homogenized. Content differs in presentation but not in substance, molded to fit within the narrow confines of algorithmic favorability. Even pioneers like the Angry Video Game Nerd have been forced to adapt to the platform’s increasing rigidity, while many others—both those who resisted and those who complied—have been buried by the algorithm, casualties of a system that no longer values unpredictability. The era of spontaneous discovery has been replaced with algorithmic sterility, where content feels manufactured rather than inspired.

The rise of zero-tolerance policies and over-moderation has further stifled originality. Platforms now cater to those who prioritize comfort over inquiry, reinforcing ideological conformity. Critical thought, curiosity, and intellectual debate have been cast aside in favor of emotional fragility, performative inclusivity, and black-and-white thinking. Compliance is now prized over curiosity, uniformity over originality, and obedience over accountability. The result is a digital culture that discourages independent thought, stifles innovation, and rewards those who conform rather than those who challenge.

The original promise of the internet—free discourse and the unrestricted exchange of ideas—has given way to rigid moderation policies that often favor emotional sensitivity over intellectual rigor. Platforms claim to combat misinformation and harm, but in doing so, they often stifle dissent and critical thought. What was once a space of open-ended discussion has become a landscape of pre-approved narratives, where questioning dominant perspectives risks suppression.

At its core, this shift reflects the inevitable clash between corporate interests and free expression. An internet driven by advertising revenue will always prioritize stability and predictability over risk and creativity. The open question is how long people can endure this bland abyss before turning against the corporations behind it; if those corporations do not adapt, a successor that recaptures the raw, unfiltered nature of early online culture will emerge. The alternative is that the internet, like so many other industries before it, becomes permanently tamed.

Report Button

  • Platforms allow users to report content for violating community guidelines. When a post receives multiple reports, it is flagged for review by moderators or automatically removed once a threshold is met.
  • Report buttons are abused when users mass-report content that challenges their worldview, regardless of accuracy or adherence to guidelines. Moderators often see reports stemming from cognitive bias rather than actual violations, and automated systems remove content preemptively, silencing dissent.
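
As a rough illustration of how such a threshold can remove content before any human ever evaluates it, the sketch below models a report counter in Python. The threshold value, field names, and data structures are assumptions for demonstration, not any platform's actual implementation.

```python
# Hypothetical illustration: a report-count threshold that triggers automatic
# removal before any human review. All names and the threshold value are
# assumptions, not a real platform's implementation.

AUTO_REMOVE_THRESHOLD = 50  # assumed value; platforms do not disclose this

def handle_report(post, reporter_id):
    """Record a report and auto-remove the post once the threshold is met."""
    post["reports"].add(reporter_id)           # one report counted per user
    if len(post["reports"]) >= AUTO_REMOVE_THRESHOLD:
        post["visible"] = False                # removed regardless of accuracy
        post["pending_review"] = True          # human review, if any, comes later

post = {"id": 1, "reports": set(), "visible": True, "pending_review": False}
for user in range(60):                         # coordinated mass-reporting
    handle_report(post, user)
print(post["visible"])                         # False: buried before anyone checks the claim
```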

Dislike Button

  • A metric for audience feedback that lets creators gauge content reception.
  • Dislike buttons become a weapon when platforms allow them to counteract likes. Humans instinctively dislike or downvote content based on titles, usernames, or initial impressions, whereas liking content requires a higher threshold of persuasion. This skews engagement metrics against unpopular viewpoints, suppressing dissenting voices.
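
The sketch below illustrates how a ranking score in which each dislike directly offsets a like can sink a post on first-impression downvotes alone. The formula and the numbers are illustrative assumptions, not any platform's real ranking function.

```python
# Hypothetical ranking score in which each dislike offsets a like one-for-one.
# The formula and example numbers are assumptions for demonstration only.

def engagement_score(likes: int, dislikes: int, views: int) -> float:
    """Net approval per view: dislikes cancel likes one-for-one."""
    if views == 0:
        return 0.0
    return (likes - dislikes) / views

# A video downvoted on its title and first impressions sinks below a video
# that most viewers never reacted to at all.
print(engagement_score(likes=120, dislikes=150, views=10_000))  # -0.003: buried
print(engagement_score(likes=50, dislikes=5, views=10_000))     #  0.0045: promoted
```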

Spam Button

  • A tool to filter low-quality or repetitive content by flagging posts as spam, training algorithms to recognize patterns of abuse.
  • Cognitive biases cause users to flag posts without considering evidence, externalizing their discomfort as mockery or anger. Algorithms, trained by these biases, incorrectly categorize legitimate content as spam. Zero transparency prevents users from countering false classifications, making suppression inevitable.
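
The toy example below shows how user flags, taken at face value as training labels, bake those biases into the model: once a topic has been mass-flagged, anything resembling it scores as spam. The data and word-counting model are deliberately simplistic assumptions, nothing like a production classifier.

```python
# Toy illustration: if user "spam" flags become training labels verbatim,
# the classifier learns the flaggers' biases. Data and scoring are made up.

from collections import Counter

def train(posts):
    """Count word frequencies in posts flagged as spam vs. not flagged."""
    spam_words, ham_words = Counter(), Counter()
    for text, flagged_as_spam in posts:
        target = spam_words if flagged_as_spam else ham_words
        target.update(text.lower().split())
    return spam_words, ham_words

def spam_score(text, spam_words, ham_words):
    """Crude score: net count of words seen more often in flagged posts."""
    return sum(spam_words[w] - ham_words[w] for w in text.lower().split())

# Users flag a dissenting topic as "spam" even though it is not junk.
training = [
    ("buy followers cheap now", True),
    ("moon landing evidence discussion", True),   # biased flag, not real spam
    ("weekly gaming stream highlights", False),
]
spam_w, ham_w = train(training)
print(spam_score("new moon landing evidence thread", spam_w, ham_w))  # > 0: treated as spam
```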

Artificial Tempering

  • Platforms reduce the impact of artificially inflated engagement by deprioritizing content flagged for manipulation. This includes preventing repeated refreshes, bot-driven views, or external traffic surges from skewing rankings.
  • Bad actors exploit this by artificially boosting a competitor’s content, triggering platform distrust. The system assumes the creator is manipulating engagement, leading to suppression. This tactic allows competitors to sabotage dissenting voices under the guise of algorithmic fairness.
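
The sketch below shows how a crude spike detector might flag a video whose views suddenly jump far above its baseline; a rival who buys bot traffic for that video triggers the penalty against its creator. The threshold and logic are assumptions chosen purely for illustration.

```python
# Hypothetical "artificial engagement" check that deprioritizes a video when
# the latest hour of views spikes far above its recent baseline. The spike
# factor and the example numbers are assumptions; real detection is far more complex.

def is_suspicious(hourly_views, spike_factor=10):
    """Flag the latest hour if it exceeds spike_factor times the prior average."""
    baseline = sum(hourly_views[:-1]) / max(len(hourly_views) - 1, 1)
    return hourly_views[-1] > spike_factor * max(baseline, 1)

organic = [100, 120, 90, 110, 130]
botted_by_rival = [100, 120, 90, 110, 5_000]   # rival floods the video with bot views

print(is_suspicious(organic))          # False: left alone
print(is_suspicious(botted_by_rival))  # True: creator penalized for traffic they never bought
```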

Spam and Junk Filtering

  • Email filters sort messages into spam based on keywords, sender history, and engagement patterns to protect users from phishing and fraud.
  • These filters can be weaponized by ensuring warnings about criminal actions or critical messages land in junk mail, selectively preventing them from reaching those who can act. Meanwhile, irrelevant emails pass through without issue. Delegating communication control to algorithms introduces selective suppression.
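
The sketch below models a simple keyword-and-reputation filter of the kind described above: a critical warning from an unknown sender is routed to junk while routine promotional mail passes. The rules, keywords, and messages are invented for illustration.

```python
# Hypothetical keyword- and sender-reputation junk filter. The rules, weights,
# and example messages are assumptions for demonstration only.

JUNK_KEYWORDS = {"fraud", "urgent", "lawsuit", "investigation"}

def route(message, sender_reputation):
    """Send a message to 'junk' if it matches keywords and the sender is unknown."""
    hits = sum(word in message["body"].lower() for word in JUNK_KEYWORDS)
    if hits >= 2 and sender_reputation.get(message["sender"], 0) < 1:
        return "junk"
    return "inbox"

reputation = {"newsletter@example.com": 5}     # frequent sender, high reputation
warning = {"sender": "whistleblower@example.org",
           "body": "Urgent: evidence of fraud, please open an investigation."}
promo = {"sender": "newsletter@example.com", "body": "This week's deals inside."}

print(route(warning, reputation))  # junk: the critical warning never reaches anyone
print(route(promo, reputation))    # inbox: the irrelevant mail sails through
```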

Community Violation Restrictions

  • When a post violates community guidelines, the user may be shadow banned: the post is removed from or deprioritized in recommendation algorithms, limiting or eliminating its visibility. This hands an abnormal and largely unchecked amount of power to single corporate entities, which is arguably illegal in ways that are rarely even considered.
  • Guideline enforcement is subjective, allowing platforms to remove users arbitrarily. Content bans can stem from minor infractions or misinterpretations, effectively erasing dissenting voices. Satirical or historical content, even with context, may be flagged as hate speech, demonstrating the dangers of opaque moderation policies.
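
The sketch below illustrates the mechanics of a shadow ban: the author still sees their own post, while the recommendation step silently drops it for everyone else. All names and structures are illustrative assumptions.

```python
# Hypothetical illustration of a shadow ban: the author still sees their post,
# but the feed-building step silently hides it from everyone else.
# All structures and names are assumptions for demonstration.

posts = [
    {"id": 1, "author": "alice", "shadow_banned": False},
    {"id": 2, "author": "bob",   "shadow_banned": True},   # flagged for a "violation"
]

def visible_feed(viewer, all_posts):
    """Authors always see their own posts; shadow-banned posts are hidden from others."""
    return [p for p in all_posts
            if not p["shadow_banned"] or p["author"] == viewer]

print([p["id"] for p in visible_feed("bob", posts)])    # [1, 2]: bob notices nothing wrong
print([p["id"] for p in visible_feed("carol", posts)])  # [1]: everyone else never sees post 2
```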

YouTube Censorship

For years, YouTube has presented itself as a neutral platform where engagement dictates visibility. That notion, however, has been dead since at least 2009. Over the past decade, its algorithms have increasingly suppressed certain content, especially anything that can challenge the status quo. The platform will readily recommend flat-earth nonsense, because that poses no legitimate challenge to the status quo, yet it steers clear of anything that could be legitimate and is not already stigmatized, such as the Joe Rogan Experience.

The suppression of Lunar Forensics: 10-Minute Preview is not just an isolated case of algorithmic bias—it represents a broader, more dangerous trend: the systematic shutdown of dissent, science-based inquiry, and public investigation into the actions of the federal government. Platforms like YouTube, which once served as open forums for discussion and discovery, have transformed into mechanisms of control, selectively burying content that questions official narratives.

This suppression extends beyond mere algorithmic manipulation. Lunar Forensics faced a copyright claim, yet one with no restrictions—meaning it did not require an appeal and imposed no limitations on the video’s availability. In contrast, the Polyhedral Index Partition video received a limited impact copyright notice, which would typically result in reduced visibility.

Yet, despite Lunar Forensics being free of any actual restrictions, it is still being algorithmically buried, while Polyhedral Index Partition, despite a more direct copyright claim, continues to receive far more exposure. This contradiction further exposes how suppression operates beyond simple content policy enforcement.

The dangers of this suppression are immense. When public investigations into government activities—especially scientific and historical ones—are suppressed, it sets a precedent where only state-approved narratives are allowed to flourish. Science thrives on skepticism, debate, and rigorous scrutiny, yet the very tools meant to facilitate such discourse are now being weaponized to silence it.

By shutting down inquiry into sensitive topics like the moon landing, YouTube is effectively stifling independent research, discouraging public participation in scientific discussions, and fostering an environment where questioning authority is treated as dangerous rather than necessary. If left unchecked, this trend will continue to erode transparency, intellectual freedom, and the public’s ability to hold institutions accountable.

Note: screen grabs of the funnels were taken a few weeks prior to the above table


About the Creator

Andrew Lehti

Andrew Lehti, a researcher, delves into human cognition through cognitive psychology, science (maths), and linguistics, interwoven with their histories and philosophies; his 30,000+ hours of dedicated study stand in place of entertainment.
