Social Media Firms Have Come to Ban 'Kicking and Screaming', Says Australia's eSafety Boss
Australia's eSafety Commissioner says social media companies are now enforcing bans amid growing online safety pressure

Australia’s eSafety Commissioner has said that social media companies are now implementing stricter content moderation “kicking and screaming”, highlighting growing pressure on tech platforms to combat harmful online content. The remarks underline the increasing responsibility social media firms face in protecting users, regulating abusive content, and responding to government mandates.
Julie Inman Grant, Australia’s eSafety Commissioner, made the statement during a recent parliamentary session, signaling that while platforms have historically resisted strict moderation, regulatory and public pressure is increasingly forcing change.
What the eSafety Commissioner Said
Inman Grant emphasized that social media platforms have been reluctant to enforce bans on harmful content, but they are now acting due to external pressure.
“Social media firms have come to ban content, often kicking and screaming, but the direction is clear. Online safety is non-negotiable,” she told lawmakers.
She highlighted the role of legislation, penalties, and public scrutiny in compelling tech companies to adopt more rigorous moderation policies.
Why Content Moderation Matters
Content moderation has become a crucial issue in the digital era. Social media platforms must manage billions of posts each day, including:
Hate speech and harassment targeting individuals or communities
Misinformation and fake news, which can influence public opinion and elections
Illegal content, such as child exploitation material or violent imagery
Self-harm and suicide-related content, posing direct safety risks
Regulators argue that ignoring harmful content can have real-world consequences, including mental health risks, societal polarization, and erosion of public trust.
Australia’s Regulatory Approach
Australia has emerged as a global leader in online safety regulation. Under the Online Safety Act, the eSafety Commissioner can:
Issue removal notices for harmful content
Impose financial penalties on non-compliant platforms
Enforce rapid-response procedures for high-risk material
These measures are part of a broader strategy to hold social media companies accountable and ensure platforms prioritize user protection.
Social Media Companies’ Reluctance
Despite external pressure, platforms often resist strict moderation for several reasons:
Balancing free speech and safety
Volume of content, making automated moderation a necessity
Legal and cultural differences across countries
Business considerations, as moderation can affect engagement metrics
Inman Grant’s comments reflect that regulatory pressure is often the main catalyst for change, pushing companies to implement policies they might otherwise avoid.
Implications for Users
For social media users, stricter moderation may result in:
Safer online environments, particularly for children and vulnerable groups
Faster removal of harmful or abusive content
Increased accountability for creators and users, discouraging negative behavior
However, some critics warn that over-moderation could inadvertently suppress legitimate speech, emphasizing the need for transparent, consistent rules.
Challenges in Implementation
While regulations have accelerated action, content moderation remains complex:
Platforms must screen enormous volumes of content every day, relying heavily on automated (AI) tools
Contextual understanding is necessary to avoid false positives (see the sketch after this list)
Users may attempt to evade moderation or migrate to less-regulated platforms
Legal appeals and transparency requests can slow enforcement
These challenges mean that even with regulations, moderation is ongoing and iterative, requiring continuous improvement.
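As a rough illustration of why the false-positive problem above pushes platforms toward human review, here is a minimal, hypothetical sketch of a threshold-based moderation pipeline. The thresholds, the classify_harm keyword heuristic, and the routing labels are all invented for this example; real systems use trained classifiers, per-category policies, and far more nuanced escalation paths.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real platforms tune these per policy area and market.
AUTO_REMOVE_THRESHOLD = 0.95   # very confident the post violates policy
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: route to a human moderator

@dataclass
class Post:
    post_id: str
    text: str

def classify_harm(post: Post) -> float:
    """Stand-in for an ML model that scores how likely a post is to violate policy.

    A real system would use one or more trained classifiers (usually one per
    harm category); this trivial keyword heuristic exists only to make the
    example runnable.
    """
    flagged_terms = ("kill yourself", "dox", "leaked address")
    return 0.97 if any(term in post.text.lower() for term in flagged_terms) else 0.10

def moderate(post: Post) -> str:
    """Route a post based on model confidence.

    High-confidence violations are removed automatically; mid-confidence cases
    go to human review, which is where contextual judgement (satire, news
    reporting, quoted abuse) reduces false positives.
    """
    score = classify_harm(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

if __name__ == "__main__":
    for text in ("Great match last night!", "I will dox you and post your leaked address"):
        print(f"{text!r} -> {moderate(Post(post_id='demo', text=text))}")
```

The middle band is where the trade-off lives: removing content automatically at lower confidence scales better, but it also removes more legitimate speech, which is exactly the over-moderation concern critics raise.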
Global Trends in Regulation
Australia’s experience reflects a global trend toward holding tech companies accountable. Other countries, including the UK, Canada, and EU member states, have introduced laws requiring platforms to:
Monitor and remove harmful content
Report safety compliance regularly
Maintain clear and fair moderation policies
Experts suggest that coordinated international regulation may become increasingly common as governments seek to protect online users while balancing free expression.
The Role of Government Pressure
Government oversight has been pivotal in accelerating change. By combining:
Legislation and fines
Mandatory reporting requirements
Public scrutiny and transparency campaigns
Regulators can ensure that social media platforms prioritize safety over engagement metrics. Inman Grant's remarks highlight that without such pressure, platforms might have remained reluctant to enforce bans effectively.
Looking Ahead
The eSafety Commissioner anticipates that social media platforms will continue to strengthen moderation practices, but ongoing oversight will remain essential. Future priorities may include:
Transparent moderation decisions and appeal mechanisms
Improved tools for user reporting of harmful content
Enhanced protection for children and vulnerable groups
Global collaboration to harmonize safety standards
Experts believe that user safety, accountability, and transparency will remain central as social media continues to evolve.
Conclusion
Julie Inman Grant’s statement that social media companies have acted “kicking and screaming” underscores a pivotal moment in online safety. While platforms are adopting stricter content moderation policies, regulatory oversight remains crucial to ensure they protect users, uphold standards, and maintain trust.
Australia’s approach provides a model for other nations, demonstrating that legal frameworks, public accountability, and continuous monitoring are essential in creating a safer, more responsible online environment. As regulations tighten globally, social media companies will likely continue evolving under pressure, balancing user safety with freedom of expression.



