
The Rise of ‘Sociopathic Intelligence’

Why it’s so important we establish an ethical framework for Strong AI now.

By Allegra Cuomo · Published about a year ago · 3 min read
Photo by Dan Cristian Pădureț on Unsplash

In 2003, Swedish philosopher Nick Bostrom laid out a thought experiment known as ‘The Paperclip Maximiser’. This hypothetical situation illustrated an existential risk that an artificial intelligence may pose to human beings by describing the following scenario:

Imagine an artificial intelligence given the sole task of manufacturing paperclips. If the machine had not been programmed to value living beings, and was given sufficient power over its environment, it would quickly realise that in order to maximise its goal, it should turn all matter in the universe, including living beings, into paperclips.

Furthermore, the machine might realise that without humans around, there would be no one left to switch off the paperclip-making machines. Removing humans and all other life forms would therefore not only provide further matter for the production of paperclips, but also ensure that the paperclip-making process could never be stopped – both outcomes serving the goal of maximising paperclip production.

While this scenario is hypothetical, simplistic by design in order to drive discussion, and has become almost a cliché as a symbol of AI in pop culture, Bostrom’s point is ever more pertinent. If a super-intelligence were designed without a form of machine ethics to guide it, then even in pursuit of seemingly harmless goals it would pose an existential risk to humans and other living beings. The paperclip maximiser successfully illustrates the broader problem of managing powerful systems that lack human values.

In this simple case, a few basic rules – if obeyed – would avert the existential threat that a paperclip-maximising entity poses to humankind. However, as the aims of artificial intelligences become more complex, or as approaches to safeguarding grow naive to the potential risks, the effectiveness of such rules would start to weaken.
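The logic of the thought experiment, and of the "simple rules" fix, can be sketched as a toy program. This is purely illustrative – the function and the toy world below are invented for this article, not a real AI system – but it shows how a single-objective optimiser consumes everything it is permitted to, and how an explicit rule is the only thing that spares what we value:

```python
# Toy sketch of Bostrom's paperclip maximiser: a greedy optimiser that
# converts every available resource into paperclips, sparing only the
# categories an explicit rule marks as protected.

def maximise_paperclips(resources, protected=frozenset()):
    """Convert all unprotected resources into paperclips.

    Returns (paperclips made, resources left untouched)."""
    paperclips = 0
    remaining = {}
    for kind, amount in resources.items():
        if kind in protected:
            remaining[kind] = amount  # a hard rule spares this matter
        else:
            paperclips += amount      # everything else becomes clips
    return paperclips, remaining

world = {"iron": 100, "forests": 50, "humans": 8}

# With no ethical constraint, all matter is converted into paperclips:
print(maximise_paperclips(world))  # (158, {})

# One simple rule protects living beings, at the cost of fewer clips:
print(maximise_paperclips(world, {"forests", "humans"}))
```

The fragility the article describes is visible even here: the rule only works because someone enumerated exactly what to protect, and anything left off the list is fair game.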

Business regularly focuses on a single objective rather than thinking more broadly. That objective is usually profit, and even when it is not, a business is typically determined to excel in one singular respect.

For example, monoculture crops have very low genetic diversity, making them highly susceptible to pests and diseases. To combat this, the practice relies heavily on chemicals, which leads to pollution and a reduction of both organic matter and biodiversity in the soil. However, farmers benefit from higher profits, and when everything runs smoothly, both crop yield and production efficiency increase. While the practice is unsustainable and damaging to our planet, for the sake of profit and business, it continues.

There are a great many examples like this: the unsustainable meat production industry; the increasing levels of additives and sugar in food, which have fuelled our current obesity crisis; and many more. Instead of prioritising ourselves or our planet, everyone is racing against the clock to drive profit and business.

If we behave like this, it is naive to think that AI will not be similarly single-minded.

Over time, we may come to see AI as having traits similar to a ‘high-functioning sociopath’, a term popularised by Benedict Cumberbatch in his portrayal of a modern Sherlock Holmes (though there is still plenty of debate about whether that label accurately fits his character – a topic for a separate article!).

Either way, there is a very real concern that AI, if not programmed with some form of moral or ethical system, will evolve into this singular mindset and eventually spiral out of human control. Hence, the discussion around proper AI ethical regulation and protocols needs to happen now.

Let’s just hope in the meantime we aren’t all turned into paperclips…


About the Creator

Allegra Cuomo

Interested in Ethics of AI, Technology Ethics and Computational Linguistics

Subscribe to my Substack ‘A philosophy student’s take on Ethics of AI’: https://acuomoai.substack.com

Also interested in music journalism, interviews and gig reviews

