
Sociopathic Intelligence - when AI goes rogue

A discussion on Ethical AI and Sociopathic Intelligence on the Innovation Stage at TINTech London Market 2025.

By Allegra Cuomo · Published 11 months ago · 3 min read
Screenshot from TINTech London Market LinkedIn

On 4th February 2025, I presented a talk on the Innovation Stage at the TINTech London Market 2025 conference. The theme of the stage was AI, and through a 'fireside chat' style of presentation, Paolo Cuomo and I discussed the importance of implementing ethical frameworks within AI now, rather than treating them as an afterthought to deal with later on.

In the discussion, I touch upon three terms, one of which I have mentioned in these articles previously. These terms are unethical AI (or alternatively a-ethical, or non-ethical AI), sociopathic intelligence, and ethical debt. In this article I will provide definitions and explanations for these terms in the context of the presentation. These three buzzwords were among the key points I wanted the audience to take away from the talk.

So, the first term: unethical AI. Frankly, as I explain in the presentation, I believe we should be saying ‘a-ethical AI’ or ‘non-ethical AI’ rather than unethical AI. The difference is similar to the distinction between immoral and amoral - the former implying actual wrongdoing, the latter simply a disinterest in morality.

However, the term ‘a-ethical’ does not really roll off the tongue, so for the time being we will go with unethical AI. This term also emphasises that, for now, humans are still the main trainers of AI, and so bad behaviours may well be explicitly or implicitly caused by human trainers.

The point I am making, though, is that as we start to trust AI with decision making, it may not always adhere to the ethical principles we would expect of it, and we may be in for some surprises. Some of these will not be pleasant ones.

Therefore, the point I wanted the audience to leave with was this: as AI is used for future decision making, there is a risk that for some decisions it will act in a way that is inappropriate or even morally dubious. It’s important to recognise that this is not the machine ‘behaving badly’, but rather us putting inadequate thought into what it means to teach a machine to take into account the same ethical and moral dimensions that we do.

The second buzzword I used in the presentation was sociopathic intelligence. This is a term I have discussed multiple times on these channels, and I will link a previous article where I outline the principle here.

In the context of this talk, I wanted to emphasise my worry that, with inappropriate training and inadequate guardrails, we will create AI machines that are overly focused on achieving their sole objective and do not take ethical consequences into consideration.

As we start to ask AI for help with decisions that require more thought than simple ‘yes/no’ answers, we need to train it to take into account the broader ethical considerations of the answers or solutions it provides.

The final term I introduced was ethical debt. Due to the nature of the conference, I knew many in the audience would be aware of the term ‘technical debt’: the constant need to keep paying for historic shortcuts in how you developed your technology.

It is clear that changing specs in systems - whether building them from scratch or configuring them - is almost always far cheaper and cleaner when done at the start rather than later in development. By the same logic, I believe that every decision not made now around the ethics of AI will come back as ethical debt in the years ahead.

Therefore, as with anything new, it is important to formally have people thinking about these questions, even if their actions are initially limited. I am interested to see how this evolves, and whether these people will sit in the technology team or on the risk and compliance side.

These questions, and the use of these buzzwords, are something that I will be following over the upcoming months and in future articles. Let’s see how the landscape changes until then…

Watch the recording of ‘Sociopathic Intelligence - when AI goes rogue’ here from the LinkedIn page of TINTech London Market 2025 (starts from around 9 minutes).


About the Creator

Allegra Cuomo

Interested in Ethics of AI, Technology Ethics and Computational Linguistics

Subscribe to my Substack ‘A philosophy student’s take on Ethics of AI’: https://acuomoai.substack.com

Also interested in music journalism, interviews and gig reviews

