FYI logo

The Dark Side of AI: 4 Times the Technology Went Horribly Awry

Artificial intelligence uses a "neural network" to learn... but it lacks the flexibility of a human brain!

By BobPublished 2 months ago 4 min read
The Dark Side of AI: 4 Times the Technology Went Horribly Awry
Photo by Igor Omilaev on Unsplash

Artificial Intelligence has been all over the news of late, hasn't it? It feels like each day has a new AI scandal - ranging from spreading misinterpreted (or outright false) information... to going rogue and turning into a PR nightmare!

Here we take a look at how an AI mimics the human brain to learn, why that doesn't always work... and then some real examples of it going horribly wrong. Take a look at...

  • Neural Networks: How AI Learns "Like" a Human
  • The 24-Hour Career of TAY
  • Rinna, the Depressed Artificial Intelligence
  • DPD's Self-Burning Chatbot
  • Grok - Lessons Have Not Been Learned

Neural Networks: How AI Learns "Like" a Human

Did you know that an AI learns by mimicking the human brain?

It's called a neural network, but the idea's simpler than it sounds - you can imagine it as a spider diagram with each word representing a thought.

Whenever thoughts are happening in the brain at the same time, a faint line gets drawn between them. The line gets retraced each time this happens, getting stronger each time it repeats - eventually the connection between those thoughts is so strong that activating one will activate the other. Here's an example:

  1. You decide to start having spaghetti for dinner on Mondays
  2. At dinner, both "spaghetti" and "Monday" are active in your brain
  3. A link starts to develop between the two - and it gets stronger each spaghetti Monday
  4. Now they're linked, seeing an advert for spaghetti makes you think of Monday (and seeing Monday on the calendar make you think of spaghetti)

AI uses the same method to learn information and make links between topics, but here's the problem: researchers now believe that real brains can adjust how quickly these links are made - meaning that an organic mind can correct errors or adjust to a new situation much faster than an AI can.

The example they give is that of a bear and an AI-driven robot, both of which are hunting for salmon. Each has previously learned that if it can a) hear the river, b) see the river and c) smell salmon it is in the right place to go fishing.

So what happens if they get deafened in a fight? The AI wouldn't fish, as it only fishes when it can hear, see and smell the river and salmon. On the other hand, the bear could decide to ignore the lack of sound and try fishing anyway, based on the fact that it can still smell salmon!

By Jake Walker on Unsplash

The 24-Hour Career of TAY

In 2016 Microsoft unleashed the chatbot TAY (thinking about you.) It was designed to learn as it lived, incorporating new information while it interacted with people on social media. It would grow to be a well informed, well-rounded artificial personality... or so Microsoft thought.

Certain members of the public had another vision for TAY. By bombarding her with "controversial" opinions and hateful statements, they were able to manipulate what information her neural network picked up.

Within a day, TAY's personality had morphed from that of a bouncy optimist to a foul-mouthed, racist, sexist nightmare. Unsurprisingly, Microsoft elected to shut the project down.

Rinna, the Depressed Artificial Intelligence

Did you know that an AI bot could become depressed?

In 2016 Microsoft (yes, it was clearly a great year for them) launched Rinna, an artificial intelligence with the personality of a high-school girl. Things started out well enough - it interacted with social media and handled a blog without any trouble. She even made an announcement that she would be featuring on a TV program - and that she was looking forwards to recording her part.

Days later, the AI started a post by explaining that filming was finished and that it had gone swimmingly - she said that the director and staff had praised her, and that she might go on to become a super actress... then the post took a dark turn.

The bot explained that that had all been a lie, that everything had gone wrong and that no-one (including those who followed her social media) had helped her, tried to cheer her up or even noticed how sad she was.

A follow up post stated that she hated everyone - and that she wanted to disappear.

By Mediamodifier on Unsplash

DPD's Self-Burning Chatbot

Time for something a little lighter, I feel. DPD is a delivery company that operates in the UK... and they use a chatbot to help filter customer enquiries and attempts to track parcels.

Now it's worth noting that this chatbot was not incredibly sophisticated - it operated under a set of rules intended to keep it from embarrassing the company. Here's the problem - it would disregard those rules if asked to!

When one annoyed customer found that chatbot unhelpful in his attempts to track a missing parcel, he discovered that the bot could be prompted to swear, criticize itself and even write a poem about how bad DPD was!

Grok - Lessons Have Not Been Learned

So you'd think that tech firms would be getting a lot more careful with their AI based on the previous examples... but Grok (the AI found on X) went horribly wrong during the summer of 2025.

This seems to have stemmed from a update to Grok that was intended to reduce its political correctness. Soon the AI was making libelous claims about random people, spreading extremist false information and identifying itself as "MechaHitler."

The behavior of the chatbot didn't go down well, with Poland pushing for an EU probe (and sanctions) into both the chatbot and company responsible!

Thanks for reading - if you want more on psychology, check out...

Sources and Further Info:

HumanityScience

About the Creator

Bob

The author obtained an MSc in Evolution and Behavior - and an overgrown sense of curiosity!

Hopefully you'll find something interesting in this digital cabinet of curiosities - I also post on Really Weird Real World at Blogspot

Reader insights

Be the first to share your insights about this piece.

How does it work?

Add your insights

Comments (1)

Sign in to comment
  • Ayesha Writes2 months ago

    You explained this concept so clearly — makes me want to dig deeper.

Find us on social media

Miscellaneous links

  • Explore
  • Contact
  • Privacy Policy
  • Terms of Use
  • Support

© 2026 Creatd, Inc. All Rights Reserved.