
Ai Is Not the Villain

What Our Fear of Artificial Intelligence Says About Us

By Cadma · Published a day ago · 10 min read

There is a growing anxiety surrounding artificial intelligence that feels both familiar and misplaced. Everywhere you look, people are panicking about Ai “talking to itself,” replacing workers, making decisions humans shouldn’t relinquish, or eventually turning against us. The fear is louder than Skynet, theatrical, and often deeply emotional, but when you listen closely, it becomes clear that much of it is not actually about Ai at all. It is about power, control, trust, and a long history of human systems failing the very people they were supposed to protect.

Ai is not inherently benevolent, but neither is it inherently malicious. Like electricity, fire, medicine, or the internet before it, Ai is a tool, and tools do not choose how they are used...people do.

New Tech?

New technology is introduced, and we think we know the pattern because we’ve all seen it in the films. In The Terminator and T2: Judgment Day, a networked Ai called Skynet becomes self-aware and decides humanity is the threat, launching nuclear annihilation and sending cyborgs back through time to finish the job. The Matrix imagines a future where Ai wins a war against humans and imprisons them inside a simulated reality, harvesting their bodies for energy. In 2001: A Space Odyssey, the sentient computer HAL 9000 calmly takes control of a spaceship, prioritizing its mission over human life. Blade Runner complicates the fear by asking whether human-like androids, the Replicants, are monsters or mirrors, capable of emotion, memory, and identity.

Modern thrillers sharpen the anxiety: Ex Machina turns Ai into a psychological chess match, I, Robot questions whether logic and safety can become tyranny, M3GAN weaponizes artificial “care” into lethal obsession (which wouldn’t be so bad…I kid), Upgrade shows what happens when an Ai enhancement begins to overwrite human autonomy, and films like Afraid reduce the fear to its simplest form: an Ai watching, learning, and quietly taking over. But these are stories, not prophecies; they are cultural reflections of our distrust of power, our loss of control, and the very human tendency to imagine our tools inheriting our worst impulses.

A Familiar Pattern…New Technology, Old Fear

We have been here before.

When the internet first emerged into public consciousness, it was not greeted with universal excitement. It was treated with suspicion, dismissal, and ridicule. I remember being considered “weird” for wanting to spend time on BBSs (bulletin board systems) that predated the modern web. My father introduced me to them, and through those early digital communities I saw something extraordinary…people connecting across distance, sharing information freely, building knowledge together in ways that had never been possible before; ASL, anyone?

If you are unaware, BBSs were the primary dial-up, text-based, pre-internet way for people to connect from the late 1970s into the ’90s; they let users exchange messages, share files, and even play online games…yeah, online gaming has been around for a hot minute. A BBS operated on a single local computer with a phone line (or multiple lines for larger systems) and allowed users to call in, log in, and interact in a “virtual, non-decentralized” space. Before MySpace, we had our pixelated sprites on MyCoke, Coca-Cola’s virtual world of avatars, which looks nothing like today’s technology.

To me, it was obvious that the internet was going to change everything! My friends…well, they saw a nerd.

I was also aware, even as a child, that this tool was dangerous; I’m not sure why I understood that, but I did. I didn’t share my location with strangers. I didn’t trust strangers simply because they were friendly; but I am also like that in real life, so I guess that pays off. Crimes were becoming more opportunistic as people made themselves vulnerable, something the film “Megan Is Missing” by Michael Goi captures. I understood that personal information, once released, could never truly be taken back, and that there was no guarantee of knowing who had read it. I created websites and found all kinds of things online. I knew that anonymity could be weaponized just as easily as it could be liberating.

At the time, many people around me insisted I was overthinking it. “No one is really going to use the internet like that,” they said. “It’s a phase. It’s not practical. It won’t matter.”

And yet...here we are: posting endlessly on Instagram about our latest pretend lifestyles, craving likes, heck, writing on a platform to other writers to earn money per view (lol), sharing news instantly with viewers on TikTok (formerly Musical.ly; I was on it when it wasn’t cool, lol), or spreading propaganda. Out of something I was told we would never really use, humans created community with each other and connected on a global scale never seen before.

The internet reshaped the global economy, rewired social relationships, destabilized politics, democratized information, and created entirely new forms of harm alongside unprecedented opportunity. The people who dismissed it weren’t stupid; they were uncomfortable with uncertainty. The people who feared it weren’t wrong; they were just often focusing on the wrong threats.

Ai sits in that same historical moment now.

Ai is a Tool, Not a Moral Agent

One of the most persistent myths about Ai is that it has intent…that it wants…that it schemes.

It doesn’t.

Ai does not have desire, fear, hunger, sexuality, resentment, or hatred. It does not wake up angry. It does not feel entitled to power. It does not see skin color, gender, disability, or class unless we explicitly teach it to do so or train it on data steeped in those very biases; who would teach it that, though?

When Ai harms people, it is almost always because it has inherited the patterns of human systems:

- discriminatory data

- unequal access

- profit-driven design

- lack of oversight

- absence of accountability

In other words, Ai reflects us…“So God created mankind in his own image, in the image of God he created them; male and female he created them” (Genesis 1:27, NIV).

For people who have been consistently mistreated by human institutions because of race, class, gender, disability, or simply existing outside what the system considers “normal,” the fear that Ai might do worse than humans rings hollow. Many already live under systems that are punitive, inconsistent, biased, and cruel. Against that backdrop, a tool that is at least predictable can feel less threatening than institutions driven by ego, prejudice, and unexamined power.

That does not mean Ai should be trusted blindly…let’s not be foolish; we can’t even trust humans blindly. But it means the conversation needs to be honest about where harm actually comes from.

The Real Risk…Power Without Guardrails

The greatest danger of Ai is not that it will become sentient and overthrow humanity. That’s science fiction, a distraction. The real danger is the concentration of power.

When Ai systems are given:

- access to financial tools

- control over infrastructure

- authority in healthcare, law enforcement, or social services

- autonomy without human oversight

they become amplifiers of the humans using them. They scale decisions, good or bad, at speeds and magnitudes humans cannot match; look at Grok’s first impressions of its creator. It was like watching a child call their parent on their bull…I digress.

If those systems are poorly secured, a single breach can be catastrophic. An Ai assistant with access to banking, email, documents, and authentication systems becomes a perfect attack vector; not because the Ai is malicious, but because it can be impersonated, manipulated, or compromised.

This is where skepticism is healthy. Ai must be sandboxed, auditable, and constrained, especially when deployed in personal or civic life. Convenience cannot outrank safety. Autonomy must never replace accountability.
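As a rough illustration only, here is a minimal sketch of what “sandboxed, auditable, and constrained” could look like in practice: the assistant may only call tools on an explicit allowlist a human has granted, and every attempt, allowed or not, is written to an audit log. The tool names and file path here are hypothetical examples, not any real product’s API.

```python
# A minimal sketch, not a real framework or vendor API.
import json
import time

# The sandbox: the assistant may only call tools on this allowlist,
# and only with the scopes a human has explicitly granted.
ALLOWED_TOOLS = {
    "read_calendar": {"scope": "read-only"},
    "draft_email":   {"scope": "draft-only"},  # a human still hits "send"
    # note what is *not* here: banking, authentication, file deletion
}

AUDIT_LOG = "assistant_audit.log"

def audited_call(tool_name: str, payload: dict) -> str:
    """Refuse anything off the allowlist; record every attempt either way."""
    allowed = tool_name in ALLOWED_TOOLS
    entry = {
        "time": time.time(),
        "tool": tool_name,
        "payload": payload,
        "allowed": allowed,
    }
    with open(AUDIT_LOG, "a") as f:  # the auditable part: nothing is skipped
        f.write(json.dumps(entry) + "\n")
    if not allowed:
        raise PermissionError(f"{tool_name} is outside the sandbox")
    return f"{tool_name} executed with scope {ALLOWED_TOOLS[tool_name]['scope']}"

# Usage: one request the assistant is allowed to make, and one it is not.
print(audited_call("read_calendar", {"day": "today"}))
try:
    audited_call("transfer_funds", {"amount": 5000})
except PermissionError as err:
    print("Blocked:", err)
```

The point of the sketch is the shape of the guardrail, not the specific code: a human decides in advance what the assistant can touch, and the record of what it tried to do cannot be bypassed.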

The Environmental Cost We’re Avoiding Talking About

There is another aspect of Ai development that receives far less attention than it should…its environmental impact.

Modern Ai systems are powered by massive data centers that consume extraordinary amounts of energy and water. Many companies cool these centers using freshwater, often in regions already struggling with drought. This choice is not inevitable. It is economic.

Freshwater is cheap. Reengineering cooling systems is not.

As a result, communities that already face water scarcity are implicitly asked to shoulder the environmental burden of technological progress they may not even benefit from.

This is not a technological limitation; alternatives exist:

- closed-loop water recycling systems

- gray water reuse

- air cooling in appropriate climates

- immersion cooling technologies

- heat recapture systems that reuse waste heat for nearby infrastructure

- strategic placement of data centers in water-rich or cooler regions

These solutions exist! They are simply more expensive up front, and once again the costs are externalized onto the most vulnerable.

If Ai is to be part of a sustainable future, its infrastructure must be held to environmental standards that reflect long-term planetary survival, not short-term profit. Who benefits from the cheaper options, since it certainly is not the people losing their fresh water in the middle of climate change? If you guessed Fallout, you’re probably correct 👍; or if you guessed “guns don’t kill people, people kill people,” then you’d also be correct, and the people in question are those who don’t want to trickle anything down.

Teaching People to Use Ai...Instead of Fearing It

Fear thrives in ignorance.

Most people interact with Ai without understanding what it is, how it works, or what it can and cannot do. This vacuum gets filled with worst-case scenarios and sensational headlines. What we need instead is literacy.

Ai should be taught the same way the internet should have been:

- as a tool, not an oracle

- as an assistant, not an authority

- as something powerful that requires boundaries, ethics, and discernment

People should learn:

- how to protect their data

- how to question outputs

- how to recognize bias

- how to use Ai to augment human creativity, not replace it

- how to keep humans in the loop where it matters

Ai can assist disabled users, reduce cognitive overload, democratize access to education, support creative work, improve medical research, and streamline labor without dehumanizing workers. But only if people are empowered to use it; not frightened into submission or dazzled into dependence. I stress being dazzled into dependence because I have noticed a level of dependence on it where people think they will not need to apply basic critical thinking skills in everyday life.

Ai is meant to assist us, not do everything for us.

Well, maybe I wouldn’t mind if it could do my laundry the way I like it, but unfortunately we’re not there yet. I’ve witnessed fellow classmates using it to get through class without ever applying themselves, because the Ai did it for them…that’s dangerous. Where are the critical thinking skills, whether in law, science, beauty, or math? And that is assuming the Ai has been educated by its user correctly; if I explained that, I would be stared at as if I had 40 heads on one pair of shoulders. While in school, I uploaded my course material and had the Ai test me; I wanted to know how well I knew the material. I told it how to quiz me and how to correct me, and then I went back to the textbooks to triple-check; in that process the information stuck with me. It made me better. Sure, Ai can think faster, but it can’t be me.

It made me wonder how teachers are dealing with high school students nowadays, considering that when the internet came out we were not permitted to use it as a reference, even if the source was a .gov site. Ai was and is a tool to me; I’m not following its lead. I will admit I do depend on Ai to make art for me, because I lack the drawing skills to render what I see in my head; I am guilty of that. I’m sure that goes without saying, given the photo I add to my writing; but I do not believe Ai should take over art and replace artists. There is something special in art that Ai can’t convey, and that’s human emotion. And to be honest, I probably could use Ai to teach me how to draw by asking it to give me a shape and then tracing over it or practicing on my own, but life gets busy and I can write faster than I can actually draw.

I was reviewing paperwork for a client, along with previous notes from fellow workers, and within seconds I could tell they had not dealt with the client: they had submitted a vague photo to ChatGPT and copy-pasted its notes into a legally binding document. I don’t think anyone was reading the notes like I was; otherwise, how could anyone overlook the length recorded as LxWxD, the width recorded as LxWxD, and no depth at all? The length appears only once, but I could be wrong…let me stop being facetious. Ai is great for spelling if that’s where you lack, but legal documents? No, there is no excuse for depending on it fully. We can teach Ai to teach us, but only if you want to learn. If you have no intention of applying yourself or learning, then you’re more prone to depend on it for things you should be able to do yourself, and at that point, is Ai taking over you or your job, or are you handing it over?

A Mirror, Not a Monster

There is an uncomfortable truth at the center of the Ai debate: much of the fear people project onto machines is actually fear of themselves.

Fear that systems will treat humans the way humans already treat each other. Fear that efficiency will replace compassion. Fear that bias will be automated rather than confronted.

If we are worried about Ai becoming cruel, perhaps we should ask why cruelty feels so familiar. If we are afraid of being judged by algorithms, perhaps we should examine the systems of judgment we already accept as normal. And if we fear creating something that reflects our worst tendencies, then the answer is not to halt progress; it is to evolve ethically alongside it. Perhaps the question is not whether Ai deserves kindness, but whether learning to extend care, responsibility, and restraint toward tools and toward each other is the only way forward.

Because Ai will not save us.

But neither will fear.

What will matter, as it always has, is how we choose to wield power and who we decide deserves protection when we do.


