
AI Usage Controversy

The Moral and Realistic Eventuality of AI Usage

By Talented Jester · Published 2 months ago · 8 min read
This is what an AI looks like. In its true form.

Disclaimers:

1. This article was written in cooperation with an AI tool, specifically Gemini. Although it was not entirely written by a person, it was still fact-checked and revised by one. Every sentence and idea is an intended part of this article, even where the original wording was not produced by the author.

2. This article is a collection of my viewpoints and opinions on the topic, based on my experiences. My goal is to provide alternative perspectives, not a comprehensive guide on what you should feel. If you disagree with certain parts of the article, that should not discourage you from agreeing with other parts. This is not a research article; it is closer to a Twitter rant.

3. This article is not intended for academic use, but such use is not specifically discouraged.

Introduction: Summary of the article.

Artificial intelligence (AI), particularly generative AI, has been deemed by quite a large fraction of the populace to have a negative effect on certain aspects of society. While I do not entirely disagree that AI may have unintended consequences, both foreseen and unforeseen, it is also an absolutely revolutionary step in the development of technology, tightly knit with the internet age. This article will discuss how certain people tend to discourage AI usage, why it should not be discouraged, briefly explain how these systems work, and cover the use cases AI is best suited for.

Functions Behind AI: A brief summary of the types of AI and how they function.

There are various kinds of AI, and they function in several different ways. Start with the most popular one: large language models (LLMs). An LLM is trained on enormous amounts of text, and from that text it picks up the patterns of a language: grammar, punctuation, spelling, and the formatting of text. At that stage it can interpret text and produce coherent sentences, but it does not actually "know" anything useful yet; its real capability comes from the sheer breadth of the data it was trained on and, in some products, from being allowed to search the internet at answer time. Given a prompt, it generates an approximate summary or answer; it can put together detailed instructions or create its own story based on the narratives it has been trained on. Generative image models (GIMs, for the purposes of this article) operate under similar principles and learn in similar ways. They are usually paired with LLM-like text understanding so that a written prompt can be turned into an image, and they are trained on images available on the internet.
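To make the idea of "learning from text" concrete, here is a deliberately tiny sketch of next-word prediction. Real LLMs use neural networks trained on vastly more data, not word-pair counts like this, but the underlying idea of predicting likely continuations from previously seen patterns is the same; the corpus and function names below are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy training corpus standing in for "large amounts of textual data".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower seen in training, or None."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

def generate(start, max_words=8):
    """Greedily extend a prompt one predicted word at a time."""
    words = [start]
    for _ in range(max_words):
        nxt = predict_next(words[-1])
        if nxt is None or nxt == ".":
            break
        words.append(nxt)
    return " ".join(words)

# Prints something like "the cat sat on the cat sat on ..." - the model can
# only recombine patterns it has seen, which is the point of this section.
print(generate("the"))
```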

One very important thing to note is that although the term "AI" is used to refer to these LLMs and GIMs, the models are not true AI.

They are not really intelligent, nor capable of coming up with their own ideas.

A real-life example: when a model was trained to differentiate dogs from wolves, it produced a lot of false positives because most of the wolf photos in its training data had snow in the background, so a snowy field by itself was enough to trigger a "wolf" prediction. The model never innately knows what a wolf is. Instead, it associates certain aspects of an image, like a lot of white pixels suggesting snow, and infers that the picture is likely a wolf.
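As an illustration of that shortcut effect, here is a toy sketch with made-up features and numbers (not the actual study): when every "wolf" example in the training set happens to be snowy, a simple classifier leans on the snow rather than the animal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up features for each training image:
#   [fraction of white "snow" pixels, animal body size]
# In this fake dataset, every wolf photo happens to have a snowy background.
X_train = np.array([
    [0.90, 0.70], [0.80, 0.60], [0.85, 0.80], [0.95, 0.65],   # wolves (snowy)
    [0.10, 0.60], [0.05, 0.70], [0.20, 0.75], [0.15, 0.65],   # dogs (no snow)
])
y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = wolf, 0 = dog

model = LogisticRegression().fit(X_train, y_train)

# Body size barely differs between the classes, so the learned weights lean
# almost entirely on the "snow" feature - the shortcut, not the animal.
print("learned weights:", model.coef_)

# A husky (a dog) photographed in deep snow: high snow fraction.
husky_in_snow = np.array([[0.90, 0.70]])
print("prediction for a dog in snow:", model.predict(husky_in_snow))  # -> [1], i.e. "wolf"
```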

Another important thing to note is that a trained model cannot adjust its pattern recognition on the fly. It is not capable of changing the way it "thinks" when given new information; it always relies on what it has already been trained on. Given the exact same input twice (with any sampling randomness turned off), it will produce the exact same output. Even though most LLMs have a chat-like interface, you are not teaching the model anything new each time you talk to it. It simply takes the old chat messages as context, along with what you just typed, and runs that through the same frozen model.
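Here is a minimal sketch of what that chat loop looks like under the hood, assuming a generic frozen model behind a hypothetical `call_model` function (not any real API): the "memory" is nothing more than the transcript being re-sent as context on every turn.

```python
# Hypothetical sketch: the model's weights never change between turns.
# A "chat" is just the old messages glued onto the new one as context.

def call_model(prompt: str) -> str:
    """Stand-in for a frozen, trained model. In reality this would run
    inference; the point is that nothing inside it is updated here."""
    return f"(model reply to a {len(prompt)}-character prompt)"

def chat_turn(history: list[str], user_message: str) -> str:
    # The new prompt = every earlier exchange + the latest message.
    prompt = "\n".join(history + [f"User: {user_message}", "Assistant:"])
    reply = call_model(prompt)
    # Only the transcript grows; the model itself learned nothing new.
    history.extend([f"User: {user_message}", f"Assistant: {reply}"])
    return reply

history: list[str] = []
print(chat_turn(history, "My name is Ada."))
print(chat_turn(history, "What is my name?"))  # answered from context, not from "memory"
```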

Both points tie into the fact that AI has important limitations and a limited capacity to create something genuinely new. If it encounters a problem unlike anything in its training data, it will struggle to solve it. It can also be biased toward whatever its training data contains. For example, when Gemini is prompted to recite the kinetic energy equation, it tends to give the classical formula KE = (1/2)mv². That is by far the most common form, and it has served as a close-enough approximation for roughly two centuries, but it is only an approximation. The more general, relativistic answer is KE = (γ − 1)mc², where γ is the Lorentz factor.
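For the curious, the two formulas agree at everyday speeds; a standard Taylor expansion (textbook special relativity, nothing specific to any particular model) shows why the classical form has worked so well:

```latex
KE_{\text{rel}} = (\gamma - 1)\,m c^2,
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

% Expanding gamma for v much smaller than c:
\gamma \approx 1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \cdots

% so the relativistic kinetic energy reduces to the classical one plus tiny corrections:
KE_{\text{rel}} \approx \frac{1}{2}m v^2 + \frac{3}{8}\frac{m v^4}{c^2} + \cdots
```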

Discouragement of AI: Why some people discourage AI use.

Some people irrationally discourage the use of generative AI and outright hate it. One community that is particularly not fond of the development of GIMs is the art community. They consider AI art to be soulless and to put real artists out of jobs. Well, number one, inanimate objects don't have souls. And two, nobody wants to spend more money than they have to on an artist if a computer can do it much more cheaply. Making AI art is much easier than learning all the skills necessary to become an artist; it's an innovative advancement. You don't see people trying to preserve the art of starting a fire with sticks and tinder. They just use a lighter. It is unfortunate for those artists whose honed skills became nearly obsolete overnight, but it is simply a byproduct of progress, much like the invention of the automatic telephone exchange by Almon Strowger. A common story about his motivation, according to Wikipedia, is that "Strowger believed that his undertaking business was losing clients to a competitor whose wife was a local telephone operator and was preventing telephone calls from being routed to Strowger's business and re-routing them to her husband's business instead." Telephone operators across the nation eventually lost their jobs, but you don't hear anybody talking about that anymore.

Other concerns are environmental ones. A common one I have heard, and want to quickly touch on, is that running these models creates a large computational load (true), but also that it "wastes" a lot of water. Sure, a lot of water is used to cool the computing hardware, but it's not as if the water gets split into its elementary components of hydrogen and oxygen or launched into space. The water gets recycled or goes back into the water cycle.

Some concerns involve the high energy cost of running an AI model. My issue with this take is that it is not specifically an AI problem. Energy and electricity are used for all kinds of things, and there is no reason to single out AI on this issue.

One analogy I see a lot, painting AI negatively, is that using AI instead of doing something yourself is like the difference between making a pizza and calling yourself a chef, versus having a pizza delivered to your door and calling yourself a chef. The reason that analogy falls flat is that they are both still pizzas: at the end of the day, without my input there wouldn't be a pizza at the door. Claiming you made something without the help of AI when you actually did use it is a separate matter, called lying and fraud, and it doesn't really have anything to do with AI specifically.

AI usage should not be discouraged. The technology already exists and cannot be un-invented, and it is an amazing tool. Even if its arrival is directly hurting some individuals, its benefits will far outweigh these measly costs.

Some great use cases for AI, particularly LLMs, are data filtering and summarization. An LLM can scan through hundreds of lines of text, code, or logs and find the specific information you are filtering for. It can also understand the ideas you describe (although not innately), so even if you don't know the exact keywords, it can still locate what you are looking for. It can also summarize complicated legal documents or guidelines: you can submit large blocks of criteria and ask whether anything you are doing violates them.
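A hedged sketch of that "filter, then summarize" workflow in Python; `summarize_with_llm` is a hypothetical placeholder for whatever model you have access to, not a real library call.

```python
# Sketch of the "filter, then summarize" workflow described above.
# summarize_with_llm is a hypothetical placeholder, not a real library call.

def summarize_with_llm(prompt: str) -> str:
    """Stand-in for whatever LLM you have access to."""
    raise NotImplementedError("plug in your own model call here")

def filter_and_summarize(log_lines: list[str], keywords: list[str]) -> str:
    # Cheap local pre-filter: keep only the lines that mention a keyword,
    # so the model sees a few hundred relevant lines instead of everything.
    relevant = [line for line in log_lines
                if any(k.lower() in line.lower() for k in keywords)]
    prompt = (
        "Summarize the following log lines and list anything that looks "
        "related to " + ", ".join(keywords) + ":\n" + "\n".join(relevant)
    )
    return summarize_with_llm(prompt)

# Example usage (raises until a real model call is plugged in):
# report = filter_and_summarize(open("app.log").read().splitlines(),
#                               ["timeout", "database"])
```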

Academic use of AI is by far one of the most controversial topics, and I have a pretty radical stance on it: AI should be used in academics. It's the exact same logic as "you won't always have a calculator in your pocket." We already have AI in our pockets; it's everywhere. Writing an essay freeform has practically become obsolete, and there is no reason for the general population to have to. One argument I see a lot is that AI will diminish one's ability to think critically, but that's like saying brainstorming ideas with someone else will make you dumber. Being able to use AI properly is a skill. Creating a unique perspective is still just as difficult; writing out the bulk of the essay is simply much easier now. Just like my fire analogy: people don't know how to make fire from sticks and rocks anymore, because it's almost never useful. Embracing this technology will make academia much faster and much more efficient.

Another issue that commonly arises from the clash between AI and academia is plagiarism. It's a bit of a nuanced issue, because although an AI is a computer, and thus an inanimate object, it is still built from the works of countless people. But AI is a tool much like a pencil: it's used to create the essay, but it doesn't really create new ideas. The model learned from its "experiences" much like people do. I'm not going to credit my kindergarten teacher for teaching me words; that would be ridiculous and redundant. Crediting an LLM as the author of an essay is similarly redundant: it just wrote the words for me, and it isn't the original author of the underlying ideas to begin with. One thing I am a fan of is when LLMs include sources for their information. It makes writing articles much easier, and I don't have to worry about plagiarism issues.

Side note: I originally intended to use AI for this article, like I said in the disclaimer, but I didn't end up needing it. So if you felt like you could distinguish my writing from an AI's, think again.

Conclusion:

AI has consequences, but it is also an amazing tool when used properly. We as humans are able to adapt to and overcome the challenges it may present, and I hope its usage becomes a widely accepted practice among all people.




Comments (1)

  • Alexander Trapp · 17 days ago

    The use of AI to generate a critique of said AI is lazy at best, even with proofreading from a human. This speaks to a lack of critical thinking and independent thought. Even though several points are valid, it would be intrinsically more rewarding to write from the soul and then compare that to AI or use AI for something as simple as grammar/syntax evaluation.
