
The risks of artificial intelligence in school bullying: an analysis by Marcelo Futerman

School bullying and the impact of AI

By Marcelo Futerman · Published about a year ago · 4 min read

Artificial Intelligence (AI) is rapidly transforming our everyday lives, but like any powerful tool, it poses risks, especially in vulnerable contexts such as schools. I explore the dangers that AI can pose when used to facilitate bullying in educational settings, as well as the ethical and social implications that arise when technology is put at the service of those who wish to harm others.

Bullying, a persistent problem in schools around the world, has found a new ally: artificial intelligence. AI-based tools enable the creation of fake content, such as deepfakes and doctored videos, that can be used to humiliate, harass, and defame students. Unlike traditional bullying, which takes place in hallways or classrooms, digital bullying has a much wider reach, transcending the physical boundaries of the school and spreading anywhere with an Internet connection.

I highlight how advanced technologies, such as AI-powered image generators that can create completely false faces or situations, have become instruments of aggression in the wrong hands. An article published in La Tercera highlighted how the creation of fake images through AI opens up a new form of bullying in schools. Victims can be exposed to humiliating situations, such as altered photos or manipulated videos that quickly go viral on social media. This type of harassment not only affects students emotionally, but also creates a hostile school environment, making it harder for young people to actively engage and peacefully coexist.

Case in point: The case of deepfakes in schools

Deepfake technology is one of the most popular tools in this area. It allows for the alteration of images and videos with a high degree of realism. In a recent case, it was reported that students used this technology to create videos showing a peer in compromising situations. These videos were shared on platforms such as Instagram and TikTok, where the harassment intensified and had a serious impact on the mental health of the victim. The speed at which such content goes viral makes it almost impossible to stop it before it causes significant harm.

How can AI help identify bullying?

Despite the risks, AI also has the potential to be a valuable tool in the fight against bullying in schools. There are AI-based systems designed to monitor and detect bullying patterns on online platforms such as social media and messaging apps. According to an article in Infobae, researchers have developed AI that can identify behaviors associated with bullying, such as the repeated use of offensive language, threats, or insults toward a student. These systems can generate alerts so that teachers and school officials can intervene in time.
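To make the idea concrete, here is a minimal sketch of the kind of pattern the article describes: flagging repeated offensive language directed at the same student. The keyword list, names, and threshold are purely illustrative assumptions; real systems rely on trained language models and far richer context, not hard-coded word lists.

```python
from collections import defaultdict

# Hypothetical keyword list and threshold -- illustrative only.
# Production systems use trained classifiers, not hard-coded words.
OFFENSIVE_TERMS = {"loser", "idiot", "ugly"}
ALERT_THRESHOLD = 3  # repeated offenses aimed at the same student

def scan_messages(messages):
    """Count offensive messages per (sender, target) pair and
    return the pairs that cross the alert threshold."""
    counts = defaultdict(int)
    for sender, target, text in messages:
        words = set(text.lower().split())
        if words & OFFENSIVE_TERMS:
            counts[(sender, target)] += 1
    return [pair for pair, n in counts.items() if n >= ALERT_THRESHOLD]

messages = [
    ("ana", "ben", "you are such a loser"),
    ("ana", "ben", "idiot"),
    ("cam", "ben", "see you at practice"),
    ("ana", "ben", "ugly and a loser"),
]
print(scan_messages(messages))  # [('ana', 'ben')]
```

The key design point is that the system counts *repetition toward a specific target* rather than single messages, which is why one-off rude remarks between friends would not trigger an alert here.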

I must emphasize, however, that while these detection technologies are promising, they are not infallible. One of the main problems is the high rate of false positives, where innocent students are misidentified as aggressors due to misinterpretation of context. In addition, the use of these technologies raises serious privacy concerns. Students' personal data becomes the focus, raising concerns about the protection of sensitive information and the potential misuse of that data.

Example: Using AI to detect bullying and its limitations

Some schools have begun using AI to detect potential bullying in digital communications, such as text messages between students, but the results have been mixed. In one case, a system monitoring a messaging app alerted administrators to possible bullying; further investigation revealed that the flagged exchange was an innocent conversation between friends, not harassment. This error illustrates why human judgment must accompany AI technologies.
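The lesson from that false positive can be sketched as a simple triage rule: automated flags route to people rather than triggering sanctions directly. The score thresholds and routing labels below are illustrative assumptions, not any real product's behavior.

```python
def triage(flag_score, high=0.9, low=0.5):
    """Route an automated bullying flag by model confidence.
    Thresholds are illustrative; nothing is auto-sanctioned --
    a human always makes the final call."""
    if flag_score >= high:
        return "escalate to school staff immediately"
    if flag_score >= low:
        return "queue for human review"
    return "log only; no action"

# The 'innocent conversation between friends' case: a moderate
# score goes to a person, who can read the context the model missed.
print(triage(0.62))  # queue for human review
```

Routing mid-confidence flags to a reviewer, instead of treating every alert as proof of harassment, is what keeps misidentified students from being punished for a model's misreading of context.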

The ethics behind artificial intelligence in schools

The introduction of AI into the school environment raises important questions about ethics and oversight. While AI can be helpful in identifying and addressing bullying, it can also be misused, either to defame others or even to reinforce stereotypes and prejudices. For example, AI that generates deepfakes or alters images can perpetuate psychological violence by creating false representations that damage students' reputations and integrity.

I caution that AI systems must be designed ethically, with a focus on protecting students' rights and preventing abuses of power. Schools and regulators must establish clear policies for the use of these technologies to ensure that they are used responsibly and do not become another tool for abuse.

The importance of education and regulation

To effectively address this issue, it is critical that teachers, parents and authorities work together to educate about the responsible use of technology. I believe that schools should incorporate awareness programs into their curriculum about the negative effects of digital bullying and the tools available to prevent it.

In addition, it is imperative that governments and educational institutions work together to develop regulatory frameworks that limit the use of AI in schools and ensure that its implementation is transparent and fair. Regulation should also address privacy and data protection concerns, ensuring that students are not subjected to excessive surveillance or inappropriate manipulation of their personal data.

Example: AI regulation initiatives in schools

Some countries have begun to implement regulations that require AI tools used in schools to be reviewed to ensure they do not violate students' rights. A recent example is the European Union's proposal to implement stricter regulations for AI, especially when used in educational settings. These regulations aim to balance the benefits of AI with the need to protect students' privacy and rights.

Final reflection: A safe future with AI in schools?

Artificial intelligence has the potential to radically change the educational landscape, but as we have seen, its misuse can have devastating consequences. I conclude that while AI technologies offer new opportunities to improve the detection and prevention of bullying, they must be closely monitored to prevent them from becoming another instrument of harm.

The key will be to strike a balance between technological innovation and ethics, ensuring that advances do not harm students but contribute to safer, more inclusive and respectful school environments.


About the Creator

Marcelo Futerman

I'm Marcelo Futerman, a technology and innovation enthusiast. I'm learning about AI and also learning to program. Join me on this journey!

Visit https://www.marcelofuterman.net/ for a daily dose of knowledge.

