
Machine unlearning: what if AI learns something incorrect?

How do we make a machine ‘unlearn’ something if the knowledge it learnt previously is incorrect?

By Allegra Cuomo · 3 min read

Last week, the venture capital firm Andreessen Horowitz posted on LinkedIn a list of 36 AI apps that its team had tested throughout 2024, detailing the strengths and best uses of each. The list was both wonderfully weird and genuinely informative, and can be found here.

However, what caught my eye was a comment from the consulting and technology agency Vgency, raising the question of whether AI can be considered truly intelligent until it has the ability to ‘unlearn’ incorrect information. This inability to unlearn wrong information has been called the biggest threat to LLMs (large language models).

When humans learn new information and get something wrong, there are plenty of social routes to correcting the misunderstanding (assuming it cannot be resolved with a quick Google search). And while this may at times be a bit awkward for those involved, it poses no real threat or harm.

For LLMs, however, such a misunderstanding can be harder to correct. According to IBM, the “most effective [method] is to change the model’s architecture by readjusting its weights”. Targeting the LLM’s weights in this way can be imagined as “influencing its long-term memory”.

The other way of making an LLM ‘unlearn’ is to optimise the model by “fine-tuning the model on the unwanted data” in a process called gradient ascent. This updates the LLM by training it ‘in reverse’, cancelling out the effect of that data. However, IBM notes that the disadvantage of this method is that it can “hurt the model’s performance on other tasks”.
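
To make the idea a little more concrete, here is a minimal sketch of what gradient-ascent unlearning can look like in practice. This is not IBM’s exact recipe: the model name, the tiny ‘forget’ dataset and the number of steps are illustrative assumptions, and the example uses PyTorch with Hugging Face Transformers.

```python
# Minimal sketch of gradient-ascent unlearning (illustrative, not any
# vendor's exact recipe): we maximise the language-modelling loss on the
# data we want the model to forget, nudging the weights away from it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder open-source model
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# A stand-in 'forget set': the incorrect statements we want to cancel out.
forget_texts = ["An example of an incorrect fact the model should unlearn."]
batch = tokenizer(forget_texts, return_tensors="pt", padding=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for step in range(5):  # only a few steps: too many hurt other tasks
    outputs = model(**batch, labels=batch["input_ids"])
    loss = -outputs.loss  # negating the loss turns descent into ascent
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Keeping the learning rate small and the number of ascent steps low is exactly the balancing act IBM describes: push too hard and the model’s performance on everything else degrades.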

Furthermore, both of these processes come at a high cost to the companies doing the re-training. Yet this re-learning is arguably essential to the smooth and ethical running of AI. We therefore find ourselves asking whether it is in the interest of the companies creating these LLMs, or indeed their obligation, to go through these processes at that cost.

One solution to this re-training is offered by Lamini, an AI-powered platform that helps software teams develop LLMs. Lamini Memory Tuning is “a new way to embed facts into LLMs that improves factual accuracy and reduces hallucinations”, which Lamini reports leads to up to 95% accuracy.

To clarify, AI hallucinations occur when a model generates incorrect or misleading results. They can be caused by insufficient or biased training data, incorrect assumptions made by the model, or the model’s lack of real-world understanding.

Lamini Memory Tuning does this through the fine-tuning method I explained earlier, with Lamini saying that “the method entails millions of expert adapters with precise facts on top of any open-source LLM”. To ensure even greater accuracy, it dissects the over-arching topic into subcategories, then creates ‘experts’ for each subcategory and for any facts that the user provides.

Once the LLM is in action, the model retrieves only the “most relevant experts from an index at inference time - not all the model weights”. This reduces both the energy required and the costs to the company running the LLM.
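
For a rough intuition of what ‘retrieving only the most relevant experts’ could look like, here is an illustrative sketch. It is not Lamini’s actual implementation or API: the expert names, the embedding function and the index are all made up for the example.

```python
# Illustrative sketch: pick the expert adapters most relevant to a query
# from an index, rather than loading every weight in the model.
import numpy as np

# Pretend index: each expert adapter is keyed by an embedding of its topic.
expert_index = {
    "capital_cities": np.array([0.9, 0.1, 0.0]),
    "chemistry_facts": np.array([0.1, 0.8, 0.1]),
    "company_history": np.array([0.0, 0.2, 0.9]),
}

def embed(query: str) -> np.ndarray:
    """Stand-in for a real text-embedding model."""
    return np.array([0.85, 0.15, 0.05])

def retrieve_experts(query: str, k: int = 1):
    """Return the k experts whose topic embeddings best match the query."""
    q = embed(query)
    scores = {
        name: float(q @ vec / (np.linalg.norm(q) * np.linalg.norm(vec)))
        for name, vec in expert_index.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Only the selected experts' weights would then be applied on top of the
# base model for this query.
print(retrieve_experts("What is the capital of France?"))  # ['capital_cities']
```

Because only a small slice of the weights is touched per query, the compute bill scales with what each question actually needs rather than with the size of the whole model.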

This is a very interesting example of how innovative technologies are being used to improve, and potentially make less harmful, our current use of AI. The importance of machine unlearning cannot be overstated. Hopefully, with systems of unlearning and re-training in place, we will be able to optimise the use of AI.

These improvements to AI systems, together with well-thought-out, effective and ethical frameworks for AI implementation and regulation, will lead to fairer and more trustworthy AI systems.

Articles referenced:

Andreessen Horowitz LinkedIn Post

IBM: Why we’re teaching LLMs to forget things

Introducing Lamini Memory Tuning: 95% LLM Accuracy, 10x Fewer Hallucinations


About the Creator

Allegra Cuomo

Interested in Ethics of AI, Technology Ethics and Computational Linguistics

Subscribe to my Substack ‘A philosophy student’s take on Ethics of AI’: https://acuomoai.substack.com

Also interested in music journalism, interviews and gig reviews

Comments (1)

  • Paolo Cuomo, about a year ago

    AI "unlearning" will be an increasingly important (and expensive) topic in 2025
