Unlearning Intelligence: The First AI That Deletes Information to Learn More

  • Unique Paper ID: 183052
  • Volume: 12
  • Issue: 3
  • PageNo: 197-198
  • Abstract:
  • Traditional machine learning models improve over time by collecting more data and refining their representations. Biological systems, however, such as the human brain, often improve cognition by actively forgetting unnecessary or overly specific details, a process known as synaptic pruning. This paper introduces Unlearning Intelligence, a new framework in which an artificial neural network improves by deliberately forgetting parts of its internal representation. I propose and implement a forgetting-based training loop in which parameters or activations that are overly co-dependent are systematically erased or perturbed, then retrained on a small amount of data. Experiments with a multilayer perceptron (MLP) on the MNIST dataset show that models trained with periodic unlearning not only maintain competitive performance but also perform better on unseen or distorted inputs. Our method grows intelligence by letting go.
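The forgetting step described in the abstract (erasing parameters whose activations are overly co-dependent, then allowing retraining) could be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the correlation threshold, the reset-to-small-noise strategy, and the function name are all assumptions.

```python
import numpy as np

def unlearn_redundant_units(W, activations, threshold=0.95, rng=None):
    """Sketch of a 'forgetting' step (details assumed, not from the paper):
    a hidden unit whose activation pattern nearly duplicates that of an
    earlier unit is 'forgotten' -- its incoming weights are reset to small
    random values so a subsequent fine-tuning pass can relearn them.

    W           : (n_in, n_hidden) incoming weight matrix
    activations : (n_samples, n_hidden) hidden activations on a data batch
    threshold   : absolute correlation above which a unit counts as redundant
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Pairwise correlation between hidden units' activation patterns
    corr = np.corrcoef(activations, rowvar=False)  # (n_hidden, n_hidden)
    forgotten = []
    for j in range(corr.shape[0]):
        for i in range(j):
            # Compare unit j only against earlier units that were kept
            if i not in forgotten and abs(corr[i, j]) > threshold:
                W[:, j] = rng.normal(scale=0.01, size=W.shape[0])
                forgotten.append(j)
                break
    return W, forgotten
```

In a full training loop this step would run periodically between epochs, followed by a brief retraining pass on a small data subset, matching the abstract's description of "systematically erased or changed, and then retrained."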

Copyright & License

Copyright © 2025. Authors retain the copyright of this article. This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{183052,
        author = {Kerthiraj M S},
        title = {Unlearning Intelligence: The First AI That Deletes Information to Learn More},
        journal = {International Journal of Innovative Research in Technology},
        year = {2025},
        volume = {12},
        number = {3},
        pages = {197--198},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=183052},
        abstract = {Traditional machine learning models improve over time by collecting more data and refining their representations. Biological systems, however, such as the human brain, often improve cognition by actively forgetting unnecessary or overly specific details, a process known as synaptic pruning. This paper introduces Unlearning Intelligence, a new framework in which an artificial neural network improves by deliberately forgetting parts of its internal representation. I propose and implement a forgetting-based training loop in which parameters or activations that are overly co-dependent are systematically erased or perturbed, then retrained on a small amount of data. Experiments with a multilayer perceptron (MLP) on the MNIST dataset show that models trained with periodic unlearning not only maintain competitive performance but also perform better on unseen or distorted inputs. Our method grows intelligence by letting go.},
        keywords = {Unlearning Intelligence, Machine Learning, Generalization, Forgetting Mechanism, Neural Pruning, Subtractive Learning, AI Optimization},
        month = {July},
        }
