Google is working towards an artificial intelligence breakthrough in continual learning with a new technique called nested learning…


Picture this… you feed GPT a prompt. You get a response that is good but not quite there yet. So you add another prompt, and then another. Soon enough, GPT seems to have forgotten parts of the initial context, and you have to feed it in again.

It’s a common problem: AI keeps forgetting bits of information. Google is trying to fix that with its experimental model, HOPE. Designed around a new paradigm called nested learning, HOPE focuses on learning continually instead of relying only on what it was trained on initially.

Why Does Current AI Forget?

Most AI chatbots run on Large Language Models (LLMs). These are good at generating text, solving problems and maintaining conversations. But they struggle with continual learning. Every time you teach them something new, they tend to forget what they learnt earlier. In machine learning, this behaviour is referred to as catastrophic forgetting.
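Catastrophic forgetting can be seen in miniature with a toy model. The sketch below is purely illustrative (it is not how LLMs are trained): a one-parameter linear model is fitted to "task A" with plain gradient descent, then fine-tuned on "task B". The task-A error balloons, because the single shared parameter is overwritten by the new task.

```python
def sgd_fit(w, xs, ys, lr=0.1, epochs=200):
    """Least-squares gradient descent on a one-parameter model y = w * x."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w -= lr * (w * x - y) * x
    return w

xs = [1.0, 2.0, 3.0]
ys_task_a = [2.0, 4.0, 6.0]     # task A: y = 2x
ys_task_b = [-1.0, -2.0, -3.0]  # task B: y = -x

w = sgd_fit(0.0, xs, ys_task_a)    # learn task A: w converges to ~2
err_a_before = abs(w * 1.0 - 2.0)  # near zero

w = sgd_fit(w, xs, ys_task_b)      # now learn task B: w converges to ~-1
err_a_after = abs(w * 1.0 - 2.0)   # ~3: task A has been overwritten
```

With one shared parameter there is nowhere to store both tasks, so learning B necessarily erases A. Real networks have billions of parameters, but unconstrained gradient updates on new data still drift them away from old solutions in the same way.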

Unlike the human brain, which builds on prior experience and retains it, current AI needs to be retrained from scratch to retain anything new. This is a major hurdle for artificial general intelligence (AGI), which aims to develop human-like AI that can learn and reason continuously.

Andrej Karpathy, a widely respected AI researcher who co-founded OpenAI and previously led AI at Tesla, recently said on a podcast that AGI is still a decade away because no one has been able to build an AI system that learns continually. “They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues.”

The Nested Learning Concept

To solve this ‘forgetting’ problem, Google has come up with an approach it calls nested learning. Instead of treating model training as a single optimisation problem, it breaks learning down into several smaller, nested sub-problems.

Each of these sub-problems has its own context flow, so it learns and optimises from its own stream of information, at its own rate.

Google says this allows for “learning components with deeper computational depth” which helps AI models to retain old knowledge while learning new inputs.
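HOPE's internals have not been published in full engineering detail, so the sketch below is only a hedged illustration of the multi-timescale idea behind nested learning, with invented names and learning rates: a "fast" level updates on every input, while a "slow" level consolidates only every k steps. A sudden shift in the data moves the fast level without wiping out what the slow level has accumulated.

```python
def nested_update(stream, k=4, lr_fast=0.5, lr_slow=0.2):
    """Toy two-level update loop (illustrative, not Google's actual design).

    The inner 'fast' parameter chases each incoming target; the outer
    'slow' parameter absorbs the fast state only once every k steps.
    """
    fast, slow = 0.0, 0.0
    for t, target in enumerate(stream, start=1):
        # inner (fast) level: adapt to the current input immediately
        fast += lr_fast * (target - fast)
        # outer (slow) level: consolidate on a slower timescale
        if t % k == 0:
            slow += lr_slow * (fast - slow)
    return fast, slow

# A long run of 1.0s, then an abrupt shift to 5.0:
fast, slow = nested_update([1.0] * 8 + [5.0] * 2)
# fast tracks the recent shift; slow still reflects the earlier regime
```

The separation of timescales is the point: the fast level reacts to new information while the slow level changes only gradually, so recent inputs do not immediately overwrite long-run knowledge.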

Proof of Concept and Early Results

Early tests of HOPE have been promising. On language modelling and common-sense reasoning benchmarks, it showed lower perplexity (roughly, less ‘confusion’ about text) and higher accuracy than many state-of-the-art LLMs.

This has given Google hope that nested learning is more than a theoretical concept, and that future models can be trained to keep expanding their capacity for continual learning. If HOPE keeps retaining knowledge as it learns, it could become a bridge between today’s AI and more general, adaptive intelligence.

The Future: Smarter, Human-Like AI

The possibilities are endless if AI can indeed handle nested learning. Without catastrophic forgetting, there is a lot that AI will be able to do:

  1. Long-term assistants: With the power to retain knowledge, AI agents will be able to remember user preferences, past conversations and evolving needs.
  2. Research and innovation: AI agents with nested learning ability can adapt to new scientific findings and changing data.
  3. Safer and more reliable AI: With the ability to retain knowledge, AI agents will become more reliable, as they will not ‘forget’ crucial pieces of information, reducing the need to double-check their answers or keep re-feeding them information.

Google believes that nested learning is the key to closing the gap between current LLMs and the continual learning capabilities of human beings.

The Last Word

‘Forgetting’ has been one of AI’s long-standing problems. With the HOPE model showing potential, Google has taken a crucial step towards solving it.

While it is still early days, the model has shown that AI can do more than just respond: it can learn and grow as well. If nested learning proves successful, it would be a landmark step towards a future of more intelligent, flexible, and human-like artificial systems.

Adarsh hates personal bios, Chelsea football club and Oxford commas. When he's not writing, he's busy playing FIFA on his PlayStation.

© Copyright Sify Technologies Ltd, 1998-2022. All rights reserved