Google unveils Nested Learning, a groundbreaking new AI approach

Google Research has introduced Nested Learning, a new AI paradigm that merges model design and optimisation into a self-improving system capable of continual learning without forgetting, validated through the high-performing “Hope” architecture.

Google Research has unveiled a machine learning framework called Nested Learning, a new paradigm designed to overcome one of AI’s oldest problems: the inability to learn continuously without forgetting what it already knows.

Detailed in a new paper published at NeurIPS 2025, Nested Learning reimagines how neural networks update and organise information.

Instead of treating a model’s architecture (its structure) and optimiser (its learning rule) as separate entities, Google’s researchers propose viewing them as interconnected systems that operate across multiple levels – much like the layers of memory in the human brain.
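
To make that idea concrete, here is a minimal sketch of "the learning rule as part of the system", under assumptions of this article rather than the paper's actual algorithm: the optimiser's own state (here, a per-parameter learning rate) is itself adjusted on a slower, outer timescale, so the weights and the rule that trains them form one nested system.

```python
import numpy as np

# Hypothetical sketch: treat the optimiser as a learnable component.
# The update rule and period values are illustrative assumptions,
# not the mechanism described in the Nested Learning paper.
rng = np.random.default_rng(0)
w = rng.normal(size=4)      # model parameters (inner level)
lr = np.full(4, 1e-2)       # optimiser state (outer level)

def loss_grad(w):
    return 2 * w            # gradient of a toy quadratic loss ||w||^2

for step in range(1, 501):
    g = loss_grad(w)
    w -= lr * g             # inner level: ordinary weight update
    if step % 50 == 0:      # outer level, on a slower timescale:
        # adapt the learning rule itself based on recent gradients
        lr *= np.where(np.abs(g) < 0.1, 1.1, 0.9)
```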

The approach is designed to combat catastrophic forgetting, a phenomenon where AI models lose proficiency on older tasks when trained on new data.

By embedding “learning within learning,” Nested Learning enables multi-timescale updates – effectively allowing different parts of a network to learn and adapt at their own rhythm, mimicking the brain’s balance between short-term and long-term memory.
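
A rough illustration of what multi-timescale updates could look like in code follows; the group names ("fast", "mid", "slow"), periods, and learning rates are assumptions for the sketch, not values from the paper. Each parameter group is touched only on its own schedule, so some parts of the network adapt every step while others change rarely.

```python
import numpy as np

# Minimal sketch of multi-timescale updates (illustrative assumptions,
# not the paper's formulation): each group updates every `period` steps.
rng = np.random.default_rng(0)
groups = {
    "fast": {"params": rng.normal(size=8), "period": 1,   "lr": 1e-2},
    "mid":  {"params": rng.normal(size=8), "period": 10,  "lr": 1e-3},
    "slow": {"params": rng.normal(size=8), "period": 100, "lr": 1e-4},
}

def grad(params):
    return params           # stand-in gradient: pull parameters toward zero

for step in range(1, 1001):
    for g in groups.values():
        if step % g["period"] == 0:   # update only on this group's schedule
            g["params"] -= g["lr"] * grad(g["params"])
```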

As part of the research, Google introduced Hope, a proof-of-concept self-modifying model that applies Nested Learning principles.

Hope reportedly achieved state-of-the-art performance in long-context reasoning, continual learning, and language modelling, outperforming conventional transformer architectures on tests such as the Needle-in-a-Haystack benchmark.

Nested Learning also introduces the concept of continuum memory systems (CMS), where memory modules update at different frequencies to create a richer, more flexible form of recall.
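
The article does not spell out CMS's exact formulation, but as a rough sketch under assumed details, one could chain simple linear associative memories that buffer incoming writes and flush them at different frequencies, with recall summing across the whole spectrum:

```python
import numpy as np

# Hypothetical CMS-style sketch (module count, frequencies, and the
# outer-product write rule are assumptions, not the paper's design).
rng = np.random.default_rng(0)
dim = 16
modules = [
    {"M": np.zeros((dim, dim)), "freq": f, "buffer": []}
    for f in (1, 8, 64)     # update every 1, 8, and 64 steps (illustrative)
]

def write(step, key, value):
    for m in modules:
        m["buffer"].append(np.outer(value, key))
        if step % m["freq"] == 0:      # flush on this module's schedule
            m["M"] += sum(m["buffer"])
            m["buffer"].clear()

def read(key):
    # Recall by summing retrievals across the whole memory continuum.
    return sum(m["M"] @ key for m in modules)

for step in range(1, 129):
    k, v = rng.normal(size=dim), rng.normal(size=dim)
    write(step, k, v)

recalled = read(k)          # retrieve across all timescales for the last key
```

In a scheme like this, the fastest module tracks only the most recent context while the slowest retains older associations, the kind of short-term/long-term division the researchers compare to human memory.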

The paper describes this as a “step toward unbounded in-context learning,” enabling models that can self-optimise and handle context far beyond traditional limits.

Google researchers say this unified framework could eventually help close the gap between today’s static AI models and the continual, adaptive learning abilities of the human brain.
