TLDR: Google has proposed a convincing approach to continual learning, the ability to keep learning "forever" (like humans and animals do). Their architecture, Hope, is built on the idea that different parts of the brain learn different things at different speeds. This plays a huge role in our brains' neuroplasticity, and they aim to reproduce it through an idea called "nested learning".

-------

This paper has made the rounds, and for good reason. It's an original and ambitious attempt to give AI a form of continuous, adaptive learning ability, clearly inspired by the neuroplasticity of biological brains (we love to see that!)

➤The fundamental idea

Biological brains are unbelievably adaptive. We don't forget as easily as AI because our brains aren't updated as uniformly as a neural network's weights. Instead, think of our memory as the sum of many smaller memories: each neuron learns different things, and at different speeds. Some focus on important details, others on more global, abstract stuff.

It's the same idea here! When faced with new data, only a portion of the network is affected (the detail-oriented, fast-updating parts). The more abstract, slow-updating parts take longer to change. Thanks to this, the model doesn't easily overwrite global knowledge acquired in the past. It has a smooth, continuous memory spanning timescales from milliseconds to potentially months. This is called a "continuum memory system" (there's a toy sketch of it at the end of this post).

➤Self-improvement over time

Furthermore, the higher-level components wrap the lower-level ones, and can therefore control what those learn: both their speed of learning and the kind of information they focus on. This is the "nested learning" idea that gives the paradigm its name. It also gives the model the ability to self-improve over time, as the higher levels steer the lower ones toward learning only interesting or surprising things (information that improves performance, for example). There's a second toy sketch of this at the end of the post.

➤The architecture

To test this idea, they implemented it on top of another experimental architecture they published months ago ("Titans") and called the resulting architecture "Hope". Essentially, Hope is an experiment built on top of an experiment. Google is not afraid of experimenting, which in my opinion is the best quality an AI research organization can have.

➤Results

According to the paper, Hope outperforms ALL current architectures (Transformers, Mamba…) on their benchmarks. However, it's still just a first attempt at solving continual learning, and the results aren't particularly earth-shattering. [Please feel free to fact-check this!]

➤Opinion

I don't care all that much about continual learning (I think there are more obvious problems to solve), but I think these folks are onto something, so I'll be following their efforts with a lot of interest!

What I like most about this is their speed. Instead of brushing problems aside and claiming scaling will solve everything, they took on the most hotly debated flaw of current architectures in a matter of weeks! It makes Demis sound serious when he says "we are still actively looking for 2 or more breakthroughs for AGI" (paraphrasing here).

-------

Paper: https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
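-------

➤Appendix: toy sketches

To make the "continuum memory system" idea above concrete, here's a minimal, hypothetical PyTorch sketch: a chain of small memory blocks where each block has its own update clock and learning rate, so fast blocks absorb new details on every step while slow blocks change rarely and protect older, more abstract knowledge. This is my own illustration, not the paper's code; every name (`MemoryLevel`, `ContinuumMemory`, the frequencies and learning rates) is made up.

```python
import torch
import torch.nn as nn

class MemoryLevel(nn.Module):
    """One memory block with its own update clock and learning rate."""
    def __init__(self, dim, update_every, lr):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.update_every = update_every  # steps between updates
        self.lr = lr

class ContinuumMemory(nn.Module):
    """A chain of memory levels, from fast/detailed to slow/abstract."""
    def __init__(self, dim):
        super().__init__()
        self.levels = nn.ModuleList([
            MemoryLevel(dim, update_every=1,   lr=1e-2),  # fast: fine details
            MemoryLevel(dim, update_every=16,  lr=1e-3),  # medium
            MemoryLevel(dim, update_every=256, lr=1e-4),  # slow: abstractions
        ])

    def forward(self, x):
        for level in self.levels:
            x = x + level.net(x)  # residual pass through every level
        return x

def train_step(mem, x, target, step, loss_fn=nn.MSELoss()):
    loss = loss_fn(mem(x), target)
    mem.zero_grad()
    loss.backward()
    with torch.no_grad():
        for level in mem.levels:
            # Only levels whose clock "ticks" on this step get updated, so
            # new data mostly rewrites the fast levels and rarely touches
            # the slow ones.
            if step % level.update_every == 0:
                for p in level.net.parameters():
                    p -= level.lr * p.grad
    return loss.item()

# Usage: feed a stream of data; details churn, abstractions persist.
mem = ContinuumMemory(dim=32)
for step in range(1024):
    x, y = torch.randn(8, 32), torch.randn(8, 32)
    train_step(mem, x, y, step)
```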
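And a second hypothetical sketch for the nested-control idea: an outer, slow-moving module gates how much the inner, fast-moving memory learns from each example, based on how "surprising" the example is. Again, the gate design and every name here are my own illustration under that assumption, not the paper's method.

```python
import torch
import torch.nn as nn

dim = 32
inner = nn.Linear(dim, dim)  # fast learner, updated on every example
outer = nn.Linear(1, 1)      # slow controller: maps surprise -> update gate
inner_base_lr = 1e-2
loss_fn = nn.MSELoss()

def step(x, target):
    # 1. The inner memory makes a prediction; its loss is the "surprise".
    surprise = loss_fn(inner(x), target)

    # 2. The outer module turns surprise into a gate in (0, 1): boring
    #    examples barely change the inner weights, surprising ones do.
    gate = torch.sigmoid(outer(surprise.detach().reshape(1, 1))).item()

    # 3. Gated inner update. (In a real nested setup the outer module would
    #    itself be trained, on a much slower clock, to pick gates that
    #    improve future performance; that meta-update is omitted here.)
    inner.zero_grad()
    surprise.backward()
    with torch.no_grad():
        for p in inner.parameters():
            p -= inner_base_lr * gate * p.grad
    return surprise.item()

# Usage: one step on a random batch.
x, y = torch.randn(8, dim), torch.randn(8, dim)
step(x, y)
```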