

Despite unlocking enormous potential, modern artificial intelligence systems face a fundamental limitation: they stop learning after deployment. Unlike humans, especially children, who continuously adapt through exploration and experience, AI models remain static and require human intervention for retraining. A recent paper by Yann LeCun, Emmanuel Dupoux, and Jitendra Malik highlights this challenge, noting that current AI relies heavily on structured pipelines in which engineers collect data and rebuild models whenever conditions change. As a result, AI systems often struggle to perform reliably in real-world environments that differ from their training data.
The researchers propose a new framework inspired by natural learning processes. They describe two key systems: System A, which learns through observation, and System B, which learns through action. System A excels at identifying patterns but lacks real-world grounding; System B gains grounding through trial and error but is inefficient. To bridge this gap, they introduce a third layer, System M (Meta-Control), which dynamically decides how and when to learn, much as humans allocate attention and manage decision-making. This integrated approach could pave the way for truly adaptive AI systems capable of continuous, autonomous learning.
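To make the division of labor concrete, here is a minimal toy sketch of the three-system idea. All class names, the uncertainty threshold, and the reward/counting details are illustrative assumptions, not the paper's actual formulation: System A accumulates observational statistics, System B updates value estimates from rewards earned by acting, and System M routes each step to one mode or the other.

```python
# Illustrative sketch only: the paper's real systems are far richer.

class SystemA:
    """Learning through observation: tracks patterns in incoming data."""
    def __init__(self):
        self.counts = {}

    def observe(self, example):
        # Passively accumulate statistics; no interaction with the world.
        self.counts[example] = self.counts.get(example, 0) + 1

    def predict(self):
        # Predict the most frequently observed pattern (None if no data yet).
        return max(self.counts, key=self.counts.get) if self.counts else None


class SystemB:
    """Learning through action: trial and error with a scalar reward."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}

    def act_and_update(self, environment, lr=0.5):
        # Greedily pick the highest-valued action, then move its value
        # estimate toward the reward the environment returns for it.
        action = max(self.values, key=self.values.get)
        reward = environment(action)
        self.values[action] += lr * (reward - self.values[action])
        return action, reward


class SystemM:
    """Meta-control: decides which learning mode to engage each step."""
    def choose(self, uncertainty, threshold=0.5):
        # Assumed heuristic: when observational uncertainty is high,
        # gather more data (A); otherwise ground knowledge by acting (B).
        return "A" if uncertainty > threshold else "B"
```

The point of the sketch is the control flow, not the learners: neither A nor B decides when it runs; the meta-level does, which is what distinguishes this framework from simply combining a pattern recognizer with a reinforcement learner.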
















