Artificial General Intelligence (AGI) has long been the holy grail of AI, yet it remains more mirage than milestone. Big AI companies once promised that scaling up large language models (LLMs) like ChatGPT would quickly deliver AGI: AI as smart as humans. But leaders such as OpenAI's Sam Altman and Microsoft's Satya Nadella are now dialing back the AGI hype.
Pioneers Ilya Sutskever and Yann LeCun argue that LLMs fall short: they ace benchmarks but stumble in real life, sometimes giving dangerous health advice. Sutskever's new startup, Safe Superintelligence (SSI), is eyeing "continual learning" so AI can keep evolving after training, much as a teenager picks up a profession such as doctor or lawyer through real-world experience.
LeCun's Advanced Machine Intelligence (AMI) approach builds "world models" from video, helping AI grasp physics and plan the way a baby learns gravity. These "neolabs" are attracting billions from investors betting on the next breakthrough. Timelines are stretching out: 5-20 years to human-level AI, per both experts. Big players like Google DeepMind keep pushing, and current generative AI tools are already boosting coding. But does that put AGI on the horizon?
Anthropic's Claude Code already writes sizable chunks of engineers' code and speeds up research; for now, that may be the only reasonable expectation. AGI can't be run as a project with timelines and milestones. Even as research labs multiply, LLMs won't vanish: they remain key tools accelerating the path to AGI, letting coders iterate faster on tomorrow's breakthroughs. AGI won't burst forth fully formed; it will grow through smarter tech and tools.
THE RACE IS ON—BUT HUMAN WIT STILL LEADS THE PACK!
