AI COMPUTE MOONSHOT

The technology behind AI compute may move from Transformers to Mixture-of-Recursions (MoR), which promises superior efficiency, but even then raw, brute-force compute will remain at the core of AI growth. What we are talking about are AI GPUs: whoever owns the most will be the AI king, all other factors remaining constant. Sam Altman has announced the moonshot: OpenAI is on track to bring “well over 1 million GPUs online” by the end of the year. And the 100x beyond that is still to be figured out. Moonshots have broken technology barriers since time immemorial, and this aspiration is certainly on those lines.

Elon Musk’s xAI made waves earlier this year with its Grok 4 model, which runs on about 200,000 Nvidia H100 GPUs. OpenAI intends to own five times that compute, and even that might not be enough for Altman going forward. It may sound audacious, but Altman’s record suggests otherwise. In February this year, OpenAI had to slow the rollout of GPT-4.5 because it was literally “out of GPUs.” That was a wake-up call: Nvidia’s premier AI hardware is sold out until next year. What Altman is describing in terms of AI infrastructure is more like a national-scale initiative than a corporate IT upgrade.

If OpenAI achieves this, it will cement itself as the single largest consumer of AI compute on the planet. So what is the real moonshot? Altman’s intention that his team now better get to work figuring out how to 100x that. It is as wild as it sounds; the cost of that many GPUs would approach the GDP of the UK, and there is no way Nvidia could produce that many chips in the near term. As we all know, moonshot thinking drives Altman. Broadly, this is not a literal target but a vision: laying the foundation for AGI (Artificial General Intelligence). OpenAI wants to decipher the contours of the future.
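The scale jumps discussed above can be made concrete with back-of-envelope arithmetic. A minimal sketch, using only the GPU counts quoted in the article (200,000 for xAI, 1 million for OpenAI, and Altman’s 100x aspiration):

```python
# Back-of-envelope comparison of AI-compute scale, using the figures
# quoted in the article. No cost or power estimates are included here,
# since the article does not supply them.

XAI_GROK4_GPUS = 200_000       # xAI's Grok 4 cluster (per the article)
OPENAI_2025_GPUS = 1_000_000   # "well over 1 million GPUs online"
MOONSHOT_FACTOR = 100          # Altman's "100x that" aspiration

openai_vs_xai = OPENAI_2025_GPUS / XAI_GROK4_GPUS
moonshot_gpus = OPENAI_2025_GPUS * MOONSHOT_FACTOR

print(f"OpenAI vs xAI:  {openai_vs_xai:.0f}x")       # 5x
print(f"100x target:    {moonshot_gpus:,} GPUs")     # 100,000,000 GPUs
```

Even this simple ratio shows why the 100x line reads as a vision rather than a procurement plan: it implies a hundred million GPUs, orders of magnitude beyond any cluster deployed today.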

It is part of a larger AI arms race, in which compute capacity can turn the tide in a company’s favour, and every major contender has some chip plan or another. Altman’s comments also double as a not-so-subtle reminder of how quickly this field moves. Not long ago, a company with 10,000 GPUs was a heavyweight; now 1 million seems to be just a milestone. Why such audacious statements from OpenAI? It certainly does not end with faster training or smoother model rollouts. “It is about securing a long-term advantage in an industry where compute is the ultimate bottleneck.”

THE AVAILABILITY OF REQUIRED COMPUTE CAN TRANSFORM AI AND THE WORLD, SOONER THAN LATER.
Sanjay Sahay

Have a nice evening.
