AI CHIP RACE!
We have been reading about semiconductors for quite some time now, and about how supply-chain disruption wreaked havoc across a large number of sectors all over the globe. TSMC's international hegemony has been a matter of concern, and various nation states have been desperate to create their own manufacturing capabilities, so that they don't land in dire straits if there were to be another Covid-19 type situation. In the meantime, with the release of ChatGPT, AI, long under wraps, has come out into the open. ChatGPT is the dialogue-enabled version of GPT-3.5. The shift from search to conversation has been a watershed moment in the tech journey of the world.
The rest, as they say, is history. On a daily basis, use cases of AI are making news, giving us a feeling that the world is going to change forever. Large language models will be driving our lives in the foreseeable future. What powers these huge tech achievements and their usage at a global scale? Chips made for this specific purpose, which you can safely call AI chips. The war for their production has begun. Whoever wins this war will have an edge over the others for a variety of reasons: technology, compute, ownership and windfall commercial gains.
Nvidia is the reigning deity in this field. Microsoft's new game plan is to work on its own chip and carve a niche in this huge business opportunity. Google is also in the race. So the war is getting hotter between Nvidia, Microsoft and Google. AI learns through the process of deep learning, which is computationally intensive: it requires moving massive amounts of data and performing complex mathematical calculations. Nvidia makes chips optimized for this type of processing, with features like tensor cores designed to accelerate the matrix-math operations central to so many deep learning algorithms, and its platform scales well.
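To see why matrix math dominates these workloads, consider that a single dense layer's forward pass is essentially one big matrix multiplication, which is exactly the operation tensor cores accelerate. Below is a minimal NumPy sketch; the layer sizes are illustrative assumptions, not drawn from any real model:

```python
import numpy as np

# One dense layer's forward pass: activations times weights, plus bias.
# All dimensions here are hypothetical, chosen only for illustration.
batch, d_in, d_out = 32, 1024, 4096

rng = np.random.default_rng(0)
x = rng.standard_normal((batch, d_in), dtype=np.float32)   # input activations
w = rng.standard_normal((d_in, d_out), dtype=np.float32)   # layer weights
b = np.zeros(d_out, dtype=np.float32)                      # bias

y = x @ w + b  # the matrix multiply GPUs and TPUs are built to speed up

# Rough cost: one multiply and one add per (batch, d_in, d_out) triple.
flops = 2 * batch * d_in * d_out
print(y.shape, flops)  # (32, 4096) 268435456
```

Even this single toy layer requires over a quarter of a billion multiply-adds; large models chain thousands of such layers over billions of tokens, which is why specialized matrix-math hardware matters so much.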
This predominance is certainly not to the liking of Microsoft or Google. Reportedly, about 300 staff have been working on Microsoft's Athena, which could see wider use by Microsoft and OpenAI sometime next year. Microsoft's supercomputing systems currently run on Nvidia's chips, and not having your own chip makes operations expensive. Google, with its TPU v4, has thrown down its gauntlet too. It claims the TPU v4 outperforms Nvidia's A100: 1.2 to 1.7 times faster while using 1.3 to 1.9 times less power. But Nvidia constantly pushes the barrier and remains miles ahead of the competition. Nvidia's new H100 is absolutely incredible, reportedly nine to 30 times faster on different measures of AI training. The battle has just begun; it will take time and effort for the others to catch up.
A FEW COMPANIES ARE DEFINING OUR FUTURE; THE REST JUST HAVE TO JUMP ON THE BANDWAGON.