DailyPost 2893
FREE FOR ALL – OPENAI
Founded in 2015, OpenAI set itself the self-proclaimed mission and mandate of building safe and beneficial Artificial General Intelligence (AGI). OpenAI started as a non-profit; later, in 2019, OpenAI Global, LLC became its for-profit subsidiary. Though a couple of successes came its way, there was not much debate on the company's basic thought process: to bring AI into the mainstream and move in the direction of safe and beneficial AGI, which was generally felt to be a few decades away. With no great profits to show and no great products in the offing that could bring about this transformation, any such debate would have been an empty one.
The first challenge was to build a product that could announce and prove to the world that AI had arrived. That happened with the launch of ChatGPT and its phenomenal success. Launched on Nov 30, 2022, it has already become part of technology folklore, and LLM has turned into a household acronym rather than alphabetical jugglery. AI products started coming out in the open with astonishing regularity, proving that the LLM, in its multifaceted product dimensions, is here to stay. The doyens of the AI field gained new confidence that AGI is within reach and worth an all-out attempt.
Against this backdrop, we find some disturbing trends at OpenAI, more so because we took it to be the guardian organization for the safe development of AI and, later, AGI, which, unsupervised and unregulated, could send the world into a totally different spin. If that were to happen, there would be no return from that precarious predicament. One headline proclaims the trend towards crash commercialization at OpenAI: "Exodus at OpenAI: nearly half the AGI staffers have left, says a former researcher." These researchers focused on the long-term risks of super-powerful AI and have left over the past several months.
Some AI researchers were of the view that AI systems could escape human control and pose an existential threat to humanity. Keeping this in mind, OpenAI employed a large number of researchers focused on what is known as "AGI safety." A known research insider has said that safety has increasingly "taken a backseat to shiny products." Of the 30 researchers working on AGI safety, only 16 are left. Left unchecked, AI could use racist or toxic language, write malware, or provide users with instructions for making bioweapons. "People who are primarily focused on thinking about AGI safety and preparedness are being increasingly marginalized." This seems to be a sign of the times: the future would be a free-for-all, no-holds-barred arena for the mercenary commercial exploitation of AI.
AI IS LIKE NO OTHER TECHNOLOGY EVER KNOWN; IT CAN ENSLAVE ITS OWN CREATOR.
Sanjay Sahay
Have a nice evening.