WHAT CAN A MASSIVE AI LANGUAGE MODEL DO?
The only way for AI-enabled large language models to make deep inroads into our lives and find a place in our day-to-day existence is to make them available for free. For such a high-end technology to reach its commercial potential, with no fixed timelines, it must first be seen how best it can be used by the research fraternity, by commercial users looking for applications, and by those out to discover or expose gaping holes in the models under development. Easy availability, together with the necessary mentoring through documentation releases or other support mechanisms, is what could be of use. To make AI commonplace, there is no way other than its democratisation. This is where the Meta story begins.
Meta, transformed from Facebook, holds the promise of taking us to the metaverse. But for now, Meta has thrown a challenge open to OpenAI, the first mover in the field. Meta's Open Pretrained Transformer (OPT-175B) matches GPT-3 with its 175 billion parameters. Meta can turn this into a game changer in the days to come, if it continues in the direction being taken now, in the true interest of putting the technology to maximum and varied use. "Meta is giving away some of the family jewels," is how it is being described. Researchers at Meta AI have announced that they have created a massive and powerful language AI system, and say they are making it available free to all researchers in the artificial intelligence community.
Meta's stated aim is to democratise access to a powerful kind of AI, an enormous promise bound to have immense positive ramifications in the days to come. The change of approach from previous models in the manner of giving access is very visible. Meta is not offering the research fraternity the model alone; with it come the ingredients of change: its codebase, along with extensive notes and logbooks about the training process. The model was trained on 800 gigabytes of data from five publicly available datasets, described in a "data card" that accompanies the technical paper posted by Meta's researchers. In what ways can researchers use it? Joelle Pineau, director of Meta AI Research Labs, has a few answers.
She says researchers need not train their own language models from scratch; they can build applications and run them on OPT, and the compute budget, she claims, would be modest. Language-based systems would get a big fillip, whether machine translation, a chatbot, or text completion, to name a few. The second thing she expects from researchers is to "pull it apart": to examine and expose its flaws and limitations, and to work on improving the model. Generating toxic language can turn out to be one major limitation of these tools; how to get over it is another challenge thrown open. On the issue of toxicity, the model produced by Meta's researchers needs a worldwide validation test. Meta believes that releasing the model under a non-commercial licence will help in the development of guidelines for the responsible use of large language models. Only then can broader commercial deployment be thought of.
CREATING SUCCESSFUL APPLICATIONS ON META'S OPT-175B WILL BE ITS IDEAL VALIDATION.