DailyPost 2202
LANGUAGE LEVEL ROBOTS – WITH ARTIFICIAL INTELLIGENCE (AI) BRAIN IN IT

The last couple of years have been exceptionally path-breaking for cutting-edge digital technologies. The work done over the last decade or so is now showing exponential results: the first stages of a totally different understanding, and clearly the promised delivery of momentous changes in computer vision and natural language processing. It does not stop there; the final goal of singularity and convergence is also being attempted in a big way. It was not in our farthest imagination that all these technologies would work as one, each synergising with the others in any permutation or combination, and take life to a totally different level.

DALL-E 2 has convincingly proved that we are now on the right track in the creation of photorealistic images. A language prompt can deliver an image, and a newly created one at that. The trajectory in this field is now known; the research effort has to happen at that scale. Similarly, over the last two years language models have also drastically improved. In one case a model conversed so well with a Google engineer that he claimed it had become sentient. In another infamous case, Redditors were fooled by an AI for months. Robots moving away from standard assembly-line tasks and hard-coded operations to natural language instructions is the next frontier; it seems likely, though the timelines may still be a little far off.
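To make the prompt-to-image idea concrete, here is a minimal sketch. DALL-E 2 itself sits behind OpenAI's proprietary service, so this example uses the open-source Stable Diffusion model through the Hugging Face diffusers library as a stand-in; the model id and prompt are illustrative assumptions, not the system described above.

```python
# Minimal text-to-image sketch (illustrative; Stable Diffusion via the
# Hugging Face diffusers library stands in for DALL-E 2).
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained diffusion pipeline from the Hugging Face hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA-capable GPU

# A plain-language prompt is all the model needs to create a new image.
prompt = "a photorealistic astronaut riding a horse on the moon"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```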

Boston Dynamics, and on a smaller scale the Ameca robot, have already shown us that robots can be built with human-like dexterity. The direction now is to combine AI’s new understanding of language with a physical robot body. Google’s research labs and Everyday Robots, another Alphabet subsidiary, have teamed up to make a new kind of robot that can understand a task spoken in natural language and then do it for you. It eliminates pre-programmed hard coding and opens the door to a massive number of possibilities. The idea is to successfully put Google’s most advanced language brains in a robot body and make them operational. The physical robot, in simple terms, acts as the language model’s hands and eyes.
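In code terms, that division of labour might look like the sketch below: a perceive-decide-act loop in which the language model chooses the next step and the robot body supplies the eyes and hands. Every function and skill name here is hypothetical, invented purely to illustrate the architecture, not Google's actual interface.

```python
# Hypothetical sketch of the "language-model brain, robot body" split.
# All names are invented for illustration; this is not Google's API.

SKILLS = ["find a sponge", "pick up the sponge", "go to the table",
          "wipe the table", "done"]

def describe_scene():
    # Robot eyes: in reality a vision model would caption camera frames.
    return "a spilled drink on the table, a sponge on the counter"

def brain_next_skill(instruction, scene, history):
    # Brain: a large language model would pick the next skill from the
    # whitelist; a canned plan stands in for the model's answer here.
    plan = SKILLS
    return plan[len(history)] if len(history) < len(plan) else "done"

def execute(skill):
    # Robot hands: a low-level controller would move the arm and base.
    print(f"executing: {skill}")

def run(instruction):
    history = []
    while True:
        skill = brain_next_skill(instruction, describe_scene(), history)
        if skill == "done":
            break
        execute(skill)
        history.append(skill)

run("I spilled my drink, can you help?")
```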

Out of the language instructions given, the robot selects only what it is capable of doing. It then analyses the environment to work out how to physically proceed. Huge engineering research goes on relentlessly at the back end. PaLM, the Pathways Language Model, is put to use, having learnt from petabytes of text. Such humongous language models are just two years old; the learning and capabilities they have acquired make them vastly different from Siri or Alexa. The robot learns to find patterns through reinforcement learning. Training in simulated environments in the virtual world adds to the speed of learning, since real-life data and scenarios have their own limitations of physical space and time. Transforming high-level instructions into low-level, executable tasks is the digital magic sauce. As per Google, the robot selects the right sequence of skills 84 per cent of the time, and executes it successfully 74 per cent of the time. Challenging, fascinating and exponential digital times lie ahead.
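Google's published recipe for this system (known as PaLM-SayCan) scores each candidate skill twice: the language model rates how useful the skill sounds for the instruction, and a learned affordance function rates whether the robot can actually perform it in the current state; the product of the two picks the winner. The sketch below reproduces that decision rule with made-up numbers, so the scores, like the skill names, are purely illustrative.

```python
# Illustrative PaLM-SayCan-style skill selection: combine the language
# model's usefulness score with the robot's affordance (can-do) score.
# All probabilities below are invented for the example.

candidates = {
    #  skill                (p_LLM, p_affordance)
    "pick up the sponge": (0.45, 0.90),  # useful and feasible
    "pick up the apple":  (0.40, 0.05),  # plausible text, no apple in view
    "go to the table":    (0.10, 0.95),  # feasible but premature
}

def select_skill(candidates):
    # SayCan rule: argmax over skills of p_LLM * p_affordance.
    return max(candidates, key=lambda s: candidates[s][0] * candidates[s][1])

for skill, (p_llm, p_aff) in candidates.items():
    print(f"{skill:22s} combined = {p_llm * p_aff:.3f}")
print("selected:", select_skill(candidates))  # -> pick up the sponge
```

Grounding the language model's suggestions in what the robot can actually do at that moment is what filters out confident-sounding but physically impossible plans.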

GAP BETWEEN ROBOTS AND HUMANS IS BOUND TO DECREASE IN THE COMING DAYS IN A BIG WAY.
Sanjay Sahay
