DailyPost 810


When regulation is likely to be enforced in any sector, industry or technology to set it right after long years of no regulation, a saleable solution comes to the fore by the name of self-regulation. Self-regulation as a prescribed medicine has never been able to cure any ailment. More often than not, it has not even produced a basic document in that direction. From the media to the IT industry, everybody claims to self-regulate; in reality, it is a free-for-all. And nothing is done before getting into the same circus when the issue hots up again. The IT industry has generally been on this fun ride since its beginning, and now, when fears of Artificial Intelligence are looming large, the industry, in the form of Google chief Sundar Pichai, is trying to sell the same formula again.

He recently said that he trusts the AI makers to regulate the technology. This comes on the heels of Google's AI principles, enunciated and published on 7th June by its CEO. This was based on Google's understanding that such a powerful technology will certainly raise questions regarding its use. With criminals and rogue nations being the best adopters of technology for nefarious ends, the loss can be way beyond our imagination. How AI is developed and used will decide the direction of mankind's development. Google being the leader in this field, Sundar Pichai feels that it has a responsibility. To fulfil that responsibility and give a clear direction to Google's AI plans, the Google AI principles have been put in place.

AI applications at Google would be assessed against these principles. Foremost is whether the application is socially beneficial; the second principle is that it should avoid creating or reinforcing unfair bias. Thirdly, it should be built and tested safely. Recently, Facebook's AI bots were shut down after they started talking to each other in their own language.

The most interesting one is: be accountable to people; the technologies being developed by Google will be subject to human direction and control. Privacy design principles have to be incorporated, and high standards of scientific excellence upheld. Lastly, the principles clearly state that Google will not design or deploy AI applications that cause overall harm, power autonomous weapons, enable illegal surveillance, or contravene international law and human rights. Will AI principles and a lofty clarion call of factoring in ethics be sufficient?


Sanjay Sahay
