Generative AI platforms like OpenAI, Google Bard and others have been advised by the government not to release experimental variants to the public merely by attaching a disclaimer. Tech firms that do not heed the advisory would not be eligible for legal protection under the safe harbour clause in case of any user harm, sources said.
Currently, generative AI platforms put disclaimers stating they are experimental in nature and can make mistakes.
ChatGPT’s disclaimer, for instance, reads, “It can make mistakes. Consider checking important information”.
Officials said that instead of releasing experimental products to the public with disclaimers, these platforms should first run experiments on specific sets of users in a sandbox-style environment approved by a government agency or regulator.
The advisory has been issued to the companies as several cases of bias in content or user harm have been flagged by users recently. The Ministry of Electronics and IT is working on an omnibus Digital India Act to address such emerging issues, but has said that in the interim the Information Technology Act and other similar laws will apply in all cases of user harm, including deepfakes.
Recently, Google’s generative AI platform Bard caught the attention of the government when a user flagged a screenshot in which Bard refused to summarise an article by a right-wing online media outlet on the grounds that it spread false information and was biased.
Following this instance, the government came up with an advisory stating that any instances of bias in content generated through algorithms, search engines or AI models of platforms like Google Bard, ChatGPT and others will not be entitled to protection under the safe harbour clause of Section 79 of the Information Technology Act.
Companies like Google are in favour of a risk-based approach instead of uniform rules for all AI applications. “I think, fundamentally, you have to ask yourself, what kind of bias you are concerned about? There are already laws in place that say certain types of biases are not allowed. So that is why we are pushing for a risk-based approach, proportionate to a particular use case,” Pandu Nayak, vice president of Search at Google, told FE in a recent interaction.
A flexible framework can address the diverse landscape of AI technologies without hindering innovation. For example, according to Nayak, the risks from using AI in agriculture are very different from those one might find in other areas.
At the Global Partnership on Artificial Intelligence (GPAI) summit, which concluded on December 14 in the Capital, the 29 member countries, including India, the UK, Japan and France, affirmed their commitment to work towards advancing safe, secure and trustworthy artificial intelligence (AI), while also looking at relevant regulations, policies, standards and other initiatives. As per the next steps, the countries will work together over the next few months to lay out broad principles on AI, including what guardrails should be put in place.