New year to see tough regulatory steps for social media firms


The new year is going to be a difficult one for social media platforms like X, Instagram, Facebook, and others, as they will come under increased regulatory scrutiny over cases of user harm caused by artificial intelligence-generated content.

They will need to step up their due diligence in vetting content on their platforms, as the onus of identifying harmful material will lie with them. No longer will they be able to take refuge under the safe harbour clause, which guarantees legal immunity for third-party content posted on their platforms.

The Ministry of Electronics and Information Technology (MeitY) on Tuesday issued a second advisory asking the platforms to ensure that their users do not post content prohibited under rule 3(1)(b) of the IT Rules.

As per the advisory, social media companies will need to inform users about such prohibited content at the time of first registration, through regular reminders, at every login, and while they upload or share information on the platform.

“Owing to elections next year, the deepfake cases in India are expected to rise extensively. The government putting the onus on intermediaries to control deepfakes is a good starting point, but we need a dedicated AI law to meaningfully tackle the issue at the level where these technologies are created,” said Pavan Duggal, Supreme Court advocate and cybersecurity expert.

According to Duggal, platforms would do well to become more proactive in taking down prohibited content and in making users aware of the rules. Such measures will help shield the companies from prosecution.

Jaspreet Bindra, founder and managing director of The Tech Whisperer, a technology advisory and consulting firm, said: “Deepfakes are going to be a huge challenge in 2024 as the technology to create that is getting better. As immediate steps, the government has to strictly formulate regulations to control its spread on social media platforms.” According to Bindra, if the spread is not controlled, the situation could worsen, especially with general elections scheduled.

Deepfake technology can be used to influence voters. Besides controlling its spread, the government and social media companies need to raise awareness and educate the public about it, much like advertisements that discourage people from consuming tobacco.

The rapid evolution of deepfake technology has made it difficult for companies to deploy detection tools in time. “It’s difficult for automated takedowns to distinguish between genuine content and clever parodies or satire. Platforms will have to develop or license technology to distinguish and weed out deepfakes. This, however, is easier said than done,” said Anupam Shukla, partner at Pioneer Legal.

Prashanth Shivadass, partner at Shivadass & Shivadass Law Chambers, said: “The analytical tools incorporated by the platform must include periodical minute by minute checks of posts being generated by users.”

“It is very difficult to identify deepfakes as generative AI is based on self-learning technology which is meant to better itself and evolve at an exceedingly fast pace. In this light, it may also be relevant to consider tracking platforms enabling the creation of adversarial and explicit content rather than shifting the burden entirely on intermediaries,” said Shreya Suri, partner at IndusLaw.
