

In view of the growing threat posed by deepfake photos and videos created using Artificial Intelligence (AI), the Central Government has issued strict directives to social media companies. Leading platforms such as Facebook, Instagram, X (formerly Twitter) and YouTube have been instructed to implement new regulations without exception.
Under the new rules, any photo or video created with the help of AI must carry a clearly visible label stating that it is AI-generated content. This labeling requirement is intended to help the public easily distinguish genuine content from material created using computer technology. Once an AI label is applied, it cannot be removed by social media platforms, content creators, or any other individuals.
Furthermore, if the government or a court identifies any AI-generated content as false or misleading and issues a notice, the concerned social media platform must remove the photo or video within three hours. Social media companies have also been directed to deploy advanced technological systems capable of automatically detecting and blocking obscene content, material harmful to children, and deceptive AI-generated photos and videos.
These new regulations will come into effect from February 20, 2026. The government has stated that the primary objective of these measures is to curb the spread of deepfake content that misleads the public by blurring the line between truth and falsehood. Special emphasis has been placed on protecting citizens from misinformation, defamatory material, and obscene content, particularly during election periods, by strengthening oversight and accountability on digital platforms.












