Context
- The Ministry of Electronics and Information Technology (MeitY) has proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
- Objective: Tackle deepfakes and AI-generated content on social media platforms, ensure users can recognise algorithmically generated content, and curb misinformation.
- The amendments also aim to increase government accountability when content notices are issued to social media platforms.
Relevance:
- GS-2 (Governance & Technology): Regulation of social media, IT Act, administrative accountability.
- GS-3 (Science & Technology): AI governance, digital ethics, misinformation control, cyber policy.
- GS-4 (Ethics): Transparency, accountability, ethical oversight in governance.
Key Provisions of the Draft Amendments
Accountability of Government Officers
- Notices under Rule 3(1)(d) will now require a reasoned intimation, i.e., a written justification for the notice.
- Only senior officials may issue notices:
- Central government: Joint Secretary and above
- State level: Deputy Inspector-General and above
- Notices must clarify that:
- Safe harbour does not apply
- The notice is a warning, not an immediate takedown order
Significance: Reduces arbitrary or unconstitutional use of content takedown powers; improves transparency and legal safeguards.
AI Content Labelling
- Platforms allowing AI-generated content (e.g., X, Instagram, YouTube, ChatGPT, Sora, Google Gemini) must:
- Identify AI-generated content
- Label deepfake content
- Attach permanent metadata/unique identifiers
- Two labels proposed:
- AI-generated content
- Deepfake content
Objective: Prevent misinformation, manipulation, and user deception, especially during elections or communal tensions.
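The "permanent metadata/unique identifier" requirement above could be sketched as follows. This is a minimal illustration only: the record format, field names, and the `label_ai_content` function are hypothetical, not drawn from the draft rules; a hash of the content is used as one possible unique identifier.

```python
# Hypothetical sketch: attaching an "AI-generated" label and a unique
# identifier to a piece of content as a metadata record.
# The record format is illustrative, not prescribed by the draft rules.
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(content_bytes: bytes, label: str = "AI-generated") -> dict:
    """Build a metadata record binding a label to the content."""
    # SHA-256 of the content serves as one possible permanent unique identifier.
    content_id = hashlib.sha256(content_bytes).hexdigest()
    return {
        "content_id": content_id,
        "label": label,  # "AI-generated" or "Deepfake", per the two proposed labels
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

record = label_ai_content(b"example synthetic image bytes")
print(json.dumps(record, indent=2))
```

In practice, platforms would more likely embed such provenance data inside the media file itself (e.g., C2PA-style content credentials) rather than keep it as a detached record, but the binding of label to unique identifier is the same idea.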
Compliance and Enforcement
- Platforms may lose legal immunity under Section 79 of the IT Act if non-compliant.
- Obligations for platforms:
- Identify and label AI/deepfake content
- Take down flagged content within 24 hours
- Publish monthly compliance reports
- Enable user complaints and voluntary labelling
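The 24-hour takedown obligation above can be expressed as a simple deadline check. This is a sketch under assumed field names (`flagged_at`, `is_overdue`); the draft rules do not prescribe any implementation.

```python
# Hypothetical sketch: tracking the 24-hour takedown window for flagged content.
# Function and variable names are illustrative, not from the draft rules.
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=24)

def takedown_deadline(flagged_at: datetime) -> datetime:
    """Deadline by which flagged content must be removed."""
    return flagged_at + TAKEDOWN_WINDOW

def is_overdue(flagged_at: datetime, now: datetime) -> bool:
    """True if the 24-hour compliance window has elapsed."""
    return now > takedown_deadline(flagged_at)

flagged = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
```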
Expert Oversight
- An expert committee has been constituted to finalise the rules.
- Consultation includes government officials, tech experts, and academics.
Significance: Brings technical expertise to governance, ensuring rules are implementable and future-ready.
Background and Challenges
- Deepfakes are digitally manipulated media that appear authentic, creating risks to:
- Personal privacy
- Political processes
- Public trust in information
- Social media firms previously challenged Rule 3(1)(d) as arbitrary and unconstitutional, but courts upheld government authority.
- Challenges in enforcement:
- Accurately detecting AI-generated content
- Fast-moving content spread
- Balancing freedom of expression with misinformation control
Key Data & Facts
Feature | Provision / Requirement
--- | ---
Rule impacted | Rule 3(1)(d) of the IT Rules, 2021
Seniority of officials issuing notices | Joint Secretary and above (Central); DIG and above (State)
Platforms in scope | X, Instagram, YouTube, ChatGPT, Sora, Google Gemini
AI/deepfake labelling | Mandatory, with permanent metadata
Compliance timeline | 24 hours for flagged content
Reports | Monthly compliance reports by platforms
User participation | Option to label one's own content as AI-generated
Policy Implications
- Strengthens governance: Senior officials accountable for content notices.
- Mitigates misinformation: Labels and metadata improve user awareness.
- Technological oversight: Ensures AI/deepfake detection becomes a standard responsibility of platforms.
- Democracy protection: Reduces risk of election manipulation and communal disinformation.
- Private sector collaboration: Platforms will need to deploy algorithmic detection and reporting systems, spurring innovation in AI for social good.