Basics of the Incident
- Trigger: Reddit co-founder Alexis Ohanian used Midjourney to animate a childhood photograph of himself with his late mother.
- Reaction:
- Many empathised with the emotional value of reliving a memory.
- Others criticised it as creating “false memories” and interfering with healthy grieving.
- Virality: Video gained 20M+ views on X, sparking global debate.
Relevance: GS III (Science & Tech – AI & Deepfakes; Cybersecurity) + GS IV (Ethics – Technology & Society, Child Protection, Digital Ethics)
Technology Behind It
- AI Photo-to-Video Tools: Midjourney, Google Photos “Create”, xAI’s Grok Imagine.
- Process: Still photo → AI predicts the missing in-between frames → generates motion (hair moving, hugs, eye blinks); a minimal conceptual sketch follows this section.
- Evolution:
- Earlier: AI upscaling (removing blur/pixelation).
- Now: Generative AI → morphing, object removal, filling gaps, creating lifelike but synthetic videos.
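The “predict the missing frames” step can be illustrated with a deliberately simplified sketch. The snippet below (Python/NumPy; all names and values are illustrative assumptions) merely cross-fades between a photo and a slightly shifted copy of it to manufacture in-between frames; real tools such as Midjourney or Grok Imagine instead use learned generative models to predict plausible motion, which is what makes the output lifelike yet synthetic.

```python
import numpy as np

def interpolate_frames(start: np.ndarray, end: np.ndarray, n_frames: int) -> list:
    """Linearly blend between two frames to synthesise the in-between frames.

    Real photo-to-video tools use learned generative models to *predict*
    plausible motion; this linear cross-fade only illustrates the idea of
    filling in frames that never existed in the original photograph.
    """
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)                 # blend factor: 0.0 -> 1.0
        blended = (1.0 - t) * start + t * end  # per-pixel linear blend
        frames.append(blended.astype(np.uint8))
    return frames

if __name__ == "__main__":
    # A stand-in "photo" (random pixels) and a slightly shifted copy of it,
    # mimicking a small movement between two poses.
    photo = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    shifted = np.roll(photo, shift=2, axis=1)  # shift 2 pixels to the right

    clip = interpolate_frames(photo.astype(float), shifted.astype(float), n_frames=12)
    print(f"Generated {len(clip)} frames of shape {clip[0].shape}")
```

The point of the toy example is only this: every intermediate frame is fabricated, none of them ever existed in the original photograph.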
Potential Benefits
- Memory preservation: Reviving old or damaged photos of loved ones.
- Cultural heritage: Restoring archival photos/videos for museums and education.
- Entertainment: Creative storytelling, personalisation in media.
- Accessibility: Helping visually impaired people experience photos in dynamic formats.
- Therapeutic potential: Comfort for grieving families, closure in some contexts.
Risks & Concerns
- False memories: Risk of altering personal or collective memory.
- Emotional manipulation: Artificial comfort may hinder natural grieving.
- Consent & ethics: Photos of deceased or minors turned into videos without permission.
- Child safety:
- Cybercriminals misuse such tools to create synthetic CSAM (Child Sexual Abuse Material).
- Example: a U.S. teenager died by suicide after being extorted with AI-generated nude images.
- NCMEC (National Center for Missing & Exploited Children) reports 7,000+ cases (2022–24) involving AI-enabled exploitation.
- Privacy: Minors’ photos online can be weaponised into deepfakes.
- Cultural harm: Morphing celebrities or leaders → reputational damage, misinformation.
Legal & Ethical Dimensions
- Copyright: Editing copyrighted images usually requires permission.
- EU (GDPR):
- Children under 16 cannot themselves consent to processing of their personal data/images (member states may lower the threshold to 13).
- AI-generated “synthetic media” remains in a legal grey zone unless it is explicitly illegal (e.g., CSAM).
- U.S.:
- NCMEC raises alarm on GenAI + child exploitation.
- Deepfake laws vary by state.
- India:
- IT Rules 2021: Intermediaries must remove morphed/AI deepfake content on complaint, within 24 hours for artificially morphed impersonation imagery.
- MeitY advisories: Explicit takedown obligations for CSAM/deepfakes.
- Platforms like Meta, Google, X → mandated grievance officers in India.
- Ethics: Raises questions of consent, dignity, autonomy, especially for vulnerable groups (children, deceased).
Platform Safeguards
- Google Photos:
- Limited prompts (“subtle movements”, “I’m feeling lucky”).
- Adds an invisible digital watermark (SynthID) plus a visible watermark; a toy watermarking sketch follows this list.
- Red teaming, content filters, user feedback loops.
- xAI (Musk): No clear safeguards disclosed yet.
- Industry gaps: Guardrails uneven, enforcement weak, AI firms aggressively promoting services.
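To make the watermarking idea concrete, here is a toy least-significant-bit (LSB) embedding sketch in Python/NumPy. It is an illustrative assumption, not how SynthID actually works (SynthID is a proprietary, learned watermark applied during generation); it also shows why naive watermarks are easy to bypass, which is the detection limitation flagged in the next section.

```python
import numpy as np

def embed_lsb_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bit of each pixel value."""
    flat = image.flatten()
    payload = np.resize(bits, flat.shape)   # repeat the pattern to fill the image
    flat = (flat & 0xFE) | payload          # overwrite only the LSB
    return flat.reshape(image.shape)

def detect_lsb_watermark(image: np.ndarray, bits: np.ndarray) -> float:
    """Return the fraction of LSBs that match the expected pattern."""
    flat = image.flatten()
    payload = np.resize(bits, flat.shape)
    return float(np.mean((flat & 1) == payload))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
    pattern = rng.integers(0, 2, size=64, dtype=np.uint8)   # toy 64-bit "signature"

    marked = embed_lsb_watermark(image, pattern)
    print("match on watermarked image:", detect_lsb_watermark(marked, pattern))  # ~1.0

    # Naive bypass: even mild noise (or any lossy re-encode) scrambles the LSBs,
    # which is why robust schemes tie the watermark to perceptual content
    # instead of raw bit flips.
    noise = rng.integers(-2, 3, size=marked.shape)
    degraded = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
    print("match after mild noise:", detect_lsb_watermark(degraded, pattern))    # drops sharply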
Governance & Policy Gaps
- Global gap: No comprehensive international framework for synthetic media misuse.
- Law lagging tech: Existing regulations target explicit content, not realistic yet non-explicit synthetic media.
- Accountability challenge: Who is liable — creator, platform, or AI company?
- Detection limitations: Watermarks can be bypassed; filters not foolproof.
Way Forward
- Stronger regulations: Global framework on AI content moderation (like GDPR but AI-specific).
- Child protection:
- Explicit ban on synthetic CSAM, treated on par with real CSAM.
- Technical safeguards: compulsory watermarking, detection standards.
- Consent & transparency: Mandatory disclosure when AI-modified content is used.
- Awareness & literacy: Digital literacy campaigns on risks of AI-generated deepfakes.
- Ethical AI: Encourage responsible use (e.g., memory preservation with explicit consent, educational uses).
- India-specific: Integrate with the proposed Digital India Act; focus on AI deepfake detection and strict platform liability.