Content
- A landmark law in 2013, it needs a spine in 2025
- Decoding personality rights in the age of AI
A landmark law in 2013, it needs a spine in 2025
Why is it in News?
- Chandigarh case in which a college professor was terminated after an ICC inquiry under the POSH Act, 2013, a rare instance of decisive, time-bound action.
- The complaint (filed September 2024) was found proved, signalling institutional accountability but also revealing systemic gaps in POSH implementation in higher-education spaces.
- Case has reignited debate on consent, power imbalance, evidence standards, timelines, and institutional hesitation in POSH processes.
Relevance
GS2 – Governance / Social Justice
- Workplace dignity and gender justice as part of India’s constitutional mandate (Art. 14, 15, 21).
- Institutional accountability: functioning of ICC, procedural fairness, grievance redressal.
- Gaps in regulatory architecture: timelines, coordination, evidentiary standards.
GS2 – Polity / Law Reform
- POSH’s statutory limitations vs. evolving forms of harassment (emotional, digital, relational manipulation).
- Need for clearer definitions, informed consent, multi-institution coordination, and digital forensics.
- Revisiting malicious complaint clause and survivorship-friendly processes.
GS1 – Society
- Power asymmetry in academia; hierarchical vulnerabilities in mentor–student relationships.
- Social stigma, delayed reporting, emotional fatigue as barriers to justice.
- Importance of gender sensitisation and behavioural-pattern recognition.
Practice Question
- “Despite being a progressive law, the POSH Act, 2013 remains structurally weak in addressing modern forms of workplace harassment.” Analyse with reference to recent higher-education cases. (250 words)
Basics
- POSH Act, 2013 enacted to prevent, prohibit, and redress sexual harassment at the workplace; mandates Internal Complaints Committees (ICC).
- Applies to all workplaces, including universities, colleges, research institutions.
- Sexual harassment definition includes unwelcome physical, verbal, non-verbal conduct, quid pro quo, hostile environment.
- Timeline: a complaint must be filed within 3 months of the incident; the ICC may extend this by a further 3 months.
Conceptual Gaps in the Law
- Consent vs Informed Consent
- Act does not recognise “informed consent”; ignores emotional manipulation, authority-based persuasion, or information asymmetry.
- On campuses and in workplaces, earlier “consent” becomes invalid once manipulation surfaces; the Act fails to capture such relational exploitation.
- Emotional & Psychological Harassment
- Law primarily recognises explicit acts; emotional coercion, grooming, betrayal, and manipulation fall outside statutory scope.
- Educated perpetrators exploit conduct that leaves “no evidence”, operating in legal grey zones.
Procedural Flaws Exposed by the Case
- Three-Month Limitation Period
- Survivors experiencing coercion/manipulation take longer to recognise harassment.
- In universities (multi-year engagement), evidence or realisation surfaces later; strict timeline empowers perpetrators.
- Fear of ‘Malicious Complaint’ Clause
- Provision meant as a safeguard ends up intimidating genuine complainants, discouraging delayed reporting.
- Terminology Diluting Seriousness
- Calling the accused a “respondent” softens the gravity of the conduct, unlike in criminal law.
- The same conduct outside the workplace is a cognisable offence; the workplace label cannot be allowed to trivialise the harm.
Investigative and Evidentiary Challenges
- Vague Definitions
- Burden of proof shifts heavily to women.
- Harassment usually occurs as a pattern, not an isolated act; ICCs often dismiss cases for lack of direct evidence.
- Need for Behavioural Assessment Tools
- Anonymous student feedback, corroborative testimonies, pattern recognition essential.
- ICCs should be equipped to read circumstantial evidence, social behaviour patterns, and informal networks.
- Digital Evidence Gap
- Technology allows ephemeral messages, disappearing media, encrypted chats.
- ICCs lack technical training; POSH lacks protocols for digital forensics, leaving cases unproven despite real digital harassment.
Inter-Institutional Blind Spot
- No mechanism to handle misconduct across multiple campuses, collaborations, conferences, visiting faculty roles.
- Repeat offenders evade accountability due to institutional silos; Act silent on sharing information or joint proceedings.
Institutional Barriers
- Procedural delays, institutional hesitation, fear of controversy, and lack of sensitisation cause secondary victimisation.
- Women face emotional fatigue, reputational risks, power imbalance, especially in academia where mentor-student hierarchies shape vulnerability.
Needed Reforms
- Extend or remove three-month limitation period.
- Recognise informed consent, emotional coercion, digital harassment within statutory definitions.
- Standardised digital evidence protocols and legal–technical training for ICC members.
- Enable inter-institutional coordination, especially for academia.
- Strengthen role clarity, confidentiality norms, survivor-centric processes.
- Replace chilling clauses (“malicious complaint”) with nuanced safeguards.
Conclusion
- Chandigarh case is a rare success, but it exposes structural gaps: narrow definitions, procedural rigidity, digital blind spots, and institutional hesitancy.
- Without reforms, POSH remains strong on paper but weak in implementation, especially in universities where power asymmetry and delayed recognition are common.
- The Act needs clearer language, extended timelines, recognition of emotional/digital abuse, and stronger investigative frameworks to deliver consistent, empathetic justice.
Decoding personality rights in the age of AI
Why is it in News?
- Actors Abhishek Bachchan and Aishwarya Rai Bachchan have sued Google and YouTube in the Delhi High Court for hosting AI-generated deepfake videos depicting them in fabricated, often explicit scenarios.
- Petition alleges violation of personality rights (name, image, likeness, voice), reputational harm, commercial loss, and seeks future AI-training safeguards.
- Case spotlights India’s legal vacuum on AI-enabled impersonation and the rising challenge of deepfake abuse across platforms.
Relevance
GS2 – Governance / Regulation of Technology
- Rising legal vacuum in regulating AI-enabled impersonation, deepfakes, and identity misuse.
- Intermediary liability, safe-harbour limits, and absence of a dedicated Personality Rights framework.
- Constitutional dimensions under Article 21 (privacy, dignity).
GS3 – Cybersecurity / Emerging Tech
- Deepfakes as threats to trust, authenticity, national information integrity.
- Need for AI watermarking, provenance logs, dataset consent, and high-risk classification.
- Comparative global models: EU (dignity-based), US (publicity right), China (strict synthetic content rules).
GS1 – Society / Ethics
- Ethical issues: autonomy, consent, posthumous identity, commodification of persona.
- UNESCO’s rights-based framework for human-centric AI.
- Manipulation, misinformation, reputational harm, and psychological effects.
Practice Question
- “The rise of deepfake technologies has exposed a critical gap between India’s constitutional guarantees of dignity and the absence of a statutory personality-rights regime.” Discuss with global comparisons. (250 words)
Personality Rights
- Protects control over a person’s name, image, likeness, voice, signature, gestures, persona.
- Origin: Common law doctrines of privacy, dignity, and unjust enrichment.
- Economic dimension: Prevents unauthorised commercial exploitation.
- Moral dimension: Upholds individual autonomy, honour, and human dignity.
- Enforcement mechanisms: tort law, passing off, privacy claims, IP principles (copyright/trademark analogies).
Personality Rights Under Strain in the AI Era
- Deepfakes and generative models rapidly replicate faces/voices, making unauthorised impersonation easy and scalable.
- Blurs the line between authentic and synthetic identity, enabling:
- misinformation,
- harassment and explicit content,
- commercial misappropriation,
- erosion of public trust.
- AI models often train on scraped internet data without consent, leading to use of celebrity likeness in outputs.
India’s Legal Position (Hybrid Model)
1. Constitutional Basis
- Personality rights derive from Article 21 (privacy, dignity); affirmed in Puttaswamy (2017).
2. Key Judicial Precedents
- Amitabh Bachchan v. Rajat Nagi (2022): court recognised personality rights and restrained unauthorised commercial use.
- Anil Kapoor v. Simply Life India (2023): prohibited AI-generated recreations of Kapoor’s face/voice and misuse of “Jhakaas”.
- Arijit Singh v. Codible Ventures (2024): Bombay HC restrained AI cloning of his voice.
3. Statutory Gaps
- No codified personality rights statute.
- IT Act 2000 + 2021/2024 Intermediary Guidelines → address impersonation, harmful deepfakes, takedown duties.
- Enforcement issues:
- anonymity of creators,
- cross-border platforms,
- absence of explicit liability for AI training datasets,
- reactive takedown rather than preventive obligations.
Comparative Global Framework
United States (Property–Centric Model)
- “Right of Publicity”: heritable, assignable property right.
- Haelan Labs v. Topps (1953) established monetisation of identity.
- Tennessee’s ELVIS Act (2024): bans unauthorised AI use of voice/likeness.
- Character.AI sued for bots generating harmful outputs; First Amendment defence rejected.
European Union (Dignity–Centric Model)
- GDPR requires consent for processing biometric data.
- EU AI Act (2024): deepfakes attract transparency obligations → mandatory disclosure and machine-readable marking of synthetic content.
China
- Beijing Internet Court (2024): synthetic voices must not mislead users.
- Voice actor awarded damages for AI-replicated voice sold without consent.
Academic Proposals
- Westkamp et al. (2025): expand rights to include style, persona, aesthetic signatures.
- Scholars propose global harmonisation, high-risk categories, and explicit prohibitions on deceptive AI impersonation.
Ethical Debates
- AI threatens autonomy, authorship, posthumous identity.
- UNESCO’s Recommendation on the Ethics of AI (2021) → human-centric, rights-based approach.
- Concerns:
- use of dead artists’ voices,
- non-consensual explicit deepfakes,
- AI becoming quasi-author,
- risks in granting AI legal personhood (Forrest 2023: human rights dilution).
Key Problems
- No statutory definition of personality rights.
- No binding obligations for AI watermarking, provenance tracking, dataset consent.
- Weak intermediary liability and safe-harbour loopholes.
- Lack of cross-border cooperation given global AI models.
Way Forward
- Enact a Personality Rights Act: define rights, remedies, licensing, posthumous scope.
- Mandate:
- watermarking and provenance logs for all synthetic content (a minimal sketch follows this list),
- compulsory consent for training datasets,
- strict liability for deceptive deepfakes,
- rapid takedown + penalties.
- Create an AI Ombudsman + high-risk AI registry.
- Global alignment using UNESCO standards.
- Public digital literacy on AI harms.
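What a “provenance log” entry might contain, as a minimal sketch in Python: this is illustrative only, not drawn from any statute or judgment; the field names, the consent flag, and the model identifier are all hypothetical, and real provenance standards (e.g., C2PA content credentials) are considerably richer.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(media_bytes: bytes, generator: str, consent_obtained: bool) -> dict:
    """Build one minimal provenance entry for a piece of synthetic media.

    The SHA-256 digest ties the record to the exact file; the other fields
    capture which model produced it and whether consent for the depicted
    person's likeness was logged, i.e. the kind of metadata a provenance
    mandate would oblige platforms to retain.
    """
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds the record to this exact file
        "generator": generator,                 # hypothetical model name/version
        "synthetic": True,                      # explicit synthetic-content flag
        "consent_obtained": consent_obtained,   # was likeness-use consent recorded?
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Example: log a deepfake clip generated without the subject's consent.
    record = provenance_record(b"<video bytes>", "example-model-v1", consent_obtained=False)
    print(json.dumps(record, indent=2))
```

A registry of such hashed, timestamped entries is what would let a regulator or court verify when, and by which system, a contested clip was generated.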