
Countering deepfakes, the most serious AI threat

Context: The debate around “deepfakes” has been rekindled recently with the popularity of applications such as FaceApp (for photo-editing) and DeepNude (which produces fake nudes of women).


GS Paper 3: Basics of Cyber Security; Role of media and social-networking sites in internal security challenges; Internal security challenges through communication networks

Mains questions:

  1. It is crucial to enhance media literacy, enact meaningful regulations and platform policies, and amplify authoritative sources. Discuss this statement in the context of deepfakes. 15 marks
  2. Disinformation and hoaxes have evolved from mere annoyance to high-stakes information warfare, creating social discord, increasing polarisation, and in some cases influencing election outcomes. Elaborate. 15 marks

Dimensions of the Article:

  • What are deepfakes?
  • What are the threats related to Deepfakes?
  • Measures to address the challenges related to Deepfakes
  • Way forward

What are deepfakes?

Deepfake is a portmanteau of “deep learning” and “fake”. It refers to Artificial Intelligence (AI) software that superimposes a digital composite onto an existing video (or audio). The origin of the word “deepfake” can be traced back to 2017, when a Reddit user with the username “deepfakes” posted explicit videos of celebrities.

Deepfakes are digital media (video, audio, and images) manipulated using Artificial Intelligence; such synthetic media content is referred to as a deepfake.

What are the threats related to deepfakes?

A cyber Frankenstein: A deepfake is not a computer virus, i.e., a program that can multiply and take over other machines. But it could be used in cyberwarfare to provide cover for a virus or another type of malware (malicious software). It therefore poses multiple challenges:

  • Deepfakes, as hyper-realistic digital falsification, can inflict damage on individuals, institutions, businesses and democracy. They make it possible to fabricate media — swapping faces, lip-syncing speech, and puppeteering bodies — mostly without consent, threatening psychological well-being, security, political stability, and business continuity.
  • Nation-state actors with geopolitical aspirations, ideological believers, violent extremists, and economically motivated enterprises can manipulate media narratives using deepfakes, with easy and unprecedented reach and scale.

Targeting women:

  • The very first malicious use of deepfakes was seen in pornography, inflicting emotional and reputational harm and, in some cases, violence on the individual.
  • Pornographic deepfakes can threaten, intimidate, and inflict psychological harm, and reduce women to sexual objects. Deepfake pornography overwhelmingly targets women.

Damaging individual dignity:

  • Deepfakes can depict a person indulging in antisocial behaviour or saying vile things. These can have severe implications for their reputation, sabotaging their professional and personal lives.
  • Malicious actors can take advantage of unwitting individuals to defraud them for financial gains using audio and video deepfakes.
  • Deepfakes can be deployed to extract money, confidential information, or exact favours from individuals.

Harming social fabric of society:

  • Deepfakes can cause short- and long-term social harm and accelerate the already declining trust in news media. Such an erosion can contribute to a culture of factual relativism, fraying the increasingly strained civil society fabric.
  • The distrust in social institutions is perpetuated by the democratising nature of information dissemination and social media platforms’ financial incentives.
  • Falsity is profitable, and goes viral more than the truth on social platforms. Combined with distrust, the existing biases and political disagreement can help create echo chambers and filter bubbles, creating discord in society.

Challenge to internal security:

  • Imagine a deepfake of a community leader denigrating a religious site of another community. It could trigger riots and, along with property damage, cause loss of life and livelihood.
  • A deepfake could act as a powerful tool by a nation-state to undermine public safety and create uncertainty and chaos in the target country.
  • It can be used by insurgent groups and terrorist organisations to depict their adversaries making inflammatory speeches or engaging in provocative actions, to stir up anti-state sentiment among people.

Undermining democracy:

  • A deepfake can also aid in altering the democratic discourse and undermine trust in institutions and impair diplomacy. False information about institutions, public policy, and politicians powered by a deepfake can be exploited to spin the story and manipulate belief.
  • A deepfake of a political candidate can sabotage their image and reputation. A well-executed deepfake, released a few days before polling, showing a candidate spewing racial epithets or indulging in an unethical act, can damage their campaign.
  • A high-quality deepfake can inject compelling false information that can cast a shadow of illegitimacy over the voting process and election results.
  • Deepfakes contribute to factual relativism and enable authoritarian leaders to thrive. For authoritarian regimes, it is a tool that can be used to justify oppression and disenfranchise citizens. Leaders can also use them to increase populism and consolidate power.
  • Deepfakes can become a very effective tool to sow the seeds of polarisation, amplifying division in society, and suppressing dissent.

Measures to address the threats related to deepfakes:

Collaborative action across legislative regulation, platform policies, technological intervention, and media literacy can provide effective and ethical countermeasures to mitigate the threat of malicious deepfakes.

Media literacy:

  • Media literacy for consumers and journalists is the most effective tool to combat disinformation and deepfakes.
  • Media literacy efforts must be enhanced to cultivate a discerning public. As consumers of media, we must have the ability to decipher, understand, translate, and use the information we encounter.
  • Even a short intervention that builds media understanding — learning the motivations and context behind content — can lessen the damage. Improving media literacy is a precursor to addressing the challenges presented by deepfakes.

Legislative regulations:

  • Meaningful regulations with a collaborative discussion with the technology industry, civil society, and policymakers can facilitate disincentivising the creation and distribution of malicious deepfakes.

Technological solutions:

  • We also need easy-to-use and accessible technology solutions to detect deepfakes, authenticate media, and amplify authoritative sources.
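One basic building block of media authentication mentioned above is allowing an authoritative source to publish a cryptographic fingerprint of the original file, so that any manipulated copy can be detected. As a minimal sketch (the function names are illustrative, not part of any standard tool), a consumer can recompute a SHA-256 digest and compare it with the published one; any alteration to the media, deepfake or otherwise, changes the digest:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a media file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_authentic(path: str, published_digest: str) -> bool:
    """True only if the file exactly matches the digest published by the
    original source; any edit to the bytes produces a different digest."""
    return sha256_of_file(path) == published_digest
```

This only verifies integrity against a trusted reference; detecting a deepfake with no known original requires separate forensic or machine-learning techniques.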

Way forward:

Deepfakes can create possibilities for all people, irrespective of their limitations, by augmenting their agency. However, as access to synthetic media technology increases, so does the risk of exploitation. Deepfakes can be used to damage reputations, fabricate evidence, defraud the public, and undermine trust in democratic institutions. To counter the menace of deepfakes, we must all take responsibility for being critical consumers of media on the Internet, pause and think before we share on social media, and be part of the solution to this infodemic.


Types of cybercrime:

  • Cyber Warfare: states attacking the information systems of other countries for espionage and for disrupting their critical infrastructure.
  • Phishing: a fraudulent attempt, typically made through email, to capture personal and financial information.
  • Cyber Stalking: repeated use of electronic communications to harass or frighten someone
  • Identity theft: a type of fraud in which a person pretends to be someone else and commits crimes in that person’s name.
  • Denial of service (DoS): an attack that attempts to make a computer, server, or network resource unavailable to its authorized users, usually by temporarily interrupting or suspending services.
June 2024