Focus: GS-II Governance
- Last week, a committee of the Delhi Legislative Assembly took up Facebook’s alleged “inaction” following reports that a top Facebook executive in India had “opposed applying hate-speech rules” to users linked to the ruling party, citing business imperatives.
- Facebook has been accused of bias, of not doing enough to discourage hate speech, and of letting governments influence content decisions on the platform.
History of the debate
- Back in 2009, when Facebook had only 200 million users (fewer than its current user base in India), critics noticed Holocaust deniers on the platform – and German prosecutors launched an investigation against the company’s local head for abetment of violent speech.
- In the run-up to the 2016 presidential election in the United States, Facebook allowed videos posted by then-candidate Donald Trump that violated its guidelines, saying the content was “an important part of the conversation around who the next US President will be”.
- In 2018, Facebook admitted to having failed to prevent human rights abuses on the platform in Myanmar.
Regulations in place
- Internet regulations in several countries (including Section 79 of India’s Information Technology Act) protect platforms from immediate liability for user content, ostensibly to guard against over-censoring.
- Amid rising frustration, however, governments are threatening to withdraw those legal protections.
How content moderation works
- The vast majority of content that violates company rules on Facebook is flagged by an algorithm.
- Users can also manually flag content, which is directed to content moderators working in contracted offices all over the world.
- If the content is escalated to Facebook employees on a team called ‘Strategic Response’, they can rope in other local teams.
- Public policy is often involved if there are certain sensitive risk factors, such as political risk, that need to be considered.
- If internal groups disagree, the decision is escalated, potentially all the way up to Zuckerberg himself.
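The escalation flow described above can be sketched as a minimal illustration. All function names, field names, and decision logic here are hypothetical stand-ins for the stages the article describes, not Facebook’s actual system:

```python
# Hypothetical sketch of the content-moderation escalation pipeline
# described above. Names and logic are illustrative assumptions.

def moderate(post: dict) -> str:
    """Route a flagged post through successive review stages."""
    # 1. Automated check: most violating content is caught by an algorithm.
    if post.get("auto_flagged"):
        return "removed by algorithm"

    # 2. User reports go to contracted content moderators worldwide.
    if not post.get("user_reported"):
        return "no action"
    verdicts = {"moderator": post.get("moderator_verdict")}

    # 3. Sensitive cases reach a central team ('Strategic Response' in the
    #    article), which may consult local and public policy teams when
    #    risk factors such as political risk are involved.
    if post.get("politically_sensitive"):
        verdicts["public_policy"] = post.get("policy_verdict")

    # 4. If internal teams disagree, the decision is escalated upward,
    #    potentially all the way to the CEO.
    if len(set(verdicts.values())) > 1:
        return "escalated to leadership"
    return "removed" if verdicts["moderator"] == "violates" else "kept"
```

For example, a user-reported post that one team judges “violates” while public policy says “keep” would return `"escalated to leadership"`, mirroring the disagreement path described above.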
The situation in India
- The Wall Street Journal report said Facebook’s top public policy executive in India cited government-business relations as a reason not to apply hate-speech rules to certain individuals and groups linked to a particular party, even though they had been internally flagged for violent speech.
- The report’s findings of a “pattern of favoritism” towards the party call into question the personal political leanings of company officials and their ability to dominate content deliberations.
- Some technology policy experts have called for a clear separation between public policy and content moderation, while others have criticised the processes as “informal and opaque”.
- Traditionally, public policy roles have entailed government relations. This model does not, however, extend well to social platforms.
- There is an inherent conflict of interest when government relations and policy enforcement are handled by the same people or organisational units.
- The real concern is the lack of public transparency and insufficient internal accountability in decision-making when it comes to content moderation.
- The problem highlighted by the recent revelations regarding Facebook India is that public policy teams apparently have the ability to veto or block content decisions based exclusively on business considerations.
- The platforms are reacting by self-regulating in a manner that imperils free expression while failing to truly address the problem of hyper-local harmful speech.
What is the Information Technology (IT) Act?
- The Information Technology Act, 2000 is the primary law in India dealing with cybercrime and electronic commerce.
- The Act applies to the whole of India. Persons of other nationalities can also be charged under the law if the crime involves a computer or network located in India.
- The aim of the Act was to provide a legal infrastructure for e-commerce in India.
- The Information Technology Act, 2000 also aims to provide for the legal framework so that legal sanctity is accorded to all electronic records and other activities carried out by electronic means.
- It also defines cyber-crimes and prescribes penalties for them.
Section 66A of IT Act – Struck down
- Section 66A of the IT Act was enacted to regulate social media in India and assumed importance because it governed the legal issues arising from content posted online.
- The section restricted the transmission or posting of messages, mails, or comments that could be deemed offensive or unwarranted.
- The offending message could be in the form of text, image, audio, video, or any other electronic record capable of being transmitted.
- Such sweeping powers under the IT Act gave the Government a tool to curb any alleged misuse of social media.
- However, in 2015, in a landmark judgment upholding the right to free speech, the Supreme Court in Shreya Singhal v. Union of India struck down Section 66A of the Information Technology Act, 2000.
- The judgment found that Section 66A was contrary to both Article 19 (free speech) and Article 21 (right to life) of the Constitution.
- The striking down of Section 66A does not, however, result in an unrestricted right to free speech, since analogous provisions of the Indian Penal Code (IPC) continue to apply to social media.
- Source: Indian Express, Livemint