How to protect human rights in the digital era

Upendra Baxi writes: Despite the threat of information disorder, human rights-friendly governance is both possible and doable.

Two very critical human rights events occurred recently. The Supreme Court of India (SC) further warned against any “clampdown” on “free speech”. And the UN Human Rights Council (UNHRC) Special Rapporteur Irene Khan submitted her report on “Disinformation and Freedom of Opinion and Expression”, which is slated for discussion between June 21 and July 9.

Justices Dhananjay Chandrachud, L Nageswara Rao and S Ravindra Bhat declared that any “clampdown on information on social media or harassment caused to individuals seeking/delivering help on any platform will attract a coercive exercise of jurisdiction by this Court”. The SC mandated not just the registrar (judicial) to place this “order before all district magistrates in the country” but also directed the central and state governments to notify “all chief secretaries/director generals of police/commissioners of police”. Suo motu “coercive action” for contempt of court may also spill over to other types of criminal proceedings. Although tethered to the Covid-19 context, the SC reinforces past precedents enshrining the principle that abuse of public power may not unreasonably or arbitrarily curb the freedom of speech, press, and media platforms.

The UNHRC report specifically speaks of “information disorder” that arises from disinformation which is “politically polarising, hinders people from meaningfully exercising their human rights, and destroys their trust in governments and institutions”. Human rights provide a “powerful and appropriate framework” to “challenge falsehoods and present alternative viewpoints”. The report also advances a utilitarian justification of human rights: freedom of opinion and expression enables governance and development, and it allows civil society, journalists and others to challenge those falsehoods and present alternative viewpoints. Human rights-friendly governance is both possible and doable; it is also desirable, as it protects political power against itself.

Recalling the UNHRC’s condemnation of inherently “disproportionate” and “blanket” internet shutdowns, the report asserts that “reactive content moderation efforts” are unlikely to make any worthwhile difference in the “absence of a serious review of the business model that underpins much of the drivers of disinformation and misinformation”. Problems of “inconsistent application of companies’ terms of service, inadequate redress mechanisms and a lack of transparency and access to data” re-emerge constantly. Furthermore, “although the platforms are global businesses, they do not appear to apply their policies consistently across all geographical areas or to uphold human rights in all jurisdictions to the same extent”. Internet shutdowns do “not curb disinformation but, rather, hamper fact-finding and are likely to encourage rumours”, and are manifestly contrary to rights against discrimination when “aimed at silencing minority voices and depriving them of access to vital information”.

The report unequivocally maintains that disinformation “endangers the right to freedom of opinion and expression”. It “poses a threat not only to the safety of journalists but also to the media ecosystem in which they operate” and forces the “legacy media to divert precious resources from reporting to dispelling and debunking lies”. Bemoaning the lack of legislative and judicial clarity on the twin concepts of “disinformation” and “misinformation”, it emphasises that the intention to harm is decisive to the former. “Disinformation” is “false information disseminated intentionally to cause serious social harm”. In contrast, misinformation consists in “the dissemination of false information unknowingly”. Nor are these terms to be used interchangeably.

Acknowledging the brute fact that “extremist or terrorist groups” frequently engage in the dissemination of “false news and narratives as part of their propaganda to radicalise and recruit members”, the report disfavours any sledgehammer state response that adds to “human rights concerns”.

However, the growth of disinformation in recent times cannot be attributed solely to technology or malicious actors, according to the report. Other factors also matter: digital transformation and competition from online platforms, state pressure, the absence of robust public information regimes, and low levels of digital and media literacy among the general public. Moreover, disinformation mongers amplify the “frustrations and grievances of a growing number of people” rooted in “decades of economic deprivation, market failures, political disenfranchisement, and social inequalities”. Disinformation is thus not the “cause but the consequence of societal crises and the breakdown of public trust in institutions”. Strategies to “address disinformation” will succeed only when these underlying factors are tackled.

All this makes for elegant and substantially accurate reading. But are states, governments and political parties not already encased within what Shoshana Zuboff describes as surveillance capitalism, an integral aspect of information civilisation? A 2020 Oxford study of “Industrialised Disinformation” mentions that as many as “81 governments” use “social media to spread computational propaganda and disinformation about politics”. Despite Facebook and Twitter recently removing more than 3,17,000 accounts and pages, the “cyber troops” often act as “agents” of political parties and a tool of geopolitical influence. Some “authoritarian countries like Russia, China and Iran capitalised on coronavirus disinformation to amplify anti-democratic narratives designed to undermine trust in health officials and government administrators”. Cyber troops remain available to spread “pro-government or pro-party propaganda”, attack “the opposition”, mount “smear campaigns”, suppress participation through trolling or harassment, and manufacture “narratives that drive division and polarise citizens”. Online disinformation also results in offline practices of violent social exclusion directed at actually existing individuals and communities, such as ethnic, gender, migrant and sexual minorities. Nor are the offline experiences of the social perils of cyberwar to be ignored.

How does one decide on a sustainable mix of regulatory with penal regimes? How may these yield to the new criminological approach stressing the three Ds — decriminalisation, de-penalisation and deinstitutionalisation?

Reactive content moderation efforts are simply inadequate without a “serious review of the business model that underpins much of the drivers of disinformation and misinformation”. No doubt, the platforms are global businesses, but do they “apply their policies consistently” or “uphold human rights in all jurisdictions to the same extent”? Will future itineraries of human rights in the digital era repeat past mistakes? The report offers grist to the mill for profound thought and conscientious action.

This column first appeared in the print edition on June 9, 2021 under the title ‘The disinformation detox’. The writer is professor of law, University of Warwick, and former vice chancellor of Universities of South Gujarat and Delhi.