What is hate speech?
In common language, “hate speech” refers to offensive discourse targeting a group or an individual based on inherent characteristics (such as race, religion or gender) and that may threaten social peace.
The UN Strategy and Plan of Action on Hate Speech defines hate speech as “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.”
However, to date there is no universal definition of hate speech under international human rights law.
What are the attributes of hate speech?
Hate speech has three important attributes:
I. Hate speech can be conveyed through any form of expression, including images, cartoons, memes, objects, gestures and symbols, and it can be disseminated offline or online.
II. Hate speech is “discriminatory” (biased, bigoted or intolerant) or “pejorative” (prejudiced, contemptuous or demeaning) of an individual or group.
III. Hate speech calls out real or perceived “identity factors” of an individual or a group, including “religion, ethnicity, nationality, race, colour, descent, gender,” but also characteristics such as language, economic or social origin, disability, health status, or sexual orientation, among many others.
It’s important to note that hate speech can only be directed at individuals or groups of individuals. It does not include communication about States and their offices, symbols or public officials, nor about religious leaders or tenets of faith.
What are the challenges raised by online hate speech?
“We must confront bigotry by working to tackle the hate that spreads like wildfire across the internet.”
ANTÓNIO GUTERRES, United Nations Secretary-General, 2023
The growth of hateful content online has been coupled with the rise of easily shareable disinformation enabled by digital tools. This raises unprecedented challenges for our societies, as governments struggle to enforce national laws at the scale and speed of the virtual world.
Unlike in traditional media, online hate speech can be produced and shared easily, at low cost and anonymously. It has the potential to reach a global and diverse audience in real time.
The relative permanence of hateful online content is also problematic, as it can resurface and regain popularity over time.
Understanding and monitoring hate speech across diverse online communities and platforms is key to shaping new responses. But efforts are often stunted by the sheer scale of the phenomenon, the technological limitations of automated monitoring systems and the lack of transparency of online companies.
Meanwhile, the growing weaponization of social media to spread hateful and divisive narratives has been aided by online corporations’ algorithms.
This has intensified the stigma vulnerable communities face and exposed the fragility of our democracies worldwide.
It has intensified scrutiny of Internet players and sparked questions about their role and responsibility in inflicting real-world harm.
As a result, some countries have started holding Internet companies accountable for moderating and removing content considered to be against the law, raising concerns about limitations on freedom of speech and censorship.
What is the impact of hate speech?
Although hate speech has always existed, its ever-growing impact fueled by digital communication can be devastating not only for those targeted, but also for societies at large.
Hate speech is a denial of the values of tolerance, inclusion, diversity and the very essence of human rights norms and principles.
It may expose those targeted to discrimination, abuse and violence, but also social and economic exclusion.
When left unchecked, expressions of hatred can even harm societies, peace and development, as they lay the ground for conflict, tension and human rights violations, including atrocity crimes.
How can hate speech be prevented?
Addressing hate speech does not mean limiting or prohibiting freedom of speech. It means keeping hate speech from escalating into something more dangerous, particularly incitement to discrimination, hostility and violence, which is prohibited under international law.
Addressing and countering hate speech is a necessity. It requires a holistic approach, mobilizing society as a whole.
All individuals and organizations – including governments, the private sector, media, Internet corporations, faith leaders, educators, youth and civil society – have a moral duty to speak out firmly against hate speech and a crucial role to play in countering this scourge.
Importantly, combating hate speech first requires monitoring and analysing it to fully understand its dynamics.
Since the spread of hateful rhetoric can be an early warning of violence – including atrocity crimes – limiting hate speech could contribute to mitigating its impact.
The authors of hate speech should also be held accountable, to end impunity.
Emphasis should also be placed on initiatives that promote greater media and information literacy among online users, while ensuring the right to freedom of expression.
When is the International Day for Countering Hate Speech observed?
In July 2021, the UN General Assembly highlighted global concerns over “the exponential spread and proliferation of hate speech” around the world and adopted a resolution on “promoting inter-religious and intercultural dialogue and tolerance in countering hate speech”.
The resolution recognizes the need to counter discrimination, xenophobia and hate speech and calls on all relevant actors, including States, to increase their efforts to address this phenomenon, in line with international human rights law.
The resolution proclaimed 18 June as the International Day for Countering Hate Speech, building on the UN Strategy and Plan of Action on Hate Speech launched on 18 June 2019.