Social media exists in a legal grey area where traditional legal frameworks are no longer relevant or capable of regulating effectively. So what are the implications of media convergence for hate speech?
It’s simple. When existing media is digitised and distributed differently, it brings the existing rules pertaining to each medium into question.
If you ask Attorney-General Robert McClelland, he’ll tell you that copyright reform is challenging because technological developments outpace legislation, leaving legislative solutions lagging behind reality. More problematic still, governments are being asked to find national solutions to a global problem.
It’s time for legislation to “catch up.”
CC BY-SA – courtesy of Adnan Islam
Take TV, for example: TV content can be watched on a television set, delivered online, or streamed to a mobile device. The service is the same, but the rules for each medium are different. Worse still, many of the communication modes it combines (audio, visual, print) were once governed by distinct policy areas.
Before Facebook can be held accountable for failing to remove antisemitic hate speech, there must be a workable definition of antisemitism, or at least salient examples of it. The European Union Agency for Fundamental Rights offers examples such as “Holding Jews collectively responsible for actions of the state of Israel” and “Accusing the Jews as a people, or Israel as a state, of inventing or exaggerating the Holocaust.”
Can we balance a global regulation policy with national laws?
Image thanks to Funmunch.com
Social media providers like Facebook already act of their own volition to remove material such as nudity and copyright-infringing content. Yet when it comes to hate speech, their reluctance to tackle the problem largely derives from the First Amendment to the US Constitution, which protects free speech.
This legislative anomaly puts the United States out of step with international legal expectations. The clash has resulted in US companies facilitating hate speech in countries like Australia, where such free speech privileges are not constitutionally guaranteed. A little imperialistic, don’t you think?
Check out our #Unbonjuif case study, in which Twitter said it is committed to maintaining free speech unless a legal ruling takes precedence, in which case it will reluctantly cooperate.
Who should be in charge of policing offensive content?
Image thanks to Qwaider
Determining what constitutes a breach of policy is often a subjective matter. Facebook admits it “allow[s] attempts at humour or satire” that might otherwise be considered hate speech to remain online. We need to question who decides what is offensive as opposed to funny, and whether these guidelines are open to public review.
Naturally, when discussing the possibility of regulating the Internet, libertarian arguments arise, such as the need for self-moderation and the idea that truth will emerge through public debate. Is this ‘free market’ idea a little too romantic?
The Internet Law Bulletin argues that the burden placed on individuals to bring complaints is insurmountable, as the sheer volume of content on Facebook makes it impossible to identify every instance of hate speech. Andre Oboler believes the free speech argument is fundamentally flawed: it ignores the damage inflicted on victims, and the consequences far outweigh the benefits.
Although self-policing the Internet presents itself as democratic, this form of regulation still requires legislative reform and additional grievance mechanisms that go beyond the individual complaint system.
The Convergence Review – Realistic?
Image thanks to Social Media House
The Convergence Review clearly favours self-regulation over government intervention, except where serious breaches of industry codes have occurred. Internet activist Mark Newton is convinced the Convergence Review “is searching for a local regulatory response to a global phenomenon.” By framing ‘convergence’ as an Australian media issue requiring an Australian regulatory response, Newton believes, the review’s findings will be obsolete by the time they’re published.
Sounds like a Catch-22, right?
One of the key problems with the Convergence Review is that it advocates self-regulation by industry professionals. Crawford and Lumby argue that users have only a limited role in regulating online hate speech, as they are restricted to making complaints. They believe users should play a more active role in shaping policy and industry practice.
Some thoughts to consider:
Image thanks to Demotivatingposters.com
Currently there is no international body with jurisdiction over policy in the global, converged Internet space. It’s difficult to know whether user regulation, national regulation, industry-based regulation or unified global regulation is preferable.
- Are traditional regulation policies still applicable within a dynamic and converged online landscape?
- Is it time to adopt a fresh approach to regulating the Internet?
- Do social media companies like Twitter and Facebook owe a duty to their users to take reasonable steps to discourage online hate, a public wrong against the community?