The Johnny Depp-Amber Heard trial provides lessons in tackling harmful content on social media
Article originally published in The Star
It’s easy to write off the recent American defamation trial involving Amber Heard and Johnny Depp as sensationalist celebrity fodder, but that would be a mistake. The verdict itself has profoundly chilling implications for free speech, while the overall “memeification” of the trial revealed a lot about us as a society.
Additionally, there is an important lesson for the federal government’s forthcoming online harms legislation. Social media played a huge part in framing the narrative around the trial, as did the algorithms pushing all of this content into our feeds — whether we were interested in hearing about the case or not.
The vast majority of the viral content on platforms like YouTube and TikTok was fervently pro-Depp. As reported by NBC News, a “wide variety” of content creators quickly pivoted to posting content about the trial as a way to increase their engagement and, ultimately, their earnings.
Experts and academics have long warned that many social media algorithms incentivize polarizing and otherwise harmful content, and this trial was a perfect example of how powerful those algorithmic incentives can be.
The only way to adequately deal with online harms is to focus on preventative measures that minimize the reach of harmful content to begin with. Yet much of the discourse surrounding online harms in this country has focused on removing content after it has already been posted, and on the free-speech implications of doing so. Luckily, we know there is a better way, one that both gets at the root of the issue and preserves free speech.
Both the EU and the U.K. have developed approaches to online harms that focus on the algorithmic incentives that cause such content to be disseminated and amplified in the first place. Those approaches require platforms to be much more transparent about how their algorithms work.
Currently, platforms keep their algorithms hidden inside a veritable black box, and they need not consider anything other than their own bottom line when designing those systems. Makers of virtually every other consumer-facing product must assess the risks their products pose and demonstrate that steps have been taken to mitigate them. Social media platforms have been largely exempt from this kind of scrutiny, which makes no sense.
This is especially true once one considers the long-standing Canadian statutory principle of the duty to act responsibly. As the Canadian Commission on Democratic Expression concluded in its final report, “The duty to act responsibly to be required of platforms must also include a recognition of the negative impact their activities can generate.”
Everyone has a right to be as lawfully offensive or objectionable as they choose to be online. Nobody, however, has the right to the algorithmic amplification of their speech so that it can be monetized or used for targeted harassment purposes.
Anti-violence advocates Farrah Khan and Mandi Gray recently wrote for the Star, noting that “The social media campaign against Heard is concerning, considering a recent study found that one in three Grade 9 and 10 Canadian youth in relationships report having experienced dating violence. The social media response to the trial contributes to the myths of intimate partner violence.”
Our online discourse is heavily shaped by the opaque, commercialized algorithms of a handful of social media platforms, and that has very real negative impacts on our kids offline.
Both Justin Trudeau and Jagmeet Singh are dads who claim to be feminists. Luckily for them, they have the opportunity to pass online harms legislation this fall that disincentivizes the algorithmic amplification of rampant misogyny to Canadian youth.