Can Facebook be sued for live-streaming suicides?

nshepard/flickr, CC BY-NC-ND

Shontavia Johnson, Drake University

In January, two different suicides were streamed using Facebook Live, a service that allows Facebook users to create and broadcast real-time videos to their followers. At the end of the month, a third was streamed live using a different service and is still publicly available on Facebook.

On January 22, a 14-year-old girl hanged herself in front of about 1,000 viewers. She spent nearly an hour preparing, and followers watched for another hour as her body hung in the bathroom where she took her life.

The very next day, a 33-year-old father of six told his Facebook followers that he was going to kill himself and subsequently did so while live-streaming. The video remained up on his Facebook page for eight hours, with many users sharing it on their own pages.

The following week, a popular gospel singer filmed a two-minute live video of himself drinking poison after breaking up with his girlfriend. People who knew him were shocked, and his suicide sparked conversation on social media about suicide prevention.

This disturbing trend isn’t new. In fact, just weeks before these suicides, a 12-year-old girl created a 40-minute live video of her own suicide using the streaming app Live.me. It went viral on YouTube and Facebook, and even though her family immediately deleted it, it took Facebook two weeks to scrub all traces of it from the network.

These streams can be damaging on a number of fronts. Could they cause suicide contagion, in which a suicide attempt in an online or local community raises the likelihood of more suicide attempts within that network? What about the trauma inflicted on those who watch, especially family and friends?

Should Facebook be legally obligated to do more to prevent these types of disturbing live broadcasts?

Some see an ethical obligation for the company to do so. But as a law professor who researches the real-world implications of social media, I don’t believe current law requires Facebook to take any additional steps.

Though these types of videos are tragic and devastating, the law has evolved in a way to protect social media companies from most lawsuits.

Suing for emotional harm

In general, you can sue for emotional distress when you witness the death of a family member. In recent times, relatives have sued television stations that aired deaths and suicides live on air. But it can be difficult to win these cases, even though the legal rules are well-established.

Since the late 19th century, the law has recognized a limited right to sue for emotional harm. However, this right has been controversial from the start because courts typically think of harm as physical in nature. Courts have also been skeptical because emotional harm is difficult to prove and judges disfavor rules with indefinite boundaries.

The 1968 case Dillon v. Legg recognized that relatives could bring a lawsuit for emotional distress without any physical injury. After a child was hit by a car and killed, the child’s mother and sister – who witnessed the accident – sued the car’s driver, claiming emotional distress. The court held that they could recover damages, even though they weren’t physically harmed or in danger of being harmed themselves.

The rule for this sort of claim – called “negligent infliction of emotional distress” (NIED) – generally requires that:

  • the plaintiff was near the scene of the incident;
  • the plaintiff’s observation of the incident caused significant distress;
  • the plaintiff and the victim were closely related.

But this is hard to prove and often unsuccessful in cases involving media. For example, in 2015, the Indiana Supreme Court denied damages to a father who learned from a television report of the car accident that killed his son and then rushed to the scene. The court determined that the father could not prove NIED because he knew about the accident before arriving on the scene, did not observe any injury, blood or resuscitation efforts, and never saw his son’s body uncovered by the white sheet.

Does Facebook already do its part?

It isn’t clear how these rules apply to social media. In the United States and Ireland, Facebook has been sued in “revenge porn” cases. One was dismissed before a decision; the other is still pending.

Federal law, however, probably protects social media companies like Facebook if they’re confronted with revenge porn or live suicide cases. For example, Section 230 of the 1996 Communications Decency Act provides that providers of “interactive computer service[s]” are not the speakers or publishers of information provided by others. This means that Facebook isn’t liable for what people post, though there are exceptions for things like federal criminal law and intellectual property violations. The law also protects companies like Facebook from being held responsible for what they take down or leave up.

Furthermore, though it isn’t required to under the law, Facebook has created its own community standards, which prohibit “the promotion of self-injury or suicide.” It also allows users to report such posts so that it can consider whether to remove them (though Facebook is generally hesitant to take information down). Its basic policy “is to not remove any user content, as long as the value of public discourse outweighs the discomfort caused by said content.”

In addition, the social media site has voluntarily created a detailed suicide prevention page and a two-tiered reporting system. Viewers can report the content to Facebook directly from the post itself or through the “Report Suicidal Content” page, which implores users to contact law enforcement or a suicide hotline immediately.

A slippery slope

To force Facebook to change its approach, either the law must change or users must demand more corporate accountability. But changing the law to extend liability for emotional distress stemming from live-streamed suicides would open a Pandora’s box of issues. If Facebook could be sued for money damages based on live suicide posts, it could lead to countless lawsuits. This would also force the company to take drastic, and perhaps impractical, measures to protect itself by monitoring and deleting billions of posts.

This would raise serious, legitimate concerns about when censorship is appropriate. As of the end of 2016, Facebook had 1.86 billion monthly active users worldwide. Monitoring every user’s content for acceptability would be a monumental, if not impossible, burden. Legally requiring the company to determine when something should stay up or be removed because of the emotional distress it could inflict would require Facebook to make nearly instantaneous decisions about complicated legal questions.

In addition, the company could potentially be vulnerable to millions of lawsuits. Content is constantly being posted. If Facebook’s team makes any wrong calls, it could find itself responsible for significant legal damages on a number of fronts.

Finally, the negative impact on speech would be significant: The company would have the power to determine what posts are permissible and what posts aren’t. In the United States, one bedrock legal principle is protecting the free exchange of speech and ideas. Others, however, have noted that allowing nearly all speech online has created a dangerous world full of doxing, bullying and live suicide videos.

Live suicide videos are harrowing and alarming, and social media has made them easy to watch. Centuries-old laws related to emotional distress could not have anticipated these events that are, unfortunately, becoming more frequent. It may be impossible for Facebook to eliminate them from its site, and current law does not obligate it to do so. To prevent live-streamed suicide videos, we may ultimately have to rely on giving social media users the tools to help prevent suicides.


Shontavia Johnson, Professor of Intellectual Property Law, Drake University

This article was originally published on The Conversation. Read the original article.
