A Lack Of (Good) Faith: Why Social Media Needs To Respect Section 230 Immunity

Call me old-fashioned, but I actually like the healthy exchange of ideas.

Sometimes you really need to be careful with what you have — especially if you are a social media platform. In today’s political climate, that is an understatement. All you need to do is look at the “compliance monitoring” conducted by the largest social media platforms and you will see a disturbing trend — the removal of everything from politically charged posts to certain memes on grounds of noncompliance with those platforms’ terms of service and acceptable use policies. At a time when the political divide in this country can scarcely get any wider, these actions have been brought to the fore by political figures claiming they are being targeted by these platforms based on “fact checking,” with President Donald Trump even signing an executive order against online censorship. Whether or not you agree that such moderation equates to censorship, there is little question that social media platforms are flagging or removing content (and even suspending accounts) at an alarming rate in 2020. What these platforms may not realize is that the previously stable legal ground could collapse underneath their feet in the process.

What I am talking about here is the immunity provided under Section 230 of the Communications Decency Act (CDA). I have written about aspects of Section 230 before here, but as time has passed, I have seen more and more inconsistent activity across many social media platforms such as Facebook, YouTube, and Twitter. By inconsistent activity, I mean the inconsistent application of the very terms of service upon which these platforms ostensibly rely to form the contractual basis for customer use. At times, platforms take questionable actions under the guise of “compliance,” only to invoke the immunity shield of Section 230 when legally called out for them. I am not writing about this in a vacuum — I am seeing it happen with my own clients. So it’s time to ramp up the conversation and explain why I believe online service platforms should see the writing on the wall for Section 230 if they are not careful.

To understand why this may be the case, it is important to understand the structure and application of Section 230. First, Section 230(c)(1) specifically states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This section essentially shields online service providers from civil liability for defamatory, tortious, and even illegal content that their users post to the platform. In other words, the online service provider cannot be treated as a publisher or speaker of such content. This is the portion of Section 230 of which most people are immediately aware, such as where users post defamatory comments on a website.

The other provision, Section 230(c)(2), focuses on immunity from civil liability for the moderation or restriction of content posted on the platform. Specifically:

No provider or user of an interactive computer service shall be held liable on account of—

    (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

    (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in [Section 230(c)(1)].

Unlike Section 230(c)(1), Section 230(c)(2) deals with a platform’s actions to restrict (or even remove) specific content that it believes in good faith meets these criteria. Most online service providers (including social media platforms) maintain acceptable use policies or provisions prohibiting the posting of such content and regularly flex their muscle to ensure compliance with those policies. Although such policies and terms are not required to trigger this immunity, most interactive computer services incorporate them as a contractual matter and to place their users on notice of the standards to which the platform seeks to adhere. As noted above, however, the problem stems not from the fact that compliance actions occur, but from whether they are occurring in good faith.

If you think I am being reactionary, think again. Facebook moderators have been caught on hidden camera claiming to delete pro-Trump posts. Further, under the guise of protecting customers from “misinformation” about COVID-19, Facebook, Google, Reddit, LinkedIn, Microsoft, Twitter, and YouTube released a joint statement announcing that they would be “combating fraud and misinformation about the virus” as well as “elevating authoritative content on [their] platforms.” Online service providers have every right to do so, and whether such actions can be taken is not at issue so long as they are taken in good faith. But who is deciding which information is “authoritative”? Why are posts being removed and accounts suspended (such as that of Dr. Li-Meng Yan here on Twitter) when her (albeit incendiary) claims about the man-made origin of COVID-19 can be weighed against alternative claims from other sources? Why not simply cite those other sources and let readers make the determination for themselves? Good questions indeed.

Call me old-fashioned, but I actually like the healthy exchange of ideas, the foundation of which requires that those ideas actually be shared, unadulterated and unfiltered, for reasonable minds to engage. I also realize that this viewpoint has its counterpoint and that Section 230 inspires passionate arguments both pro and con. In fact, I fully anticipate that some of you may disagree with the points made above, no doubt citing the history of Section 230, supportive case law, or other observations to support your position. No problem — that’s a good thing. Why? It means there is discourse, and that a (hopefully) productive community conversation can continue regarding CDA Section 230 immunity and whether it should survive as-is, be modified, or die. I promise you, I will continue this discourse and share it on social media whenever possible — whether you will be able to follow it on those platforms, however, is another matter entirely. Time will tell.


Tom Kulik is an Intellectual Property & Information Technology Partner at the Dallas-based law firm of Scheef & Stone, LLP. In private practice for over 20 years, Tom is a sought-after technology lawyer who uses his industry experience as a former computer systems engineer to creatively counsel and help his clients navigate the complexities of law and technology in their business. News outlets reach out to Tom for his insight, and he has been quoted by national media organizations. Get in touch with Tom on Twitter (@LegalIntangibls) or Facebook (www.facebook.com/technologylawyer), or contact him directly at tom.kulik@solidcounsel.com.
