WASHINGTON, D.C. – Section 230 has been a hot topic throughout 2020, with some notable moments in the spotlight in recent years as well.
But ever since President Trump and other notable conservatives began taking aim at Section 230, largely out of frustration with big tech censorship of certain content on social media platforms, talk of repealing the law has circulated throughout 2020.
I’ll admit that I was one of those proponents carrying the proverbial pitchfork, too, citing reasons why certain social media platforms shouldn’t enjoy the protections afforded by Section 230 when they decide to censor legally protected speech as defined by the First Amendment.
And yet, I have come to the realization that I was wrong.
Section 230 should not be repealed. That doesn’t mean it should not be altered, but any alteration must be made with meticulous care.
Here’s why I have recently come to that conclusion.
It boils down to two main factors: a misunderstanding of the protections Section 230 affords internet platforms, and the fact that repealing Section 230 could have the exact opposite effect of what proponents of the idea think it will accomplish.
Section 230, compared to other pieces of legislation, is a short read, taking the better part of five minutes to get through the entire text of the law. It is also relatively easy to understand.
The meat of the law, where the legal protections come into play for any internet platform, be it Facebook, LinkedIn, Twitter or 4chan, is found in 47 U.S. Code § 230(c), dubbed “Protection for ‘Good Samaritan’ blocking and screening of offensive material”:
“(c) Protection for “Good Samaritan” blocking and screening of offensive material
“(1) Treatment of publisher or speaker
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
“(2) Civil liability
“No provider or user of an interactive computer service shall be held liable on account of –
“(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
“(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).”
If you have been paying attention to the discourse around Section 230 over the past few years, you’ve likely heard the “publisher vs. platform” argument used as a reason to strip Section 230 protections from companies that arbitrarily remove or censor content that is protected speech under the First Amendment.
The argument usually goes that when, for instance, Facebook flags something you post as going against “community standards,” even though what you posted did not violate any laws, the company is at that point acting as a publisher, because it is picking and choosing what can and cannot be present on the platform.
And, as the argument typically goes, if a social media platform shows any sort of bias in removing content that does not violate any state or federal laws, then that company should be treated as a publisher and no longer as a platform.
Here’s the thing: Section 230 (as it was written in 1996) allows platforms to remove objectionable content even when that content is speech protected by the First Amendment.
Section 230 states clearly that platforms can “restrict access to or availability of material” covering pretty much anything the platform finds “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
And as frustrating as that may sound, it is simply how the bill was written at the time: to give online platforms the same luxury that physical businesses enjoy, allowing platforms, which are businesses, to foster the type of environment they want in their business.
Section 230, in a nutshell, provides internet platforms the same type of protections that every other business people physically patronize gets to employ.
Take, for example, the following scenario: you are sitting inside a movie theater trying to enjoy a film, and a group of obnoxious people keeps interrupting it throughout the entire showing.
Interrupting a movie is not illegal, and the people interrupting the film are merely exercising their First Amendment rights.
To remedy the situation, all one must do is get ahold of the manager on duty, who will tell the disruptive group to quiet down, a.k.a. censor them, and warn them that if they continue to be disruptive, they will be removed from the premises.
Those defending Section 230 protections for online platforms equate social media’s ability to suppress certain types of speech with escorting a disruptive patron out of a movie theater.
But, if we are being intellectually honest, someone disrupting your movie-going experience is not the same as someone posting content that another individual (or the platform itself) finds objectionable or offensive.
The way every modern social media platform works nowadays gives users the opportunity to tailor their experience however they like.
They can block individual users, follow or unfollow specific people or pages, and even set profile preferences to be served the types of content they are interested in.
There’s no user interface like that inside a Harkins Theater or a McDonald’s where other patrons can magically mute or un-see offensive people or acts.
That is why physical businesses should still be allowed to censor and remove boisterous or offensive people from the premises: other patrons are unable to consent to that experience or block it from their line of sight.
But outright repealing Section 230, instead of making some adjustments, would spell bad news for conservatives, liberals and everyone in between.
Repealing Section 230 could completely ruin the desired outcome; the more realistic result is that just about every form of speech would get crushed on online platforms.
Let’s say Section 230 protections are completely repealed for social media platforms, as has been the rallying cry of a handful of prominent conservatives.
It will not magically make big tech censorship go away, and it may make big tech censorship even worse.
You see, Section 230 makes it so a platform cannot be held liable for the content posted by the users of the platform. Basically, if someone writes something libelous about you or anyone else on Facebook or Twitter, the platform cannot be sued for the libel; only the individual who wrote it can be.
However, if Section 230 simply gets repealed, then the companies could be held liable for every single libelous statement posted on their platforms.
And if that became a reality, you can best believe that every single social media platform would effectively ban just about any kind of speech to cover their assets.
Sure, companies like Facebook and Twitter might be able to survive that, as they have the funds to hire numerous content moderators and editors to screen every submitted post and decide whether to host it.
But what about Parler?
Well, that company would effectively be gone, since it is so new and hardly monetized at this point. Its appeal would also vanish completely, since the free speech it touts could no longer exist on the internet once the platform could be held liable for libel and the spread of misinformation.
That is why we must instead redirect efforts toward changes that would compel companies that host online platforms, like Facebook, simply not to stifle any speech that does not strictly violate state or federal laws.
If we are to tamper with Section 230, it must be done with the utmost care.
We already have a real-world example of an amendment to Section 230 in recent years that wound up forcing companies to stifle free speech.
That example comes in the form of a law signed in 2018 known as the Allow States and Victims to Fight Online Sex Trafficking Act, or “FOSTA”.
FOSTA started out as two separate bills, the Senate’s Stop Enabling Sex Traffickers Act (SESTA) and the House’s FOSTA. Eventually, the two were joined together and the combined package simply became known as FOSTA.
The legislation removed Section 230 protections for companies that “knowingly assist, facilitate, or support sex trafficking.” It was aimed mainly at the website Backpage, which hosted an escorts section.
There were some instances where users were posting escort ads that advertised minors. Obviously, sex trafficking of minors is a bad thing, and taking measures to stop that is commendable.
But what happened as a result was that platforms like Craigslist and Reddit, which were not even remotely centered on promoting escort services, ended up having to remove sections akin to personals and casual encounters, since those could potentially be used to advertise illegal sex trafficking.
Since they didn’t have the resources to moderate every single posting on their platforms in that regard, they simply wiped out those sections completely.
If Section 230 is tinkered with again in a manner that inadvertently restricts legal speech on platforms, we can rest assured that free speech on the internet will become even more difficult to attain.
The digital “Times Square” that we hope to achieve isn’t going to come from completely repealing Section 230 protections for companies like Facebook, Twitter and the rest of the social media platforms.
We should instead push to have language present within Section 230 which states:
If an entity wants to operate a social media platform, they must host all viewpoints (that do not violate any laws) on their platform. Simultaneously, these platforms must ensure that end-users can create their own experience in the form of following, unfollowing and blocking content that the individual does not wish to see.