On May 28, 2020, President Trump issued an executive order titled the Executive
Order on Preventing Online Censorship.
Does this order risk abridging your
First Amendment rights? Well today we're
going to talk about the executive order,
and the law it addresses, and what it
means for you. Coming up on Legal Bytes!
So before I get into the executive order
itself, I want to first set the table by
talking about the law that's in place,
the law that the executive order is
actually aimed at. To do that, I need to
go back just a little bit further and
talk about the legal landscape that existed before that
legislation was enacted. Before the
internet existed there really were only
a few avenues for information about
current events,
chief among them, newspapers and TV news
channels. Now, as I'm sure you're aware,
newspapers exercise editorial control over their content:
they screen their articles for accuracy
and other quality checks long before
they're ever published.
One reason they do this is
that if they were to publish an
article about someone, and that article
turns out to contain false
information, and that false information
ends up causing harm to the reputation
of that person, the newspaper itself
could also be liable for defamation.
While defamation is a tort claim that can
vary from state to state, generally
speaking, there are four main elements
that a plaintiff would have to prove in
order to show defamation. Number one,
they'd have to show that the defendant
made a statement of fact; number two,
that the statement was false; number
three, that the statement was published,
meaning it was made to a third
party; and number four, that the
publication of that statement actually
caused harm to the reputation of that person.
Defamation includes libel, which is
defamation in a permanent form, meaning
in writing, for example. And it also
includes slander,
which is spoken. So note that the newspaper is
being held liable for something that
another individual did: the author is
the one who wrote the article that
contained the false statement.
That's called secondary liability. It's
liability that attaches to a person or
an entity when someone else performed
the harmful conduct, but where that
person or entity played some role and
had some responsibility for ensuring
that the harmful conduct
didn't actually happen. So for a
newspaper, for example, the newspaper has a certain amount of control over the
articles and the quality of the articles
that it publishes. With that control
comes a certain responsibility to make
sure that the content is not harmful.
When the internet came around,
everyday people were suddenly
able to create their own platforms and
reach increasingly large audiences
through websites, chat rooms, and
eventually social media. One consequence
of that development is that the
information being spread is now
user-generated, as opposed to coming
from an editorial board that is able to
screen that information. Now on the one
hand, this user-generated content is
great when it comes to genuine
self-expression and accurate
on-the-ground reporting.
However, a question started to come up:
what happens when user-generated
content aims to defame, harass, expose,
or otherwise harm other users or other
people? Who's to be held responsible for
that? Even before social media platforms
like Facebook, Twitter, YouTube, Instagram
were created, courts were trying to
wrestle with this question. And the case
law that developed in the 1990s really
started to hinge on something called
editorial control, just like what we
were talking about with the newspaper.
If a website or an internet provider
generally exercised more editorial
control, it would be treated
more like a newspaper publisher
and face more liability. On the other
hand, a website or an internet provider
that did not generally exercise as much
editorial control would be treated
more as a distributor of the content. So
as opposed to the newspaper publisher
that's exercising editorial judgment, on the
other hand you have someone treated more
like the newsstand out on the
street.
This obviously created a dilemma when it
came to offensive or harmful content, and
it started to look like one of two
extreme outcomes would result.
On the one hand, harmful and offensive
content stays on the internet forever,
because websites are fearful of taking
down any content at all, since doing so
would mean they were moderating that content.
Then, on the other hand, you had internet
providers that did choose to moderate
but then came down hard on any and all
comments that were even remotely
offensive. Neither scenario is
particularly great. So in 1996, Congress
passed the Communications Decency Act,
whose key provision, enacted as Section 509
of the Telecommunications Act of 1996, was
later codified as 47 U.S.C. § 230. I'm
going to refer to it as Section 230 for
shorthand for the rest of this video.
This is the law that many credit as
having basically given rise to the
internet in general, and in particular to
social media platforms like Facebook,
Twitter, YouTube, and Instagram. So what does
Section 230 say? Well, the meat of it is
in Section 230(c), which has
two parts. The first part says that no
provider or user of an interactive
computer service is to be treated as the
publisher or speaker of any information
provided by another information content
provider. In other words, if Joe Schmoe
were to get into a nasty breakup with
his girlfriend and then post defamatory
content about her on his brother Bob
Schmoe's Facebook wall, the only person
that would be held liable for that is
Joe Schmoe. Not Facebook, and not Bob. The
second part talks about civil liability
in taking down content.
Specifically, it says that no provider or
user of an interactive computer service
will be held liable for two types of
action taken in good faith. The first
action is restricting access to or
availability of material that the
provider or the user considers to be
obscene, lewd, lascivious, filthy,
excessively violent, harassing, or
otherwise objectionable, whether or not
such material is constitutionally
protected; or, actions enabling or making
available the means to restrict access
to that kind of material--for example,
Facebook giving you the ability to
delete a comment or a post that's on
your wall. Taken together, these two
sections mean that if Facebook, or
Twitter, or other online social media
platforms were to become aware of some
content that is harmful or offensive,
they wouldn't be held liable for taking
down that material, and they also
wouldn't be held liable for other
content just because they have a regular
habit of taking down that kind of
material as well. It's important to note
that the language that's used in the
statute is very broad and confers a very
powerful immunity for online platforms.
In particular, the fact that a platform
can take down content because it is
"otherwise objectionable" means that it
can take down content for reasons
beyond the categories specifically listed in
the statute: lewd, lascivious,
harassing, and so on. The only
requirement is that the content be objectionable,
which some people would say can be kind
of subjective. And another important
thing to note is that the immunity from
liability also extends to constitutional
claims which is a very powerful shield.
And case law has confirmed the wide
breadth of this immunity, and not just
in defamation cases. From
defamation, to false information, to
negligence over sexually explicit
content, to even terrorism-related claims,
courts have for the most part upheld
this immunity from secondary liability
for social media platforms and other
internet providers. Okay, so that brings us to the
executive order itself. The policy
section of the executive order goes on
to talk a lot about the
importance of free speech under the First
Amendment and the fact that social media
platforms like Facebook and Twitter and
YouTube and Instagram have grown so
large and that they now hold a lot of
power in shaping public discourse and
shaping interpretation of public events
and determining what people do or do not
see. It also talks a lot about claims of
political bias in moderating
content and determining what people do
or don't see based on the particular
viewpoint of the person generating
that content, sometimes with and
sometimes without justification based on
the Terms of Service. It also touches on
claims that have been made about some
platforms' cooperation with foreign
governments and hiding human rights
abuses or spreading propaganda and
misinformation abroad. But in terms of
what it actually does in practical terms,
there's no real immediate effect for
social media platforms or their users.
It mostly directs executive
agencies and departments to take certain
actions. For example, it directs the
National Telecommunications and
Information Administration to file a
petition for rulemaking with the Federal
Communications Commission to clarify the
good faith portion of section 230. The
executive order also seems to give
examples of activities it thinks are
probably not taken in good
faith: for example, situations where a
platform might be taking actions that
are deceptive, pretextual, or inconsistent
with the platform's Terms of Service or
where a platform fails to give the user
a reasoned explanation for taking down
content, or fails to give a meaningful
opportunity to be heard before taking down
that content. Then it also
directs the National Telecommunications
and Information Administration to look
into the kinds of circumstances in which
a platform's failure to act in good
faith could cause it to lose its
immunity. It also directs
the Federal Trade Commission and the
Attorney General to work in separate
groups on enforcement of the laws
concerning deceptive and unfair
practices related to an online platform's
public representations of its
practices. And finally, it directs the
Attorney General to propose federal
legislation to promote the policies of
the executive order. My overall
impression of the executive order is
that it probably has the end goal of
creating legislation that's going to
amend section 230 in order to limit the
liability immunity for online platforms--
particularly, maybe, in one of two ways.
The first way is probably to create the
ability for constitutional claims to be
heard against them; I mean, it talks about
the First Amendment a lot. And secondly,
it talks a lot about how the immunity
related to taking content down should be
interpreted narrowly. Particularly, it
talks about how the good faith element
should really be the focus of any kind
of analysis in determining whether or
not an online provider is entitled to
that immunity. So if I had a crystal
ball I would predict that an amendment
to section 230 would probably be aimed
at creating some sort of responsibility,
or some sort of mechanism for putting
responsibility, on social media platforms
to justify the removal of content in
one way or another, and at allowing
constitutional claims to be heard. But,
then again, all of this could just be
political saber-rattling. So what do you
guys think? Is there going to be some sort
of an amendment to Section 230, or a
revocation of it? If you have
any thoughts on it put them in the
comments below. I hope at the very least
that you found this informative. If so,
please go ahead and like the video and
if you want to see more videos go ahead
and hit the subscribe button and hit the
notifications bell so you can see when
the next video is coming. Thanks!
