
The Supreme Court Should Keep Big Tech Content Protection

This week the Supreme Court (SCOTUS) heard separate cases brought by individuals who claim that Big Tech is to blame for a number of killings in separate terrorist attacks.

The plaintiffs argue that companies such as Google, YouTube, and Twitter are responsible for the deaths because their algorithms promoted terrorist content, including violent videos and messages.

However, these firms are protected by Section 230 of the 1996 Communications Decency Act, which shields them from liability for content they host that came from some other source, a third party. The families of the victims nevertheless want the high tribunal to find Big Tech responsible despite Section 230's protection.

As sympathetic as I am to the families who are looking for someone to blame for the killings, Big Tech is not responsible, and the Supreme Court should reject the plaintiffs’ arguments.

My objection is to the assertion that the Big Tech companies' algorithms push harmful content. That framing implies an intentional, malevolent act perpetrated by the companies.

I don't believe that. These programs do what ChatGPT does: they curate from existing content across a wide swath of the Internet, and they access only what is accessible to them according to their programming.

The software doesn't consciously seek harmful content, nor does it intentionally promote such material. It is limited to the parameters built into it by its developers. As such, it is not surprising that some material surfaces that many consumers dislike or even consider abhorrent.
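The point about content-neutral ranking can be made concrete with a minimal sketch. This is purely illustrative and assumed for the sake of argument; it is not any company's actual system. The item names, field names, and the engagement formula are all invented for illustration.

```python
# Minimal sketch of a content-neutral recommendation ranker.
# Illustrative assumption only, not any real platform's algorithm:
# the ranker keys on engagement signals and never inspects what a
# video is actually about.

def rank_by_engagement(videos):
    """Order videos by a toy engagement score (views + 10 * shares)."""
    return sorted(
        videos,
        key=lambda v: v["views"] + 10 * v["shares"],
        reverse=True,
    )

feed = [
    {"title": "cooking tutorial", "views": 900, "shares": 5},
    {"title": "extremist clip",   "views": 100, "shares": 200},
]

# The heavily shared clip ranks first, even though nothing in the
# code "sought out" its subject matter.
print([v["title"] for v in rank_by_engagement(feed)])
# → ['extremist clip', 'cooking tutorial']
```

The code has no notion of "harmful"; whatever draws engagement rises, which is the article's point about unintended promotion.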

Regardless of how SCOTUS rules, efforts to rein in the major social media players will continue as the public, the press, and policymakers push these firms to remove harmful content from their sites.

If Congress wants to encourage these firms to do more about restricting what lawmakers believe is harmful content, then legislators must be able to define specifically what harmful content is.

This becomes a slippery slope.  What is harmful content?  Is there a difference between Islamic terrorist diatribes and false assertions that the United States was founded as and should be a Christian nation?

The swastika evokes anger among those who associate it with Hitler's Germany and the genocide of the Jews. But the same symbol appears in many cultures and is often associated with good fortune and prosperity.

Which is it?

Congress has demonstrated on multiple occasions that it is incapable of reaching consensus even on issues where consensus should be easy. And censorship of content is an extremely difficult issue.

The best solution is for all content to remain available. The Internet already has a host of top-level domains such as .com, .edu, .gov, and .mil. Material that many find offensive could be made available at a domain to be determined by Congress. Consumers could then easily either access it or restrict it from the eyes of their families.
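To show how simple the consumer side of that proposal could be, here is a sketch of a household-level filter. The ".restricted" top-level domain is a hypothetical stand-in for whatever domain Congress might designate; no such TLD exists today.

```python
# Sketch of a household filter under the article's proposal.
# ".restricted" is a hypothetical, congressionally designated TLD
# invented here for illustration.
from urllib.parse import urlparse

BLOCKED_TLDS = {"restricted"}  # hypothetical designated TLD

def allowed(url):
    """Return True unless the URL's host sits under a blocked TLD."""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1]
    return tld not in BLOCKED_TLDS

print(allowed("https://news.example.com/article"))    # → True
print(allowed("https://videos.example.restricted/"))  # → False
```

Because filtering keys on a single, well-known TLD rather than on content analysis, it would be easy for parents, schools, or ISPs to apply.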

The Supreme Court should not penalize Google, Twitter and other social media just because they host content that others generated. 

There are other remedies.  And those should be explored.
