I’ve gotten a ton of questions about the recent decision by YouTube to demonetize the account of Steven Crowder after examining his content, and it’s something I’ve been thinking about a great deal.
So I want to spend all of this week’s Friday mailbag responding to this issue.
Here’s one version of the question from Jeff:
“What’s your take on the Steven Crowder/YouTube situation? This seems somewhat like the Colorado baker case. Can the government force YouTube to serve (i.e. monetize) videos with which it disagrees? I disagree with the government’s decision in the baker case, but it seems like the precedent would apply here as well should those demonetized content creators like Crowder take this to the courts.”
Here we go with my answer:
I am utterly fascinated by your question because I think it touches on so many undecided legal issues in our modern era.
The question we are really asking is this: how do private companies, which to a large degree base their businesses on user-generated content, determine who has the right to use their Internet platforms and who does not? (The second question you are asking is about monetization, which is a further step in the analysis, but less pertinent to me for now. I’m less troubled by the monetization decisions and more troubled by the decision of whether to allow someone to use a platform at all.)
The private companies would say that everyone is subject to their user agreements and that they reserve the right to “deplatform” or “ban” anyone they find in violation of their terms, but is that really the rule we want applied? After all, does anyone actually read those user agreements? And aren’t they written in a way where a private company can basically decide to do whatever it wants without recourse?
Of course they are.
I think we need a better solution than simply saying a private company can decide to do whatever it wants based on a user agreement that most people don’t read or understand.
What’s more, in an age when Twitter, Facebook/Instagram, YouTube/Google, and Apple are essential communication devices for most people, should content-based discrimination be permitted at all? (I think it’s fair to say that content-based discrimination is occurring. The majority of those being deplatformed are from the right wing, but that could change in the future. As a first amendment (and boobs) absolutist, I believe we need a set of guidelines that works for everyone.)
I don’t think there’s an easy answer here, but I do think all of these technology companies are in the government’s crosshairs now, over both left- and right-wing political issues, because everyone recognizes the incredible power these companies have, and there’s also the collective sense that they aren’t getting enough right.
Whether you’re upset with the election meddling, privacy failures, vile discourse, or just plain inaccuracies that are rampant on social media, I think everyone would agree that things need to get better. I also think a form of governmental regulation is inevitable, but I’m not sure what form that will end up taking or what form that should end up taking.
I’ve got a big idea solution at the end of this column, but before I break this down further, let’s think about other communication devices and how the law has treated them.
While many like to pretend that history doesn’t exist, the Internet isn’t our first massively adopted communication device.
So before we get to more difficult questions let’s start with one I think most of us can reach agreement on: should a telephone company be able to bar someone from getting a telephone because they don’t like what that person is saying on the telephone?
I think virtually all of you would answer no to this. (Provided, of course, the person isn’t imprisoned). That is, a telephone company shouldn’t be able to stop a free individual from getting a phone line or buying a cell phone because they find the comments that are being made on that phone to be distasteful. (Criminal conduct, clearly, could change that, but we’re not dealing with criminal conduct in 99.9% of Internet deplatforming controversies).
So I think this is pretty straightforward: I would imagine no one reading this right now thinks a phone company should be able to refuse service to someone based on what they will say in their phone conversations. Now, granted, most of these calls are “private” in nature, but a phone can be used to broadcast public comments too. For instance, a person with a phone could call my radio show and talk to a massive nationwide audience there. A person with a phone could also call into live television and broadcast his thoughts that way. A phone can also be used to address a large audience as when, for instance, a coach calls in for his weekly press conference with national media.
A phone is, essentially, a conduit to allow the distribution of your opinions or information to a group, either large or small. When you buy a phone you are basically buying the ability to broadcast your opinions to a private or public group of your choosing.
And, again, I have never heard of a company refusing to service someone with a phone based on what they say on phone calls.
As a society we have decided that everyone should have access to phones.
Okay, so let’s move on to the Internet.
Assuming you can pay for the service, have you ever heard of any company refusing to provide Internet access to a customer based on how the customer might use the Internet? (Now, read carefully, a company can choose not to provide service to a particular area because they don’t believe it’s profitable — high speed Internet can still be an issue for many in rural areas — but that decision isn’t being based on how those people are going to use the Internet. In other words, it isn’t a content-based discrimination.)
But has any company, for instance, refused to provide Internet access to any person in New York, Los Angeles, Chicago or Houston because of how that person is using the Internet? (Again, this assumes non-criminal use of the Internet).
I don’t think so.
In terms of access to the Internet, we have treated it essentially like we’ve treated phones: if you can pay for it, you should have access to it.
And I think most people out there reading right now, regardless of your politics, would agree that access to the Internet and phones, essentially to communication devices in general, shouldn’t be prohibited based on how you will use those devices so long as you aren’t violating the law.
What’s more, I don’t think websites can ban you from visiting them once you have Internet access. That is, I haven’t heard of Twitter, Facebook, YouTube, or Apple banning someone from ever being able to visit their legal sites. (This is in America. In China, and many other foreign countries, they regularly block sites and even search terms they don’t like. But the Internet in America has remained “free.”)
So what about the next step: should companies like Twitter, Facebook/Instagram, Google/YouTube or Apple be able to ban people who violate their user agreements? That’s even more complicated, because these companies have simultaneously fought for two different powers: protection from responsibility for user-generated content because they didn’t create it, while also arguing they can delete user-generated content they don’t like.
Now deciding a question like this gets harder because we don’t have unanimity of opinion anymore.
I think the answer is, in certain cases, yes, companies should be able to ban users. But not in most cases. And I think these decisions should always have to be transparent and clear in the cases of popular users who are banned.
Now let me explain why I have these opinions.
The key, to me, is that the first amendment’s marketplace of ideas has pretty much moved online. That is, if you want to advocate and influence policy, much of that debate now takes place on the Internet. You need Twitter, Instagram, YouTube, Google, and Apple to be able to reach your audience.
I’m increasingly of the belief that platforms like Facebook, Amazon, YouTube, and Twitter have become the equivalent of digital town squares. That is, their very popularity can best be analogized as the equivalent of town squares back in the day, before we had modern communication techniques.
Public debates used to take place in town squares back in those days. There is substantial legal history relating to town squares and what’s permitted in them, but, and this is a key distinction, most town squares were publicly owned physical locations in public venues, unlike these digital platforms which are privately owned in a non-physical place.
But what’s fascinating is there is a legal precedent that could fit our existing situation — what happens if a private company owns a public square? Isn’t what we basically have online now akin to a town square in a company town that’s owned by the company?
The Supreme Court examined a case like this in Marsh v. Alabama, in which a woman argued she had the right to distribute literature outside the post office in a company-owned town. The Supreme Court ruled then, and this is from a Wikipedia article on the case, that the town could not prohibit the woman from distributing her fliers outside the town square post office in a company town:
“The Court initially noted that it would be an easy case if the town were a more traditional, publicly administered, municipality. Then, there would be a clear violation of the right to free speech for the government to bar the sidewalk distribution of such material. The question became, therefore, whether or not constitutional freedom of speech protections could be denied simply because a single company held title to the town.
The State attempted to analogize the town’s rights to the rights of homeowners to regulate the conduct of guests in their home. The Court rejected that contention, noting that ownership “does not always mean absolute dominion.” The court pointed out that the more an owner opens his property up to the public in general, the more his rights are circumscribed by the statutory and constitutional rights of those who are invited in.
In its conclusion, the Court stated that it was essentially weighing the rights of property owners against the rights of citizens to enjoy freedom of press and religion. The Court noted that the rights of citizens under the Bill of Rights occupy a preferred position. Accordingly, the Court held that the property rights of a private entity are not sufficient to justify the restriction of a community of citizens’ fundamental rights and liberties.”
If I were advising Crowder and others who have been deplatformed by these tech companies, I’d tell them to sue and cite the Marsh precedent. The argument: these platforms have become so integral to the first amendment’s marketplace of ideas that the tech companies don’t have the power to make these decisions, because in doing so they are violating the rights of citizens by excluding them from the digital town square.
That’s especially the case when the entire business model of these companies is for users to produce their own content. In other words, what would happen to each of these companies if user opinions didn’t exist? Then essentially they wouldn’t exist either.
Ultimately I believe there’s a very strong legal argument that Twitter, Facebook/Instagram, Apple, and Google/YouTube are the equivalent of company towns trying to restrict access to their digital town squares.
(I also think there’s even a property argument to be made here on behalf of users. Consider my Twitter account, for instance. I’ve spent years building up my Twitter audience on their platform, and it’s very valuable to me. What if Twitter just took that away from me and alleged I’d violated a term of their agreement just because they didn’t like what I was Tweeting?
I might well sue to retain the right to my account.
Now it might end up taking years for that suit to be decided, but I think the companies would be very nervous about the precedent I might set in the event I won.)
Each of these companies is facing an incredibly difficult question: what’s acceptable speech on a public Internet forum?
And as much as I’d like for the answer to be “everything,” I don’t agree with that either. I think there need to be some restrictions on what videos, for instance, YouTube posts.
Let me give you an example from my own life: my kids use YouTube all the time. I don’t want my 11-, eight-, or four-year-olds watching videos teaching them that vaccines cause autism, or that they should treat someone differently based on their race or religion. I want YouTube to make sure those videos aren’t on its site, especially not for kids to see.
Now you can argue that shouldn’t be YouTube’s responsibility, but I disagree. YouTube has a powerful algorithm that feeds my kids, and your kids, videos one after the other that it expects they will like. These videos often have millions and millions of views. I don’t think it’s too much to ask for YouTube to be monitoring popular videos.
Now you can certainly argue, “Well, Clay, you should be sitting beside your kid watching every video he watches on YouTube,” but I don’t think most parents agree with that. First, it’s nearly impossible to monitor every minute of every video my kids watch online, especially if you have multiple kids on multiple devices like I do, and second, my watching what my kid watches doesn’t change whether the video exists for others to see online.
So what content should be banned then?
I hate to sound like Justice Potter Stewart based on the obscenity cases before the Supreme Court, but I know it when I see it.
Which leads me to an interesting idea: what if these companies created their own Supreme Courts of content review and paid lawyers to sit on those courts and hear cases? You could make half the panel Republicans and half the panel Democrats and then let the CEO break any ties.
Treat it just like a Supreme Court, only have it decide, in a public fashion, difficult questions confronting the companies as it pertains to content.
If someone with a substantial public following challenges a company decision they should be entitled to a public hearing and a public review of their case. All of the users of that platform should be able to read the ultimate decision to better understand what is permissible and what’s impermissible on the platform.
The rules should be applied fairly and evenly and not discriminate based on content.
That way instead of hiding the decisions and providing inconsistency, you’d have an open, clear and transparent precedent that existed for exactly what kind of speech was allowed and what kind wasn’t.
Because I think what’s going to ultimately happen is that these issues are going to keep arising, and what bothers people, myself in particular, is the haphazard nature of these decisions.
It isn’t that I think the people running these companies are trying to make bad decisions — I think their intentions are good — it’s that I think they have so many things they are working on that it’s hard to make good decisions on complicated cases.
There’s a famous legal aphorism, “hard cases make bad law,” and I think it applies to many of these incidents.
I also think, to be fair, that Silicon Valley isn’t filled with that much intellectual diversity so I think you end up with an echo chamber effect. That’s why I don’t think it’s a surprise that a liberal community like Silicon Valley is disproportionately restricting “conservative” speech.
So why not create your own Supreme Court made up of half Republicans and half Democrats, all of whom would be trained lawyers, and let them sit as judges over cases involving popular users on the platform?
The rulings they set down would exist as precedents to follow going forward and would go a long way towards solving many of the issues facing these companies today.
It may be a bit of a radical idea, but I think big changes are coming for big tech.
The solutions may well need to be radical, but I believe the Supreme Court established in Marsh v. Alabama that these tech companies can’t merely cite their user agreements as evidence that they can do whatever they want.
That’s just not true.
Thanks for reading Outkick.