There is no doubt: today, big tech has an enormous impact on our lives. From what we buy to where we go, and even what we feel, almost everything we do is shaped by what search engines and platforms show us. It even affects our democracy: if what we can and can’t say, read or watch online is limited by big tech censorship, then we are less free.
Free speech cannot be absolute
Limits on free speech are not, per se, bad. We do not want to live in a society with totally unrestricted speech, where, for example, someone can shout threats of violence at you in the street. The old adage still rings true: a person's freedom ends where another person's freedom begins.
But these are principles that have been inscribed by our elected representatives, our parliaments and governments, into our legal systems. It is then up to judges to determine when the rules have been broken. The problem is that online, the big tech platforms themselves have become the governments, the police and the courts, perhaps unwillingly. They make their own rules about what people are allowed to say. And it is tricky and expensive for individuals to turn to the courts when it looks like a tech company has violated their free speech. These companies answer first and foremost to their own shareholders, and their number one concern is maximising profit, not making sure that the internet serves as a place for healthy, well-informed, democratic debate. So when a private company gets to decide what is and is not acceptable online, that creates tensions for a democratic society, and people may be left wondering why Facebook or Twitter accepts some things said by others while censoring them.

Big tech companies block and remove content showing child sexual exploitation or militant terrorists executing their victims. We agree that this material goes beyond free speech, because it is downright dangerous. But those are clear-cut cases. What do we do with conspiracy theorists who claim vaccination is an attempt by government to control us? Their claims are dangerous, because they undermine our society’s ability to protect itself from the virus. But should big tech be deciding what kind of content gets taken down here and what content can stay up?
Donald Trump
Donald Trump was perhaps the highest-profile example of somebody whose communications were cut off by social media. Many people feel that Twitter acted far too slowly, given that Trump had spent years, including before becoming president, stirring up division and racial hatred. When Twitter did remove his account, it said that this was because Trump was encouraging a violent insurrection against Congress. Many critics said that Twitter waited until Trump was on his way out because the company was worried he could have found a way to punish it while he was still in office.
Whether or not you agree with the outcome, is this really the way society should decide what amounts to free speech and what crosses the line? These decisions belong with independent courts whose judges rule according to the law, rather than companies worried about their bottom line.
How censorship works
Currently there are few, if any, legal challenges to the power that the likes of Twitter have. Should one company have so much power?
At the moment, Twitter and Facebook have codes of conduct that they apply to what users post. For example, any content that promotes self-harm is against their rules. You can’t post adult content, violent imagery or sensitive images either. The rulebook has grown heavier over the years.
Now companies face fines for allowing people to post content that breaks certain rules, for example content that breaches copyright, promotes terrorism, or shows child sexual exploitation. Nobody wants illegal content online. But the problem is accurately identifying unlawful content. If the platforms were to take down all such content themselves, they would need an army of human checkers. That costs money. A lot of it.
Their solution has been to turn to algorithms which automate the detection of this kind of content. But again, lots of content which should not be online finds its way there, while legitimate content, which documents war crimes for example, gets taken down. A.I. might be able to spot clear-cut cases, but it cannot reliably distinguish perfectly legal content that is satirical, educational or artistic from the genuinely bad stuff. And because companies are trying to protect their profits and avoid fines, they make their algorithms err on the side of caution, which means a lot of legitimate free speech gets censored.
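To make the "err on the side of caution" point concrete, here is a minimal, purely hypothetical sketch of threshold-based moderation. The risk scores, the threshold values and the example posts are all invented for illustration; real platforms use far more complex systems than this.

```python
# Hypothetical illustration: a platform removes any post whose "risk score"
# from an automated classifier exceeds a threshold. Lowering the threshold
# (erring on the side of caution) helps avoid fines for missed illegal
# content, but sweeps up legitimate speech as false positives.

# Invented scores a classifier might assign: (post description, risk score)
posts = [
    ("terrorist recruitment video", 0.95),          # clearly illegal
    ("news report documenting a war crime", 0.55),  # legal, but looks similar
    ("satirical post mocking extremists", 0.40),    # legal satire
    ("holiday photos", 0.05),                       # obviously harmless
]

def moderate(posts, threshold):
    """Return the posts an automated filter would remove at this threshold."""
    return [text for text, score in posts if score >= threshold]

# A cautious (low) threshold removes the illegal video, but also the news
# report and the satire - legitimate speech is censored.
print(moderate(posts, threshold=0.3))

# A higher threshold spares legitimate speech but risks missing borderline
# illegal content - and with it brings the fines the platform wants to avoid.
print(moderate(posts, threshold=0.7))
```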
If you get content taken down or your account banned
If you have content removed from a social media platform, or worse, your account taken down, you have the right to receive a justification. The platform should normally provide one, though most of the time you do not get a very precise reason. If you disagree with the vague reasoning, there is an appeals process, but it is rare to get a ban overturned. And taking a case to court is slow and expensive. Most of the time, you are not even sure what to challenge, beyond the fact that you were censored.
This can be a problem if you have built up a large following or if you rely on your social media presence for your business. And you can bet that this gets abused in some countries, where political rivals manage to get their opponents banned or their content removed.
The solution?
So far, the EU has made some rules about what Big Tech platforms can’t allow on their social media channels, and these are backed up by fines. But this incentivises Big Tech to err on the side of caution and censor legitimate free speech. And most of what we are allowed to say, spread or watch on social media is decided by the Big Tech companies themselves. Several years ago, the EU drew up a code of conduct together with Big Tech and social media platforms. But that remains voluntary, and all the responsibility lies with Big Tech. It still allows Big Tech to set the rules about what users can and can’t do on their platforms. Big Tech has too much power, and probably doesn’t enjoy wielding tools of censorship anyway.
The solution is for governments and the EU to change the rules so that they protect us against censorship. We have a right to free speech whether we’re offline or in the online world. It should be our elected representatives, acting in the interests of the public, who write the laws about behaviour in the online world, not companies, acting in the interests of shareholders.

This doesn’t mean that companies don’t play a role in policing their platforms. But decisions about what users can say can’t be left entirely to algorithms. Platforms make huge profits. And more of this income should be ploughed into hiring more staff to review decisions to block or take down content or ban users. It should always be possible for an individual to get a decision checked promptly by a human. On top of this, individuals should be able to appeal to the courts quickly and cheaply. This system could also be funded through taxation on big tech platforms.
Internet platforms have become the equivalent of our town squares, hosting much of our public debate. It may be OK for tech companies to make money from the services they provide. But these platforms have become public spaces that have an impact on our society and our democracies. And this means that they have to be governed by laws made by our representatives and run in a way that doesn’t damage democracy.
When Big Tech draws up its own rulebook, it means that our speech and our democracy are being governed by an unelected tech firm motivated primarily by profit. That’s not the kind of democracy we want, is it?
Image credit: History in HD