Liberties and several other fundamental and digital rights organizations have argued that filtering creates an environment in which people are denied access to certain information. Filtering is a form of automated, preemptive censorship carried out in the name of protecting specific values, be it copyright, the protection of children from harmful content, or the removal of hate speech to protect vulnerable groups in our society.
However, there are several problems with mandatory filtering. Here we discuss the two most important ones.
Insufficiency of upload filters
Upload filters are automated decision-making tools built to recognize and block content. They block content according to an algorithm; however, they lack an understanding of linguistic and cultural differences and cannot accurately assess the context of an expression. They are often poorly trained, and there have been several cases in which videos were blocked on spurious grounds.
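To see why context matters, consider a minimal, purely hypothetical sketch of a blocklist-style text filter. The blocklist and function names are illustrative inventions, not any real platform's system; the point is only that token matching cannot distinguish abuse from reporting on abuse.

```python
# Hypothetical blocklist-style filter; all names here are illustrative.
BLOCKLIST = {"slur"}

def naive_filter(text: str) -> bool:
    """Return True if the text would be blocked."""
    words = {w.strip('.,!?"').lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# Abusive use: blocked, as intended.
print(naive_filter("People like you are a slur."))  # True

# A news report condemning the abuse: blocked as well, because the
# filter matches tokens, not meaning or intent.
print(naive_filter('The mayor condemned the use of the word "slur".'))  # True
```

Both inputs trigger the same block, even though one is the very counter-speech the filter is supposed to protect.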
These automated decision-making tools are trained to recognize patterns. They scan visual and audio files and block content if there is a positive match against known material, such as copyrighted works, child sexual abuse material, or terrorist content. Although they can achieve very high accuracy in identifying content, they do not understand the context in which content appears or the impact it can have on its audience.
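A minimal sketch of this matching mechanism, assuming a fingerprint database of known material: real deployments use perceptual hashes that survive re-encoding and cropping, while a cryptographic hash stands in here purely for illustration, and every name in the snippet is a hypothetical stand-in.

```python
import hashlib

# Database of fingerprints of previously identified material (hypothetical).
KNOWN_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    # Real systems use perceptual hashes robust to re-encoding;
    # SHA-256 is used here only to demonstrate the mechanism.
    return hashlib.sha256(data).hexdigest()

def register_known(data: bytes) -> None:
    KNOWN_HASHES.add(fingerprint(data))

def should_block(upload: bytes) -> bool:
    # A positive match blocks the upload outright. Nothing in this check
    # captures who is posting, why, or in what context, so quotation,
    # parody, or reporting match exactly as readily as infringement.
    return fingerprint(upload) in KNOWN_HASHES

register_known(b"bytes-of-a-copyrighted-clip")
print(should_block(b"bytes-of-a-copyrighted-clip"))  # True: blocked
print(should_block(b"unrelated home video"))         # False: allowed
```

The decision is a yes/no lookup: the filter can say "this matches known material" with high accuracy, but it has no way to ask whether the match is fair use, criticism, or news reporting.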
We all want our governments to keep us safe from illegal activity, such as hate speech or online terrorist content; however, automated content-recognition software will not solve these problems on its own. Automation will and should be used, but not on a mandatory basis and not without human review.
Privatized law enforcement
The other problem with mandatory upload filters is that they would require privately owned companies to resolve difficult fundamental rights questions, namely distinguishing protected free speech from copyright infringement, harmful content, misinformation, or terrorist content.
This solution does nothing but shift responsibility for solving society's problems onto companies such as search engines, video-sharing platforms, and social networks. But these companies lack the resources and expertise to solve them.
And it is not only a matter of expertise: mandatory filtering also cuts against their business model. For these companies, the protection of fundamental rights is not of primary importance. When forced to choose between business interests and protecting freedom of speech, businesses have a strong incentive to opt for the former and to block content whenever there is any risk that the company may be held legally liable.
Placing businesses in control of content puts free speech and freedom of information at serious risk, because it makes it harder for individuals to exercise and enforce their right to free expression.
We should not sacrifice our fundamental rights for a false sense of security. Mandatory upload filters must be avoided if we are to prevent the suppression of the freedom to access information and to express one's opinion.