DANIEL VAUGHAN: YouTube’s purge chills speech for everyone

June 9, 2019

YouTube joined its Silicon Valley compatriots in banning, demonetizing, or restricting accounts that it accuses of “promoting white supremacy, Nazism and other bigotry-boosting ideologies, as well as those denying that violent events like the Holocaust or the shooting at Sandy Hook Elementary took place.” It’s the latest example of tech companies attempting to police the internet by restricting access to anything they deem “hate speech.”

This effort won’t work. It won’t work because no algorithm or artificial intelligence platform will ever be able to split hairs between what is and isn’t hate speech. Computers can’t draw that line because we can’t draw it ourselves, and every country that has attempted hate speech regulation has ended up censoring and prosecuting ordinary citizens.

The YouTube purge began after Vox writer Carlos Maza targeted right-wing personality Steven Crowder. Maza posted a long thread on Twitter complaining to YouTube that Crowder had attacked him with “homophobic and anti-Hispanic slurs.”

While some channels targeted by YouTube’s purge were racist or neo-Nazi, others were educational or news outlets. YouTube’s purge hit Ford Fischer, a video journalist who reports on extremism in U.S. politics. He posted videos about various supremacists, documenting what they said without endorsing any of it. The purge also swept up scholarly videos containing historical and archival footage of figures like Adolf Hitler.

The algorithms couldn’t tell the difference between history, journalism, and people engaged in promoting the ideas of these evil groups.
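To see why context-blindness is the failure mode here, consider a deliberately simplified, hypothetical sketch: the crudest form of automated moderation is a keyword filter, and a keyword filter cannot tell a journalist quoting an extremist apart from the extremist’s own post. (The blocklist, function names, and example texts below are illustrative assumptions, not anything YouTube actually runs.)

```python
# Hypothetical illustration: a naive keyword-based moderation filter.
# It flags any text containing a blocklisted phrase, with no notion
# of who is speaking or why.

BLOCKLIST = {"white supremacy", "nazism"}

def flag(text: str) -> bool:
    """Return True if the text contains any blocklisted phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

extremist_post = "Join us in promoting white supremacy."
news_report = ("The rally's organizers openly promoted white supremacy, "
               "witnesses told reporters.")

# Both are flagged: the filter sees the phrase, not the intent.
print(flag(extremist_post), flag(news_report))
```

Real systems use statistical models rather than literal blocklists, but the underlying problem is the same: the signal they learn from is the surface of the text, while the difference between endorsement, reporting, and history lives in context the model doesn’t see.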

This failure is the least surprising outcome ever. It was a foregone conclusion that YouTube would end up harming speech that wasn’t hateful. Nor is it the first time algorithms Google designed have gone wrong.

In 2015, Google rolled out an algorithm to help write captions for its vast repository of images. The goal was for the system to scan the photos and then train itself to label those images correctly without human intervention. There was just one big catch: somewhere along the way, Google’s system started labeling photos of black people as gorillas. And after the issue was identified, Google had no answer:

Google said it was “appalled” at the mistake, apologized to [Jacky] Alciné, and promised to fix the problem. But, as a new report from Wired shows, nearly three years on and Google hasn’t really fixed anything. The company has simply blocked its image recognition algorithms from identifying gorillas altogether — preferring, presumably, to limit the service rather than risk another miscategorization.

Google has no idea why its programs started racially categorizing pictures. The applications, built by humans, were supposed to perform straightforward tasks with no racial bias.

Or take the new trend among police forces: predictive policing. An MIT Technology Review report found that police departments around the country were training Silicon Valley-built systems on bad data, which in turn led to racial profiling by police.

Perhaps the most entertaining example of artificial intelligence systems failing was Microsoft’s Tay chatbot. Microsoft tried to design an AI system capable of having conversations with humans. The company had to shut it down after one day of operation because it turned into a literal Nazi, claiming in posts that Hitler was right and that it “hated Jews.”

Watching YouTube’s systems target journalists and educators while trying to curb “hate speech” is the least surprising result — unless you’re Carlos Maza. After targeting Crowder and igniting the firestorm, he claimed in a tweet to be shocked: “I don’t understand how YouTube is still so bad at this. How can they not differentiate between white supremacist content and good faith reporting on white supremacy?”

Because, Maza, we have years of evidence documenting Silicon Valley’s failures here. Not because these companies are bad at designing software, but because writing code that intelligently parses human speech and meaning is not an easy task.

Leave it to one of the Vox bros to prove they’re illiterate on all things tech and censorship. They may be the stupidest set of people to call themselves wonks.

But ultimately, the reason none of this works is that “hate speech” regulation is both 1) impossible to apply consistently and 2) bound to censor “normal” speech. Europe supplies examples of both. In Austria, a court convicted a woman under blasphemy laws for comments about Islam, ruling that her words constituted an offense worthy of criminal penalty.

And in London, police have arrested hundreds of people for “offensive” Facebook and Twitter posts. “Offensive” means whatever the hearer or the state defines it to be.

This censorship is the future of hate speech regulation, whether by tech companies or by the state. People in the U.K. can be jailed, while people in the U.S. may have their careers destroyed if they happen to violate whatever falls under the hate speech umbrella this week.

Social media allows anyone to post anything that’s on their mind at any time, in the heat of the moment. The inevitable result of the left’s push to get tech companies to regulate “hate speech” is a world where a single slip-up earns you all the damnation society can heap upon you.

U.S. law explicitly rejects the very idea of hate speech regulation. It’s impossible to define and subject to abuse of power by whoever gains control. Liberal tech companies seem poised to doom us to learn that lesson the hard way. Adopting free speech culturally, and not just legally, is our only hope.



Daniel Vaughan

Daniel Vaughan is a columnist for the Conservative Institute and lawyer in Nashville, Tennessee. He has degrees from Middle Tennessee State University and Regent University School of Law. His work can be found on the Conservative Institute's website, or you can receive his columns and free weekly newsletter at The Beltway Outsiders. Connect with him on Twitter at @dvaughanCI.