I was torn on whether the headline would say “suicide” or “unaliving,” and that itself is a pretty good place to start. Content sharing and streaming platforms are messing with language itself in the name of protecting children. Vague policies on what’s considered appropriate subject matter are broadly enforced by automatic systems that pick up raw words without context, demonetizing or outright penalizing videos and channels that use those words. So now everyone’s afraid to use those words.
So instead of frankly talking about suicide, they say "unaliving." Instead of frankly talking about sexual abuse, they say "grape." And don't you dare mention gambling in your video after the DraftKings ad plays!
I've been thinking about this subject for a while, wondering how best to discuss it. And while I was thinking about it, Stephanie Sterling captured most of what I wanted to say in a 25-minute video that went up just a few days ago, following up on a video she put out last July. So watch those videos. She's spot-on. I'm still going to talk about it myself, though.
Like so many things claiming to be done for children's safety (like the recent wave of age verification laws), subject policing on sites like TikTok, Twitch, and YouTube is completely performative, irresponsibly automated, and does more to hurt the very people it claims to be looking out for. It isn't just not helping; it's actively making kids less safe.
Subject and language rules on these platforms are intentionally vague because the people who run them have no interest in actual, active moderation. It's hard work to consider context when deciding whether a video is appropriate. Complex topics and linguistic subtlety take real effort to understand. So instead of having anyone actually try (and paying them for the work), it's easier to use shoddy pattern-recognition systems to pick out any time a bad word is said or shown on screen and automatically disable ads, or even punish the channel for daring to show it. Saying "this is a list of words you cannot say if you want your video to make money" and "this is another list of words that will get your channel banned" would lay bare how stupid the process is, so instead we get opaque guidelines about what's appropriate and have to glean some standards out of them.

It actually gets worse when you look at the specifics of those guidelines. On the subject of suicide, for example, YouTube says it "may consider" whether videos that discuss it should be restricted or censored depending on whether the video is educational or artistic, is in the public interest, or graphically describes self-harm. Yes, it lumps all of those factors into the same list of things it might think about limiting or allowing in some form. Like I said, vague.
But what is a specific "do not post" subject? Anything "related to suicide, self-harm, or eating disorders that is targeted at minors." Related to. Because teen suicide, especially among vulnerable groups like trans teens, shouldn't be mentioned at all.
Thus, suicide is unalive and rape is grape, because without the bad words the censorship won’t trigger. And now it’s harder to actually talk about any of these subjects on these platforms because you have to dance around them using infantilized language. You can’t sound serious, because sounding serious requires using all of those bad words, and we can’t have kids hear those bad words.
Fuck, banning expletives isn't as bad as what's being done with the subjects of self-harm and sexual assault, because at least you can still communicate functionally and honestly without dropping f-bombs. You can't talk about the forces that contribute to sexual abuse and suicide without using those terms.
But you can make those subjects harder to take seriously by requiring cutesy language to acknowledge them in the first place. You can undermine how significant they are by turning them into dumb little jokes.
Meanwhile, the actual factors that contribute to those problems? They're totally fine. You can't say rape, but you can promote "masculinity" and "tradition" in ways that enable and justify everything from grooming to date rape. You can't say suicide, but you can make jokes about the unaliving rates of trans people. You can spread as much toxic garbage as you want, as long as you just don't use those bad words.
But if you want to discuss those subjects because you actually want to address them and prevent those things? Well, fuck you.
The really insidious part of this is how much of it is self-censorship. Remember, the guidelines are vague. They don't actually tell you what will get you in trouble. They also aren't enforced with any consistency (a fundamental trait shared by fascists doing law enforcement and AI doing subject categorization). Does saying grape and unalive work? Is it even necessary, or can you say the real words like you're a grown-up and these are serious subjects? Does anyone even know? What creators do know is that appealing any decision is difficult at best and impossible at worst.
Whether the methods and outcomes are intentionally evil or just horrifically negligent, I don’t know. It doesn’t matter, because the damage is being done either way.
And that’s fucked up.