On Content Warnings

I am going to rotate content warnings around like an object, casting them in different lights, and examine their various implementations, justifications, and effects on online spaces in particular. I hope to arrive at a balanced dialectic on content warning best practices.

American Libraries: Labels are not neutral

In 2015, the American Library Association adopted the current iteration of our position statement on labeling systems as they pertain to the Library Bill of Rights. The ALA, and library schools in accordance with the ALA's official position, caution librarians against the use of labeling systems which are not purely directional in nature. Even labels which appear to be purely directional can still subtly function as prejudicial labels, steering readers toward or away from certain books based on a subjective evaluation of the moral appropriateness of a given title.

Tangent on "African American Collections"

An ongoing controversy in many public libraries is over whether libraries should have an "African American Collection." The arguments in favor are that it highlights books by a group that is otherwise underrepresented in publishing, and thus makes it easier to find books by Black authors. Many patrons will ask for the section by name and are upset when they are told that the books are not separated out by race. The arguments against are that these collections tend to be dominated by "Urban Fiction" pulp paperbacks, and so the "African American" spine sticker ends up creating an association between all Black authors and a particular—stigmatized—genre. While it may seem like a purely directional label, it indirectly results in all Black authors in the library being tucked away with the other "genre sections" such as Romance, which are perceived as crude, lower in quality, and less appropriate for all audiences. Being Black is not a genre, and books in this section always circulate worse than the general collection. People who aren't Black don't go to the "African American collection" to browse. Having to deliberately seek out the place where the labeled books are shelved discourages browsing them, even though the labels never explicitly say "Warning."

I used to have a white supervisor who was beyond retirement age, and who adamantly supported having an "African American section." He told me that before it was there, those books just weren't in the library at all. Books by Black authors were perceived as inherently crude and low quality. There were concerns about appropriateness for patrons, and the effects of such "crude content" on the minds of young adults in particular. There were concerns about bringing "the kinds of people who would read those books" into the library.

When Flyy Girl by Omar Tyree came out, it was incredibly popular among teenagers and young adults in predominantly Black neighborhoods of Philadelphia. However, libraries would not collect it. It was considered inappropriate for youth. You had to go to a Black-owned bookstore to get it. My former supervisor, among other librarians, argued that collecting Flyy Girl alongside other "African American Literature" would get young Black people into the library and reading books, which builds literacy and keeps them off the street. For this, white supremacist groups issued death threats against librarians advocating for the inclusion of Black authors in the library.

Eventually, a compromise was struck. Flyy Girl would be collected, but it would not be shelved with the white books. "African American literature" would be given its own section and labeled as such. Toni Morrison and James Baldwin would be shelved alongside "Urban Fiction" books—but they would be collected. Even though Flyy Girl is a young adult novel, it was labeled and shelved as adult. The persistence of these collections into the present day has been controversial. Patrons expect books by Black authors to be shelved together, and older librarians fear that without its own collection the books won't end up in the library at all due to their contents.

The label appears to be neutral at first, but it ultimately comes from a place of imposing subjective moral judgements onto books. It is not a "normal book"; it is an "African American" book.

Have you seen that section? All those books are about sex and gangs. Very violent and crude.

Some libraries have replaced their "African American collection" with an "urban fiction" section, and only put books marketed as urban fiction in that collection, while other Black authors are shelved in the general collection. While this certainly sounds better, patrons do not think of it as the urban fiction section and they don't call it that. The way the question is always phrased to me is "where the Black books at" or sometimes "where the hood books at."

But "Urban fiction" is still a racialized genre. I'm not going to say that Kwan and Lady J should be held in as high esteem as Percival Everett but the genre conventions that distinguish them are still racial in nature. Most Urban Fiction books are romance novels or thrillers. They are very similar in quality and tone to Harlequin books. I used to joke that the romance section is the "white romance section" and the urban fiction section is the "Black romance section."

What is the difference in genre conventions? Your Harlequins star a white lady living in the suburbs who meets a duke and is whisked away to an obscure European country where she overcomes the challenges of her low birth to marry the love of her life. Your Urbans star a Black woman living in a low-income redlined neighborhood who meets a white doctor who whisks her away to Windsor, Connecticut where she overcomes the challenges of her low-income birth to marry the love of her life. As for the "male" books: your James Pattersons star a white cop who goes undercover to investigate and bust a gang. Urban Fiction books will have a gang member who goes undercover in a rival gang in order to steal a payload. It's still just a story about two gangs having a fight; in one version, one of the gangs happens to be on the government payroll. The difference is whose daily life is the basis for the book, not what kind of story is being told. Still, James Patterson is shelved in general fiction with the likes of Camus. Patterson is a "normal" (white) author. Kwan is a Black author.1

This tangent will be relevant, I swear.

Labels as Warnings

If a book is labeled as having sexual content by the library, this cannot be neutral. It subtly cautions the reader against the book, and creates the feeling that a title is dangerous or trashy. To be given a content label is to imply that a book may be undesirable or dangerous, while books without content labels are presumed tame and appropriate for all audiences.

You can tell which gay romance novels are not intended for gay people, because they will have large warning labels advertising "Warning! Hot steamy M/M smut inside!" This may sound positive, but it frames the contents as something to be wary of. These warnings do not appear on heterosexual romance novels. A judgement has been made that homosexual content is potentially offensive and requires a warning. Once labeled, it is no longer a normal book. LGBT lives are potentially offensive, but duke romance books do not carry the same warnings.

Libraries do not apply content warnings to books. The publisher or author may do so, but it is the position of the ALA that it is not the responsibility or place of librarians to tell other people what they need hidden from them or what they need to be cautious of.

MPAA, ESRB etc. Content Ratings

Movies

The Motion Picture Association of America implemented a content rating scale in 1968 at the end of the Hays Code era. Self-regulation by the industry was a response to the threat of government censorship. By labeling films with their contents and age suitability, the industry left parents to make their own decisions.

MPAA ratings are entirely based on a subjective judgement of what age someone should be to engage with certain content. The ALA does not believe in telling anyone what is and is not age appropriate. Parents can make their own decisions and do their own research, but the ALA refuses to have professional staff participate in that process. The MPAA has a suggestion, but frames it in the language of "parental guidance." Content may be inappropriate, but it's up to you as the parent to decide. Notably, once you are an adult, all content is deemed appropriate. There are no MPAA content ratings for people over the age of 17.

Subject matter which contributes to the MPAA rating system includes:

  • Violence
  • Language "beyond polite conversation"
  • Drug use
  • Nudity
  • Sex

This list thus captures what American culture deems to be offensive, immoral, and inappropriate. These five things are for adults only, whose maturity and constitution allow them to be exposed to such matters without harmful consequences to their psyche. That is the cultural narrative.

Television

In 1996, the United States Congress mandated that new televisions be equipped with a chip allowing parental control of what TV stations and programs could be received. A rating system, introduced in 1997, would classify all programs, allowing parents to technologically prohibit their children from accessing certain ones. Notably, news and sports are exempt from ratings—as are commercials.

The TV rating system includes special categories under each age, for further customization of what is appropriate for your child. For instance, TV-Y7 means you have to be 7 to watch it, but there's also TV-Y7 FV meaning there is "fantasy violence." What an American term.

Contributing factors and content labels for television are:

  • Suggestive dialogue
  • Coarse or crude language
  • Sexual situations
  • Violence

The maximum rating is "Mature." Maturity is framed as what allows one to safely be exposed to all content.

Video games

The Entertainment Software Rating Board (ESRB) began for similar reasons to the MPAA ratings. A moral backlash over violence in video games threatened to result in government regulation of content akin to the Hays Code. The ESRB would self-regulate the industry, allow for parental discretion, and stave off censorship.

The ESRB ratings begin with "E for Everyone" and conclude with the rare "Adults only" which only applies to games affected by pornography laws. Most games cap out at "M for Mature." Age is the framing device, but if you think your kid is really mature, then sure they can have access.

The ESRB additionally puts specific content warnings on the cover next to each rating. These can get incredibly specific to an amusing degree.

  • Alcohol Reference
  • Animated Blood
  • Blood
  • Blood and Gore
  • Cartoon Violence (discontinued)
  • Comic Mischief
  • Crude Humor
  • Drug Reference
  • Fantasy Violence
  • Gambling Themes
  • Intense Violence
  • Language
  • Lyrics
  • Mature Humor
  • Nudity
  • Partial Nudity
  • Real Gambling
  • Sexual Content
  • Sexual Themes
  • Sexual Violence
  • Simulated Gambling
  • Strong Language
  • Strong Lyrics
  • Strong Sexual Content
  • Suggestive Themes
  • Tobacco Reference
  • Use of Drugs
  • Use of Alcohol
  • Use of Tobacco
  • Violent References
  • Violence

This is by far the most extensive list thus far, and with it we see the addition of new moral vices. Gambling, rape, and "mischief" now require content warnings.

What all of these ratings have in common is the assumption that adults can handle anything they see, and only need content warnings in order to make paternalistic judgements on behalf of children, who are assumed unable to handle seeing some content.

NSFW and Shock Sites

In the early days of the internet, it was rare to own a personal computer or have access to home internet. Most users accessed the internet via association with a university or employer. Computers are still expensive, but were especially so back then. Most people casually browsing were doing so from a public place where others could be around.

Urban legend has it that in 1998 a woman posted on the snopes.com forum that users should be careful with their language because "British school children might see it." Users began to label content as NSFBSK: Not Safe for British School Kids. This acronym spread across the internet and eventually evolved into "Not Safe For Work." Don't open this content while at work.

By the early 00s more people were accessing the internet from home with the rise of AOL and RCN, so this contrast came to be meaningful in a way it was not before. Before, everyone was at work, and everyone was an adult. Now, different users could be different ages and accessing the internet from different contexts. When I was in kindergarten, I was using computers at school. My father actually worked for a failed tech startup focused on making a separate internet for kids. The home computer was on the rise, and we could no longer assume that everyone online was likely a college student or adult working professional.

NSFW has been a controversial term in recent years, with more and more people replacing it with terms like "lewd." The term has been criticized for being non-specific, for implying the content is "not safe," i.e. dangerous, and for failing to consider that different people have different kinds of work. What counts as "work-inappropriate" differs from job to job.

I do like the NSFW label for one thing: it is pretty neutral and context-specific. There's nothing wrong with this content, you just might get in trouble for looking at it on a work computer. The baseline assumption is not that the content will harm you because of your psychology, just that it's not exactly professional to look at this stuff when you should be doing spreadsheets.

One issue I've had with NSFW is people getting really pedantic about what is and is not NSFW. If my employer saw me looking at a big boobed anime lady with barely the slightest bit of fabric covering the nipples, then I would still get in trouble. That content is NSFW. But some people will argue that it's not NSFW because it isn't showing the nipples, and therefore would be rated M for Mature and not AO for Adults Only. NSFW is treated as a binary rating system for what is and is not appropriate for children, rather than what will or won't get you in trouble for looking at while at work.

Often uninterrogated is the idea that looking at a female-presenting anime nipple is psychologically harmful, whereas it's safe and harmless if the nipples are covered. Massive furry bulge struggling to burst through a jockstrap is safe; it will only cause irreparable damage if that cock liberates itself from its confines.

Shock sites

The Internet was once full of sites that existed for no other purpose than to shock you. Goatse is the most famous one, along with meatspin and two girls one cup. These websites were pranks. Send them to someone and give them a shock. There was a pretty strong homophobic component underlying a lot of these shock sites. I remember being pretty intrigued by meatspin and not quite as disgusted as the straight boys around me performed to be.

Some content, even if not psychologically harmful, is still shocking to see. It's emotionally impactful if you weren't expecting to see it. This expectation that a hyperlink might lead to something shocking further encouraged utilizing NSFW tags. We've all experienced that moment of opening a link and immediately trying to close it as quickly as possible.

One person's shock is another's pleasure. The internet has bred new fetish art unlike anything anyone could have imagined before: Candyvore, dronification, mechabare, guro, hyperinflation, hyperstagflation, size theft, zero stroke, and more.2 What one person seeks to avoid and finds shocking, another seeks out actively. The content exists to meet demand.

The increased usage of warnings online thus becomes a way of mitigating shock. You are being told what something is so that you cannot complain when you open it and get exactly what you were told to expect.

Forums and Spoiler Tags

With the rise of Internet forums, we also saw the usage of the spoiler tag. Spoilers used to be a huge deal online. Remember "Snape kills Dumbledore?"

In order to discuss media without causing fights over spoiling each other, web forums began implementing spoiler tags. Wrap your text in [spoiler]the appropriate tags[/spoiler] and the text will appear as a black box. Click the box and the text appears. Simply write your content warning of "Farscape Season 3 Spoilers" and now the reader knows if clicking through will spoil them or not.

As humans do, people immediately used this for comedic effect. Comments made in spoiler tags mid-sentence created a beat: comedic timing, or the vibe of a Shakespearean aside to the audience.

Some forums implemented spoilers as a collapsing box. Click the button and it would expand to reveal paragraphs of text. Many forums used this for organization of text, saving space, and so forth. There's nothing objectionable, it's just long. Perhaps multiple chapters of a serialized story posted in the same forum post.

Tumblr Savior and Obama Era Social Justice

Jump forward to 2010. Teenagers and young adults on tumblr are navigating how they want to exist online, having grown up in a world of content ratings. Anything could appear on your dashboard, often porn, and Tumblr Savior was developed as the must-have browser extension.

Tumblr posts could be tagged to increase visibility. Tags organized posts on your blog and made them searchable. Tags did not appear in the body of the post but in small, subtle grey text beneath it. In library science we call this system of organizing information a "folksonomy."

Tumblr users also used tags for making quiet comments to their followers that they did not want reshared as a full addition. Again, as an aside to the audience.

Tumblr Savior allowed you to put tags on a blacklist. Blacklisted tagged posts would be collapsed, spoiler-style, showing only the tags but not the post. Tumblr users began tagging posts for the sake of adding content warnings instead of for discoverability. If you wanted something to be spoilered for mention of rape, but not be discoverable on the "rape tag," then you would tag it something like "tw rape," "rape tw," or "rape tw ////"3

A wonderful thing about Tumblr Savior is that if you were looking to hide a certain fandom, those posts would already be tagged for the sake of discoverability, and you did not usually need to ask someone to tag it. Sometimes you would ask someone to tag something, which occasionally led to controversy, but a lot of it was seamless.

Tumblr's culture was shaped by people who were or had just recently been teenagers. A lot of its users were not adults.

Tumblr Savior's rise coincided with the rise of the Trigger Warning, a label given to warn of common PTSD triggers. This is a very different framing of content warnings than in previous eras of the internet. Previously, those who would be harmed by seeing content were "British school kids," or perhaps it would raise the ire of the boss. Perhaps you would be shocked or annoyed. This is the first time that the consideration was that you, the reader, might be psychologically harmed by seeing content.

The developer framed things a bit differently than the user base did:

Tired of posts about the pandemics filling up your dashboard? Hate hearing about a particular politician's latest blunders?

If you just want to hide posts about certain topics, Tumblr Savior is here to save you. Just add your most despised terms to the black list and Tumblr Savior will valiantly protect your delicate sensibilities. And if you wonder what got hidden, there’s a handy link to show you.

Tumblr Savior saved you from annoyances, according to the developer. It came with default blacklist tags which were not universally objectionable. In 2020, it was patched to add "coronavirus" and "Trump" to the default blacklist.

Tumblr Savior was often used in conjunction with Missing e, and then XKit supplanted both. The Tumblr Savior function was the most popular and highest-priority feature in XKit.

By this point, these extensions were not perceived as merely managing annoyances but as a piece of assistive technology. People with epilepsy could use XKit to hide flashing images. People with PTSD and phobias could hide their triggers.

I can't find the original default lists anywhere, but to my memory the default was to hide NSFW, porn, nudes, gore, blood, rape, suicide, self-harm, sexual assault, and flashing images.

Survivors of sexual assault were at the center of the discourse around this.

When I was a recent survivor of sexual assault, I remember adding tags to my tumblr blacklist just on the assumption that it would help me. My actual main PTSD triggers had been mundane things like blue flannel, unexpected touch, the smell of a certain laundry detergent, and a certain mustache that was in fashion. There was no avoiding these things. But I figured I was a Survivor now so I should add those tags and maybe it would help me.

Mostly it just highlighted posts containing them and created the world's most tempting button encouraging me to open the posts anyway and look at them. But supposedly having the "advance warning" would soften the shock.

Campus Politics and the Trigger Warning Backlash

When the tumblr generation, my generation, went to college, there was a new expectation faculty were unprepared for. We grew up on the ESRB and MPAA, forum spoilers, and Tumblr Savior. The world was always warning us of content before we consumed it. We were the children born in the 90s whom legislators were concerned about protecting with TV content ratings. Now, we were adults, and there were no more content ratings.

The world the faculty grew up in assumed that only children required content ratings, and adults could handle anything. When these faculty were students, they were the ones coining "not for British school kids." It never occurred to them that their students would eventually be kids who grew up in a world where everything warned of distressing content on its label. Everything online warned you that it might shock you. It just seemed socially normal!

I don't think that any of us students at the time believed ourselves to be so fragile that we needed to be protected from comic mischief and allusions to sex. Most of us found all the content ratings to be a bit silly. But we did perceive them as normal and reasonable. So when some of our peers had lived through violence, primarily violence against women, it felt reasonable to us to request a heads up if we were going to be reading something particularly upsetting that might trigger someone's PTSD.

The generational clash resulted in trigger warnings becoming a major component of the culture war at the time. There was little room for nuance. Either you supported survivors of sexual assault and feminism and therefore had no qualms around trigger warnings, or you were probably a literal fascist whose complaints about trigger warnings were a dog whistle about not being able to say the N word anymore. Alternatively, supporting trigger warnings meant you were a fragile immature adult who didn't want to be challenged, and opposing trigger warnings made you a warrior for free speech.

In retrospect this conflict was blown up so much more than it needed to be. We eventually landed on a compromise where students could privately request faculty give them content warnings for a specific subject on the syllabus and a rare few could get ADA accommodations to read alternative materials. Progressive faculty took the stance that most students would not be able to avoid reading the assigned works, but that a content warning gave a moment to prepare emotionally.

The progressives used the existence of content ratings as a major argument for the reasonableness of trigger warnings, and began using the phrasing "content warning" instead of "trigger warning" as a rhetorical strategy to conflate content rating labels with content warnings. "It's perfectly reasonable to give a heads up" became the main argument.

Content ratings began from conservative concerns about the corruption of children and the need for parents to control what their children were exposed to. Now, adult progressives put content warnings on everything as a matter of course and in-group signaling.

Are content warnings harmful? The right wing has made arguments that they are. I am doubtful of the arguments that they make.

Entire books have been written about trigger warnings in an academic context. Research has been done into their psychological impact. Every study I found basically found nothing conclusive or statistically significant. They don't really seem to impact anything for anyone one way or the other in any consistent measurable way. They did not prevent anyone from being triggered who was going to be triggered by reading that content. They did not trigger anyone who wouldn't have been triggered. They don't really even seem to deter anyone who would have been triggered from reading the content with a warning on it when offered an alternative.

There is no downside to the inclusion of content warnings and no benefit. They are a neutral action when it comes specifically to psychology and mental health for the vast majority of people. There's nothing we can conclusively say happens or does not happen.

How long do you need trigger warnings and "amateur exposure therapy"

One of the most common back-and-forths about trigger warnings goes as follows:

Shammai: People with PTSD and phobias need trigger warnings to function in society so that they do not have panic attacks.

Hillel: Avoidance of potential triggers for fear of having a panic attack is actually a symptom of PTSD in the DSM. We should not be enabling and encouraging avoidant behavior. People struggling with trauma triggers need to go to therapy and focus on being able to go through life without being avoidant.

Shammai: What, so you're their therapist now? Are you qualified to treat them? Are we endorsing amateur non-consensual exposure therapy being foisted upon people at random? Exposure therapy should be conducted by a professional in a careful measured manner if and when someone elects to go through with it. Showing arachnophobes random pictures of spiders is cruel and pointless.

Hillel: Anything could be a PTSD trigger for someone and we cannot construct a world that enables anyone to potentially avoid anything forever. People need to sit in their discomfort and work through their issues. It's not my problem. If we make it too easy to avoid the triggers forever, they'll never be motivated to overcome their triggers in therapy.

Shammai: It's not discomfort. By making it impossible to avoid their triggers you are making their trauma into something disabling when it does not have to be. It's not hurting anyone to just take a split second to add a content warning. Take a moment to care about other people.

This dialogue never leads to a place of agreement. The fact is that PTSD can be quite disabling if you have to avoid something unavoidable. Not everyone's PTSD triggers are so easily avoided. Trigger warnings on content about racism are not going to enable a person of color to avoid experiencing racism.

Trauma triggers causing panic attacks are generally a temporary experience one goes through during the acute stages of PTSD, and eventually one learns to tolerate them. They are not generally a permanent state that one struggles with forever.

What is merely a moment of shock for some readers may be deeply triggering for others. But labels are not neutral. By labeling something with a trigger warning we state it is normal to be upset and triggered by it, and not normal to be triggered by other things.

And that button is so very tempting to touch. When presented with a trigger warning, most do not avoid, they advance anyway. It may dissipate the shock, but it does not change their level of emotional distress.

Twitter and Rot13

Twitter does not have any sort of tagging system, tag filters, spoiler tags, etc., so we users invented our own.

Rot13 is a simple cipher anyone can use. Each letter is replaced by the letter 13 places ahead of it in the alphabet. Because the alphabet has 26 letters, applying rot13 twice returns the original text, so the same operation both ciphers and deciphers. To make the cipher physically, just get two wheels and overlay them, writing the alphabet around them. Rotate one wheel thirteen places and each letter will now align with its rot13 counterpart. Tools online can automatically cipher and decipher rot13 text for you in a split second.
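
For the curious, here is what that looks like in code. This is my own minimal sketch in TypeScript, not the code of any particular rot13 tool:

```typescript
// Minimal rot13: rotate A-Z and a-z by 13 places and leave everything else alone.
// Because 13 + 13 = 26, applying the function twice round-trips to the original,
// so the same function both ciphers and deciphers.
function rot13(text: string): string {
  return text.replace(/[a-zA-Z]/g, (char) => {
    const base = char <= "Z" ? 65 : 97; // character code of "A" or "a"
    return String.fromCharCode(((char.charCodeAt(0) - base + 13) % 26) + base);
  });
}

console.log(rot13("Snape kills Dumbledore"));        // "Fancr xvyyf Qhzoyrqber"
console.log(rot13(rot13("Snape kills Dumbledore"))); // back to the original
```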

Users on Twitter would use rot13 as their spoiler tags and for putting trigger warnings on potentially upsetting tweets about self harm and the like. The ease of ciphering and deciphering made it quick and simple to peek through. It was a bit of a clunky workaround but it worked. And so rot13 came to be associated with spoiler tags.

Mastodon and Content Warnings

When Mastodon first launched, it did not have the content warning system it is now known for. Most users came over from Twitter, and so we imported the Twitter practice of using rot13 to cipher and decipher our text.

Mastodon user @jk@mastodon.social (the rotating coyote) wrote a bookmarklet that instantly converted all text on screen into rot13 and back again without refreshing your feed. This made it easier than ever to spoiler your text, and so people used rot13 liberally.
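
I do not have the original bookmarklet's source, but a hypothetical version is easy to sketch: walk every text node on the page and rot13 it in place. Because rot13 undoes itself, clicking the bookmarklet a second time restores the original text without reloading anything:

```typescript
// Hypothetical sketch of a rot13 bookmarklet (not @jk's actual code):
// rot13 every text node on the page in place. Running it again reverses it.
function rot13(text: string): string {
  return text.replace(/[a-zA-Z]/g, (char) => {
    const base = char <= "Z" ? 65 : 97;
    return String.fromCharCode(((char.charCodeAt(0) - base + 13) % 26) + base);
  });
}

const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
let node: Node | null;
while ((node = walker.nextNode())) {
  node.textContent = rot13(node.textContent ?? "");
}
```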

I recall that even NSFW writing and lewd kinky sentences were put behind rot13, even though it was only words. I believe that the global timeline (as it was called at the time, before we split it into local and federated) provided a major motivation for this. Since everything was being shown to everyone, there was an increased level of courtesy about what people might or might not want to see. Mastodon used to be a bit more slow-paced, so most users kept the global timeline column open on the side to get some extra content and meet new friends.

Blackle Mori coded the first iteration of Mastodon's content warning system. I recall also being involved in the design somehow but I honestly don't remember in what way. I had my paws in everything back then. Eugen approved it and Mastodon gained what became one of its most well known features—for better or for worse.

Blackle's CW design was incredibly simple. It was universally agreed upon by everyone involved in Mastodon development at the time that this simple design was what made sense. There is a button in the compose box labeled "CW." Click on it and a box appears. Type anything in the box and now your post is collapsed like a forum-style spoiler box, with your content warning on top. Now you can post anything you want and allow others to opt into seeing it after having received an adequate warning.

This was initially the most universally beloved feature of Mastodon, something that Mastodon had that Twitter did not, and a lot of people joined Mastodon because of it. It was common to hear Mastodon referred to as "Twitter with content warnings," which, given the culture war around trigger warnings in 2014–2016, told many people everything they needed to know.

"CW Discourse" quickly became an eternal presence on Mastodon. The first CW discourse came when people inevitably used CWs to make jokes. Your CW was the set-up, and the post body was the punchline, creating that comedic timing. Sometimes people made jokes using false content warnings, labeling their delicious hoagie as "lewd" for instance. This led to discourse about if it is disrespectful to people with PTSD to use CWs so frivolously and if unreliable CWs meant people would start opening CWs expecting something to not actually be what's on the tin, and then being shocked anyway.

People found it so difficult not to click the button that shows the post that filtering was eventually implemented which hides the post entirely rather than spoilering it.

There were complaints that Mastodon users were so liberal with CWs that people had to click to open every single post. Some users responded to this by saying that CWs should be treated as "the subject line of an email" and that we should probably just CW every single post by default.

Being able to CW posts led to an increased sense of freedom to discuss things others might not want to see, but it also led to pressure to avoid potentially upsetting anyone.

Mastodon's culture emerged in the moments immediately after Trump's election in 2016, amid the feeling that the internet was an eternal battle for space against neonazis. A lot of people were exhausted and distressed by thinking about the news. And since not everyone wanted to see the constant discussion of US politics, the "uspol" and "current events" CWs emerged.

It became expected that all posts about politics be hidden behind a CW. It felt impolite to discuss politics openly, and uncovered political posts received less engagement. All current events required this CW, including events in one's own highly politicized life.

On tumblr, tags were added by the reblogger, so they knew what trigger warnings had been requested by their own followers, who in turn hid those tags via their own client-side blacklists. On Mastodon, you could not curate your tags for a known audience, and what was hidden for one user would be hidden for all users. This meant anything that any individual might wish to have hidden behind a CW, even if that individual did not follow you, needed to be hidden just in case it was federated or boosted onto the timeline of someone who would want that CW.

On tumblr, tagging food was for organizing posts, and had the added effect that someone who wanted to hide all food from their timeline could put food on their blacklist. The tagging of food was neutral because it did not inherently hide the food; it just described it as being food.

On Mastodon, all food was hidden, even if you did not want to hide it. This was not neutral. Food now felt scandalous and dangerous to discuss and share.

"Eye contact" or "scopophobia" became culturally mandated tags which were applied to all pictures of all faces, even if the eyes were not actually looking at the camera. I am sure that people with scopophobia exist but I had never once actually seen someone request that others tag eye contact for their own sake, it was only ever justified as being for a hypothetical other. Sometimes it was attributed to being "for autistic people" which always felt infantilizing to me as an autistic person.

Many people joining Mastodon would report a similar experience: They made an account and posted a selfie in their introduction post. They would immediately get people in their replies telling them to go fuck themselves and hide their face behind a content warning with a word they didn't know. I witnessed this occur a number of times, and it was never "Oh hi, welcome, it's good to see you, FYI you should tag eye contact in photos"; it was always rude and aggressive. Since "scopophobia" is not a commonly known word, this meant a lot of people had their first interaction on Mastodon be someone telling them that they should hide their face behind a trigger warning lest it upset somebody. That comes across as an insult to one's appearance. A lot of people who had this experience were people of color, and interpreted the interaction as their race being the issue. White people might not want to see Black faces, so hide them, please.

When you are a person who experiences racism in your daily life, it's quite challenging to tag all mentions of politics. Your life is politicized. White lives are safe for general audiences, but Black lives need to be given a mature content rating in case it upsets a white person to read about racism. These labels aren't neutral. A value statement is implied by their labeling.

Mastodon's intense CW culture, where everyone constantly argued about what should and should not be hidden behind a content warning, resulted in the "meta" tag for posts about Mastodon or posts which were part of site-wide discourse. The worst days on Mastodon had your entire timeline be nothing but "meta"-tagged posts, which were often vague and confusing. All of this was commonly cited as a big reason why Mastodon felt immediately hostile to BIPOC. What is more white than taking issue with people discussing politics at the dinner table?

People began to fight endlessly about whether white people should CW racism too, or whether only BIPOC were exempt from CWing racism. Was it perhaps actually valorous and important to forcibly expose people to certain posts instead of using CWs? The arguing was exhausting and never stopped.

After all, Mastodon is decentralized. Different instances had different rules around what needed a content warning, and when those instances interacted you would get inter-communal conflicts debating whether they should defederate because their CW practices were incompatible. New people and instances were always joining Mastodon, and every time it would reignite every single CW discourse topic over and over again.

Years into this, the consensus among Mastodon early adopters was that we had messed up. Our original CW design was backwards. We imported our Twitter workaround and turned it into a feature without substantial modification. The onus was on the poster to filter out what every stranger might not want to see, instead of on the reader to filter out what only they did not want to see. We discussed how tumblr did this correctly. Posts should have tags at the bottom, and users should opt into filtering those tags, rather than opting into seeing them.

"It triggers me to read about racism, as a white person with anxiety"

I told you that the urban fiction tangent would come back.

One problem on Mastodon was the outsized influence of the fragile, self-infantilizing white person with anxiety who would quickly become aggressive and mean when something made them anxious. In retrospect, much of the performed fragility and sensitivity was emotional manipulation and classic white women's tears. Some of the cruelest and most epithet-laden things said on early Mastodon came from people who were quick to identify as sensitive souls who needed to be handled with care.

Much of the demand for the intense CWing of everything even slightly upsetting came from these people, or was supported for their sake. They came to Mastodon to get away from thinking about politics and the plight of the unfortunate—and they were aggressive about keeping the peace. It was incredible how often I learnt that these people were extraordinarily wealthy and working in tech or cybersecurity. I was living off $23k/year and devoting much of my time to organizing anti-ICE protests. I had trauma and pain too. But I was confronting the horrors of the world and directing my anger at the government, while they directed their anger at people who reminded them that the horrors of the world exist.

Why is an urban fiction romance story darker than a harlequin romance story, when the primary difference is the race of the protagonist? Because the lives of Black women are generally more challenging than the lives of white women. You can write about white women falling in love without writing about their position in a violent empire. If you do the same when writing about Black women, the absence is conspicuous. When Talia Hibbert does not write about systemic racism in her romance novels, it is "escapist fantasy romance."

A white woman can post about her daily life on social media and never have to use a content warning to hide politics or violence.

A Black woman's life is politicized by a white supremacist society. She cannot write about her daily life without inevitably having to choose between writing about systemic racism or actively eliding mention of race from her posts.

If we label every post about racism, one person's life remains shelved in general fiction, browsable by all people. Another person's life will end up filtered out, so as not to upset the sensitive souls.

There are many BIPOC who may want CWs for racism because they don't want to be constantly reminded that the world is built to oppress them. But it should be something they choose to filter, not something they are mandated to filter out for the sake of white people.

In the name of inclusion for the anxious, Mastodon ended up with crowds of angry white people telling BIPOC to stop talking about racism, or to at least do so in a way that is out of the way and unobtrusive.

There are many other groups whose lives are politicized. There are many people whose daily lives would require a content warning lest they upset a stranger. Oh if only when you walked down the street, every impoverished panhandler was hidden behind a curtain reading "cw mh-, poverty, homelessness, addiction, drugs, unsanitary, abuse, desperation, bedbugs, severe dental disease." The effect of telling someone online to hide their life behind a CW is to tell the marginalized to retreat further out of sight and into the margins so you can enjoy this beautiful weather without thinking about such dreary affairs.

Cohost and Spoiler Tags

Before Cohost was launched, Jae Kaplan and I had a conversation or two about the design flaw in Mastodon's CW system. Avoiding Mastodon's CW culture was seen as important. When Cohost launched, the content labeling was split into three different features, sketched roughly after the list below.

  1. 18+ content was tagged with a special unique flag which labeled it as adult content. This would hide the post behind a label stating that the content was for adults only, though this click-through could be disabled in one's personal settings. Minors could not see these posts. Accounts that mostly post porn could be set to apply this flag to all of their posts automatically.
  2. Posts could be given folksonomy tags for organization and discoverability like on tumblr.
  3. Posts could be given spoiler tags like on Mastodon which collapsed the entire post behind a content warning.
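
To make the separation concrete, here is a rough sketch of how these three concerns can sit side by side on a single post. This is my own illustration, not Cohost's actual data model:

```typescript
// Hypothetical post shape illustrating the three separate features above.
interface Post {
  body: string;
  adultContent: boolean;   // 1: hidden from minors; adults get a click-through unless they turn it off
  tags: string[];          // 2: folksonomy tags for organization and discoverability
  contentWarning?: string; // 3: if present, the whole post collapses behind this text
}

// Example: a tagged, tame post with no click-through for most readers.
const dinnerPost: Post = {
  body: "photos from tonight's dinner",
  adultContent: false,
  tags: ["food", "cooking"],
};
console.log(dinnerPost.tags.join(", ")); // "food, cooking"
```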

Early on, users began discussing content warnings.

Nex3 wrote shortly after launch:

I really hope at some point cohost adds the ability to opt in or out of specific tags being treated as content warnings. I don't usually cw stuff like alcohol or food because I don't want to make it a click-through for the majority of people who don't care, but I feel bad that makes following me worse or impossible for people who do.

I think the ideal situation would be for me to list those as normal tags, and for people to be able to upgrade them to content warnings or mute them entirely at their preference. Similarly, I'd like to be able to downgrade some content warnings so that they're open by default for me without disabling the feature entirely.

What is being described is essentially Tumblr Savior's blacklists and whitelists.

The desired culture around CWs was then also discussed. Numberonebug soon after posted:

As an ex anorexic I strongly caution against cw food

Food is neutral, it is neither good nor bad. Or rather it is all three. It is a beautiful and rich blend of culture, history, connection and flavor, and it is harmful in excess or deficit, and in the end it simply is fuel we use to survive.

Understanding this dialectic is a cornerstone of recover and of recovering a restorative peaceful relationship with food.

Building a space that validates the notion that food is bad, placing even it's mere mention behind the same screen one places racism or violence or things one could get fired for seeing at work, is extremely extremely ill-advised. It runs counter to every aspect of the recovery process

Ideally we would have an opt out tag system so people could make that decision themselves, and although I think it would be the wrong one it would still be their choice, instead of this opt-in culture we have here means that any change like this has an impact on the culture of the space. This is a moment that reminds me that the developers and initial users of this site are from a wildly different culture to my own lol

This is something that scared me while on mastodon, but I joined late so could do nothing to speak out (and I was still very anorexia at the time so, dangerously, appreciated the space for that at first). But cohost is new, and the pressure for this content warning is new as well (I didn't see it once before masto people started making accounts), so I feel a responsibility to say something

This isn't a "we need to exposure therapy people by force" this is a "creating a culture where people feel the need to hide food fosters a mindset that is extremely dangerous"

Also this doesn't apply to any other CW topic, maybe it does but I can't speak to those issues, this is a dynamic unique to eating disorders

Anyways, do whatever, you're not A Bad Person either way and if you have a friend who asked for this then yeah of course, but also please talk to that friend about it more than just "yeah sure of course".

This post was widely circulated with additions from other ED survivors concurring that hiding food behind CWs is not neutral and is more harmful than helpful. Seeing pictures of food may seem very disgusting when you're in the throes of anorexia, but hiding it behind a CW, like violence or gore, validates the idea that food belongs in the same category as something vile and shocking.

It was clear what staff needed to do, and they got to work implementing the ability to filter posts by tags in an "opt-out" manner. Once tag filtering was implemented, the consensus on Cohost was that only universally or severely shocking things should be behind CWs, and everything else could just have a regular tag that allows for filtering but does not hide the content for everyone else. Even porn was discouraged from being put behind a CW, because that created a second click-through after the 18+ click-through. Everyone on Cohost is at least 16 years old. Nipples are not psychologically harmful. We just don't want to see them while at work. Finally, the human body and sexuality were not being conflated with gore.

Cohost never had CW discourse after this. It was excellent. You could tell the system to hide posts entirely with certain tags or warnings. You could tell the system to not hide posts with certain warnings. It was very customizable. You curated your feed.

The Website League and the Future

Right now, a group of people are trying to build a new social media network built on fediverse software but inspired by Cohost ideals. You cannot design and program people out of bad behaviors, but nor can moral character and kavanah override the behaviors that software and systems encourage.

Day one on the Website League was immediately miserable. Nodes are decentralized and use either GoToSocial or Akkoma. GTS has no frontend and requires the use of software like Enafore.social, which is designed for Mastodon. It has the featureset and design sensibilities of Mastodon. It has the CW system of Mastodon.

On day one, we broke out into network-wide CW discourse. It was immediately exactly like Mastodon. We argued about CWing food and "nonconsensual exposure therapy." I hated it and decided I'm not touching the "weague" until we are no longer just using Mastodon-inspired fediverse software with minimal customization.

Labeling things with content ratings and warnings is not neutral. Statements are made about what is and is not "safe" or "normal." You cannot CW something without making a statement.

Content warnings and spoiler tags can be an accessibility tool, although most people will read the content anyway and be affected just the same.

We must move on from the 2014 culture war around trigger warnings. It does not make you a neonazi to take issue with labels and content rating systems which originated from conservatives concerned about controlling children.

The primary case for a click-through that applies to the vast majority of people is content that is shocking, or that could get you in trouble for looking at it in a certain context, like an office.

I believe the following is the best compromise on content warnings:

  1. Warnings applied to the original post which require all users to click through should only be used for content that would be shocking to see in public for the vast majority of people. If you saw it on the street or saw someone looking at it in the office, it would shock and surprise you. If only some people would be disturbed by seeing it, then it belongs in the tag system described in the next point.
  2. Posts should carry tags which appear at the bottom of the post and do not require a click-through on their own. Users can add these tags to a personal blacklist which either applies a click-through to matching posts or prevents those posts from appearing on their timeline at all (this is important for the people who really should not click through but will anyway). A rough sketch of this reader-side filtering appears after this list.
  3. If full-text filtering is available, then make sure the relevant keywords appear somewhere in the post, or at the bottom, and do not CW posts that are not shocking for the vast majority of people.
  4. If minors are present in a community, the system for hiding porn from them for legal reasons should be distinct from the system for filtering undesired content.
  5. You should never tell someone else to CW their post—instead politely request that they tag it at the bottom so you personally can filter it, as a courtesy to you individually, as someone who follows them. If the tag is not needed for your own accessibility and filtering needs, then do not advocate for an imagined other who might need it. That person might not be in the room. Allow them to self-advocate if they decide that they need it.
  6. Understand that it can sometimes be harmful and alienating to suggest that someone make their own life filterable for you. Sometimes it's better to quietly mute or block instead of asking them to do it for you.
  7. Hiding flashing images should be handled by software accessibility settings that prevent videos and GIFs from auto-playing. Giving epilepsy warnings is wise.
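
To illustrate points 1 and 2, here is a minimal sketch of the reader-side filtering I have in mind. It is my own illustration, not any existing network's implementation: the poster only supplies descriptive tags, and each reader's own rules decide whether a post is shown, collapsed behind a click-through, or kept off their timeline entirely.

```typescript
type FilterAction = "show" | "collapse" | "hide";

interface FilterRule {
  tag: string;
  action: "collapse" | "hide"; // collapse behind a click-through, or drop from the timeline entirely
}

// Decide what to do with one post, given its tags and one reader's personal rules.
// "hide" wins over "collapse" so that people who know they would click through
// anyway can keep those posts off their timeline altogether.
function applyFilters(postTags: string[], rules: FilterRule[]): FilterAction {
  const tags = new Set(postTags.map((t) => t.toLowerCase()));
  let result: FilterAction = "show";
  for (const rule of rules) {
    if (!tags.has(rule.tag.toLowerCase())) continue;
    if (rule.action === "hide") return "hide";
    result = "collapse";
  }
  return result;
}

// One reader collapses food posts; another has no rule for food and simply sees the post.
console.log(applyFilters(["food", "cooking"], [{ tag: "food", action: "collapse" }]));  // "collapse"
console.log(applyFilters(["food", "cooking"], [{ tag: "self-harm", action: "hide" }])); // "show"
```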

This combination seems to result in the best balance of community dynamics, enabling people to make their timeline more accessible for themselves without universalizing their needs to the entire network.

I hope that the Website League or any future social media network takes these design concerns to heart. I still do believe it never hurts to give others a heads up before getting into something particularly dark. There is no value in intentionally shocking or surprising others. But this is simply a courtesy, and not something we should impose on others as an expectation—except when it comes to epilepsy, where exposure could be physically harmful.


  1. Eventually I decided to interfile the "African American"/Urban Fiction books with the general collection. When patrons ask where it is, I tell them we've racially integrated the library. If they want recommendations for Urban Fiction titles I'm happy to help them find Lady J and Kwan etc. but they're going to be shelved properly as romance, mystery, thriller, etc. and not based on the race of the author. ↩︎
  2. Some of these are economics terms. ↩︎
  3. The slashes, back then, broke the URLs for searching and made a tag unsearchable. ↩︎