Trigger warning for abusive language, harassment, rape threats, violent and graphic imagery.
Today a petition was started to include a "report abuse" button on Twitter after a woman received a deluge of rape threats. This topic in general, how we report harassment and abuse online, is something that I and a number of friends have talked about at length. As someone who has personally received death and rape threats as well as hate speech, harassing comments and emails, and other personal attacks, it's something I think about a lot.
What we do know is that currently the model is broken. It's widely known that abusive, harassing, and threatening behavior happens online. There have been tech conference talks about this, TED talks, articles, blog posts, and so many publicly documented incidents that it's hard to ignore.
With a wider range of people getting on the internet every day, we know that online harassment is on the rise. People have left industries or the internet as a whole. More people have been forced into having "protected" or hidden accounts or using a pseudonym (which many social networks are punishing people for) to, in essence, hide from harassment. Women use genderless or historically male pseudonyms and avatars to move more safely in online spaces.
It's caused school-aged children as well as adults to commit suicide.
Reporting Abuse Now
Since this conversation was started based on harassment on Twitter, let's use Twitter as an example.
On Twitter, the only person who can report abuse is the person receiving it, which is highly problematic. If we take a recent incident as an example: this past March a woman received hundreds of death and rape threats as well as tweets with abusive and harassing language. To avoid seeing them, she shut off her phone and computer and stayed away from the site as much as possible.
In the meantime, the worst of these tweets was sent: she was doxxed, and a graphic image of her was posted, photoshopped to show her bound, gagged, and beheaded. Language threatening and inciting rape, physical and sexual abuse, and murder was included. A number of her friends saw this and took to back channels, trying to find someone with a connection to a person at Twitter who could get it removed as quickly as possible. We contacted friends at Twitter who empathized, but told us that, unfortunately, only the person receiving abuse could report it. This meant that she would have to be exposed to this vile stuff to have it removed.
While this is shocking to a lot of people, this isn't an uncommon situation. I know very few vocal women, people of color, LGBTQ people, or people with disabilities who haven't received rape and death threats, had harassing or violent language hurled at them, or worse. When it comes to reporting these things, it can take hours if not longer to get any real kind of response. Sometimes you receive no response, or you're offered a non-solution.
Twitter, for instance, tells people to just block the offending person or to make their account protected. The former is a non-solution: the person is still able to include your handle in tweets, and they can still incite abuse and violence from their followers. The latter puts the punishment on the victim, cutting them off from a larger social experience against their will purely to protect themselves from abuse.
Many, myself included, have stopped reporting altogether because so little, if anything, is done that it's more trouble and frustration than it's worth.
Reporting Abuse as a Silencing Tactic
Many social networks have some ability to report abuse. Generally this is a form buried somewhere. You provide "evidence" of abuse, a ticket is created, and someone has to read through these all day long. That person is then tasked with making a judgement call as to whether or not the incident is abuse.
The issue here is the same issue we have in society generally: the people making these rules and judgement calls belong to the same class of people who do the harassing. Since a person within a privileged class doesn't necessarily recognize hate speech as such, it can easily be dismissed.
If Twitter installs a report abuse button, I'd be reported for telling ppl to fuck off for being homophobic, yet homophobes would be fine.
— Alan Hooker (@awhooker) July 27, 2013
Additionally, when a person becomes a very visible target for this kind of behavior and receives an onslaught of attention, what is to keep them from being falsely reported and having their account suspended? This happened under the current system just this past week to a woman speaking out against someone making transphobic/cissexist remarks.
Who determines what is abuse? What about free speech?
This is what most people get worried about. Will something I say be considered abusive? I don't want to lose access to my account!
Violent and abusive behavior is the most obviously wrong. Most people will readily recognize language threatening someone with murder as abuse.
My Indiegogo campaign has been live *2* days. I’ve received 1 death threat, 3 rape threats, various insults about my appearance and worth.
— ashe dryden (@ashedryden) July 25, 2013
But what about hate speech? What about racial slurs or derogatory terms towards groups of people? What about threats to someone's livelihood or professional and social connections? Is that abuse? Do we take the word of the people reporting it, or do we rely on what we personally, or our policies, have decided counts as abuse?
Twitter says "We have found the reported account is currently not in violation of the Twitter Rules at this time" pic.twitter.com/HnJ6gueb3u
— Feminist Frequency (@femfreq) July 28, 2013
On top of that, you can't talk about these issues online without someone (read: tens of people) bringing up the free speech issue. The most common responses include "are you advocating for a reduction in freedom of speech?" and "I don't agree with [rape/death threats], but I will defend their right to say those things!" In the United States, if you walk up to someone and threaten them with bodily harm, you are not protected under the first amendment. What you just did is against the law. Why do we treat this differently on the internet?
Why not go to the police?
Many law enforcement agencies either do not take these types of threats seriously or simply don't have the resources to act upon them. Additionally, many of the people receiving this treatment don't see justice even when these incidents happen in person, where threats are assumed to be more likely or imminent, because, again, the class of people doing the abusing is the same class that is supposed to be protecting people from abuse.
Additionally, what happens when the abuser is outside of your police department's jurisdiction? Outside the state? Outside the country? What happens when you can't tell where threats are coming from?
So What is the Solution?
Truthfully, I don't know. Currently the power in the system is so imbalanced that I can honestly say I haven't seen any suggestions brought forward that would even things back out. I think that a stricter set of guidelines for people reviewing abuse reports is the bare minimum that can be done. And better training on what constitutes abuse, especially when it comes to silencing tactics and hate speech, is imperative.
Related: The Risk in Speaking Up
I am not compensated for my writing
Consider supporting me via gittip. Your support helps me continue to write, speak at conferences on these subjects, and create more projects which further diversity in tech.