We've All Got a List

By Ashe Dryden
November 30, 2013
Tagged: diversity, tech, safety

On a terrifyingly regular basis, incidents of bad behavior surface in the open source community. These range from dismissiveness toward someone in a marginalized group, to outright hostility at conferences, to harassment and intimidation in the workplace, to physical violence and sexual assault in our community spaces. The scary thing is that we only hear about these incidents when they happen in public or when the affected person comes forward - often not knowing what will happen when they do.

If it does become public, I hear quite a few people say things like "I wish there were a list so I knew who to avoid/not hire/not allow at my events/etc".

What would a list accomplish?

The idea is that a public blacklist would serve multiple purposes at once. Below are some of the reasons people have told me they want something like this.

Note that these do not necessarily fall in line with my beliefs, but I am putting them all in one place because I hear them so frequently.

Consequences for Actions

Like many things on the internet, there are very few consequences for bad or threatening behavior. Part of this is due to the decentralization the internet brings to a community - one feels more comfortable saying something online than to someone's face, knowing that they can't immediately react and that there will be little to no punishment for the behavior.

Another piece of the puzzle is the army of support someone who does this can gain from finding their way into the right parts of the internet - Hacker News, Reddit, 4chan, and other places where the idea of injustice has become perverted and festers in a village of people with access to all the right incendiary devices. In this way, the internet has actually been quite empowering for abusers.

And hey, before you think to yourself "we should de-anonymize the internet!", lemme tell you: it doesn't matter. I've received rape and death threats from people with their real names, their real faces, and even their work email addresses attached. The issue is less about anonymization and more about there being no consequences for online actions, especially in a community as large and globally widespread as ours.

A Warning to Other Possible Offenders

Having a public list would also fire a warning shot: "People who do these sorts of things are not welcome in our community, and we will let people know." The logic goes that making a public example of someone means others are less likely to do the same things, again creating a sort of consequence for that behavior.

If we compare this to another set of rules we expect people to follow - laws - research shows this isn't the case. Harsher penalties don't deter crimes any more than the law existing does. Much of this comes from the fact that so many crimes are never punished.

Research to date generally indicates that increases in the certainty of punishment, as opposed to the severity of punishment, are more likely to produce deterrent benefits. [1]

A rational analysis commonly puts the perceived benefits of a crime greater than its perceived costs, due to a variety of criminal justice realities such as low punishment rates. [2]

Sound familiar? No group of volunteers could possibly capture information on every offense, especially considering that many are never publicly reported, for reasons I've already written about.

In addition, the abused face a retaliatory effect: being reported as abusers themselves, which isn't unique to the internet. A recent example of this is Twitter's "Report Abuse" button, which allowed people to report specific tweets as abusive. One of its first victims was a Twitter account that retweeted transphobic tweets, both to raise awareness of how casual people are with their transphobic hate and to identify dangerous people to block or avoid. Those tweeting transphobic things turned around and reported the account, getting it suspended.

Meanwhile, reports to Twitter about abuse by its users - including rape threats, death threats, doxxing, and other harassing behavior - are dismissed with alarming regularity, presumably either by an automated system that isn't programmed well or by a set of humans who aren't trained well enough in the intricacies of harassment. This, again, leaves all of the power in the hands of the abusers.

Keep Our Community Safe

Another reason to create such a list would be to protect the community. Running a conference or event, hiring, or adding someone new to your project invites a lot of risk. Bringing in the wrong person can damage not only what you're trying to do, but the reputation of the community it sits in. By removing past violators, the hope is that you are keeping them from repeating the same behavior and protecting the community.

The argument against this is that it doesn't allow people to redeem themselves (for the things that are redeemable, which I will not enumerate here). Some actions require only a sincere apology and not repeating the same behavior - a temporary tarnish on the offender's reputation. Others, due to the nature of the incident, require that we weigh the safety of the community over the offender's desire for redemption.

On a personal note, I block a lot of abusive and harassing people on Twitter. The majority of the time that's the last I hear from them, but a few go out of their way to continue harassing me on Twitter or in other mediums. In a couple of cases, people have come around, realized why what they were doing was wrong, and sent me apologetic emails. When I've felt they were sincere, I've unblocked them.

Provide Evidence of Occurrence

The Geek Feminism Timeline of Incidents exists for exactly this reason - in fact, you can read Why We Document on the Geek Feminism blog for a well-reasoned explanation.

It's important to note that the existence of this list is a huge source of anger for a lot of people. When people ask for evidence that these incidents occur and the Timeline is provided, many react very poorly. I've heard it referred to as a "witch hunt" and a "sex offenders registry," and heard arguments that it violates the "innocent until proven guilty" principle that many Americans hold. What they tend not to want to listen to is the frequency with which nothing is done about these incidents when they are reported to community members, conference organizers, business owners, and even law enforcement. The reality is that not only are the majority of online harassment reports ignored (or even mocked), but so are in-person reports to people and organizations who have the power to create consequences for the offending individual.

We Have Lists: They're Just Not Public

Many that have faced this kind of behavior have a list. It's not necessarily written down and cataloged, but it exists.

When a friend is thinking about working for a company that allows homophobic slurs in the office, or going to a conference where they've done nothing after a report of abuse, or getting involved in a project where the lead is verbally abusive, we tell them.

We do what we can to protect each other through whispers and emails and private messages because the risk - legal, professional, financial, personal - is too great and the reward so small that doing so publicly isn't worth it.

The unfortunate side effect of this taking place in private interactions is that only those who are connected and trusted get this information. This leaves a very large group of people vulnerable, because the network isn't large enough yet. We end up excluding the people who need this information the most - the people with less power and reach, the ones most vulnerable to predatory behavior without any recourse of their own.

How Do We Fix This?

We allow this kind of behavior by not calling people out, by not having any real or lasting consequences for those actions, and by not taking reports of abuse, harassment, and assault seriously. We don't believe the people who do report, and in doing so we further victimize them; on top of that, we outright abuse and harass the people who report. If anything, we've taught victims of this behavior to do nothing and say nothing, lest they incur the wrath of the internet, forever associating their name - and not their abuser's - with something shitty or horrific that happened to them.

We don't take responsibility for our own actions, letting ourselves off the hook by comparing our behavior to someone who is degrees worse. We dismiss what seems to be a small push toward inclusivity (or even the bare minimum of civility) because there are "bigger battles to be fought".

So how does this get fixed? Truthfully, I don't know. The problem is deeply systemic in our communities. Fixing it is going to require buy-in from a vocal and powerful majority of people. It's going to mean people losing opportunities and standing in our communities because of the things they do.

We cannot reprimand someone for their behavior while still allowing them to enjoy the privileges of their position without sending the message that we are somehow condoning their abusive actions.


[1] Wright, Valerie (2010, November). Deterrence in Criminal Justice: Evaluating Certainty vs. Severity of Punishment.

[2] Robinson, Paul H. and Darley, John M. (2004). Does Criminal Law Deter?

I am not compensated for my writing.

Consider supporting me via Gittip. Your support helps me continue to write, speak at conferences on these subjects, and create more projects that further diversity in tech.