My wife Kat and I spent almost every moment of Saturday, June 7, 2014, lying in bed with our middle child Rebecca, who turned six years old at 7:24 that morning and died shortly before 7:00 that evening.
There is no way I can communicate what that is like to someone who hasn’t been through it. But I had tried, over and over, through blog posts and tweets. I had used the tools at my disposal to try to help people who knew me, or even people who merely knew of me, to understand even a little bit of what we were going through. From the day after Rebecca’s cancer was discovered until the days after it finally crushed her brain past the point of survival, I had shared what parts of her story and our story I could bear to write.
There were many reasons I did this—some selfish, some selfless, some unexamined. I wrote about why I wrote, at one point.1 Through it all, I stayed open to the world. I shared what I was learning about coping with a child’s illness, the travails of uncertainty and bureaucratic error, the surges and crushing of hope. I laid bare some of the most personal moments of my life, even as I kept silent about many others.
When I put up the memorial post for Rebecca,2 I included an invitation that someone had suggested: that anyone who felt comfortable wearing purple to Rebecca’s funeral should do so. This is perhaps a little unusual, since the dress custom for funerals tends to be “black and formal.” But honestly, a room of people all wearing black would have made Rebecca roll her eyes in boredom.
So I put it in there, as a message of sorts to those who were coming to the ceremony, that this was not just a moment of mourning but also a remembrance and a tribute to Rebecca. Online, a number of people announced their intention to change their Twitter avatars to purple, in sympathy and solidarity. The hashtag
#663399becca was coined, combining Rebecca’s name with the CSS color code for a very nice shade of purple. Matt Robin proposed that the web community get it trending on the day of the funeral.3
Jeffrey Zeldman, my friend and business partner at An Event Apart, wrote a post supporting and promoting the idea.4 Jeffrey has a significant online presence, and that post, along with his tweeting about it, really helped the idea take off.
The day of the funeral, my Twitter feed was a wall of purple avatars, and I believe the hashtag did indeed trend, at least within the US. At a certain point, I had to completely turn off Twitter notifications, because my phone was going crazy. Not that I was spending a lot of time looking at my phone that day, but I wanted to preserve the battery for necessities like phone calls.
Somewhere in the middle of all that, a suggestion was made to honor Rebecca by adding to CSS a named color equivalent to
#663399.5 The idea rapidly gained widespread support, not just from the community but also the browser vendors, and by June 21 the Working Group had officially accepted the proposal.6
And so, in a corner of the language I dedicated so much of my life to understanding and explaining, there is a memorial to my little girl: the named color “rebeccapurple.”
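For readers who write CSS, the two declarations below render identically; `rebeccapurple` was added as a named color in the CSS Color Module Level 4, defined as exactly `#663399`. (The class names here are just illustrative.)

```css
/* These two rules produce the same color in any browser
   supporting CSS Color Level 4 — which is all modern browsers. */
.memorial {
  color: rebeccapurple; /* the named color, added in 2014 */
}
.memorial-hex {
  color: #663399;       /* the hex value the name memorializes */
}
```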
This was quite possibly the last thing I expected. When the proposal arose, I could barely make myself think, let alone objectively evaluate its merits, which is why I stated that I was deeply honored by the proposal and would accept whatever decision the Working Group came to, pro or con.7 (I meant it, too. If the WG had declined to add “rebeccapurple” to CSS, I would have blogged in support of their decision.) I couldn’t really think about it in any coherent way, then or in the immediate aftermath. Besides adding the color to Rebecca’s memorial page on my website, I didn’t think about it again for a few months.
What pushed me to think about it was Gamergate.
Whatever you might think of Gamergate—and there’s a lot to think about it—I believe I can say at least this much without stepping on any land mines: it was when actor Adam Baldwin coined the hashtag
#gamergate, and greatly boosted the signal on two videos made by people critical of Zoe Quinn, that the whole thing blew up into what we now think of as Gamergate, no hashtag.
In the process, there were threats—vicious, horrible threats—against not just Zoe Quinn, but also Brianna Wu and Anita Sarkeesian. All three women had experienced such threats in the past, but not with the same frequency or intensity that was seen during Gamergate.
I wasn’t even that deep into the situation—this was August and September 2014, when I was still pretty grief-stricken, and at any rate I’m no more than an occasional gamer—and I was sickened and horrified by so many of the things I saw. I cannot for an instant imagine what it must have been like to be at the focus of it all, or even near to the focus.
As my brain often does when confronted with horror, I retreated to analysis, hoping to draw something useful and constructive out of what seemed like a pointlessly destructive situation. It was an instinct that had helped me through everything that happened with Rebecca, and it helped me cope with the nauseating details I saw before me.
From that analytic distance, I realized I was seeing something structurally similar to the
#663399becca campaign, but with such a different outcome.
In both cases, a situation that had already existed reached a sort of tipping point with the coining of a hashtag and a signal boost from a prominent personality. In response, groups of people organized to act.
With #663399becca, a collective tribute gave way to a formal, digital memorial. With
#gamergate, a collective outrage gave way to terrible, real-world consequences.
We can look at these examples and say that the difference in outcome is the difference in input—that starting with positive intent leads to positive outcomes, and negative intent leads to negative outcomes. That’s certainly true. But it’s not the whole truth.
Some say that the medium in which these things happened is like a road, neutral to its uses—but roads are not neutral spaces, and neither is the internet.
While a ribbon of asphalt is neutral to its uses, we are not neutral to the uses of that asphalt. We decide where the road should go, which includes deciding who will have access to that road and who will not. We determine speed limits, caution areas, rules of the road. We don’t let people drive on whatever side of the road they feel like, or endanger pedestrians and other drivers. We establish laws and enforcement mechanisms. We even require licensing of the people who drive, to try to make sure that they understand at least a bare minimum of the rules before we allow them to use the road.
This is not an argument that we should license internet use. It’s pointing out that roads are not nearly as neutral as we all too often pretend. We set much lower speed limits around schools; we paint lines to indicate where passing is permitted and where it is not; we hire police officers and judges to penalize those who disregard the rules. Nothing about a road is neutral except for the raw material itself.
For that matter, almost nothing we do online these days is neutral. In the early days of the web, we were excited just to be able to look up some information and follow links from page to page—web surfing was a real thing back then, in the pre-search-engine era. Then we were thrilled to have great maps and the ability to have goods shipped to our houses. And back then, we were still trying to figure out what to share, and how and when to share it.
Now we share ourselves freely, continuously, one tidbit at a time but at a rapid pace. Social networks have emerged, and we share our thoughts and feelings with the world almost effortlessly, all the old hurdles of hosting and software installation outsourced to companies dedicated to making it simple. The web lets us do this without regard to time and distance, allows us to connect with people we otherwise would never have known existed. This should come as no surprise: humans have always longed to communicate, and most of all to be heard.
That’s how blogging came about. People started putting their thoughts online, writing their own personal serial magazines, sharing themselves with whoever would listen. Some lost their jobs over what they shared, while others landed jobs. In every case, the motivation was the same: to be heard.
Now, anyone can follow that motivation, and millions upon millions do. Tweets, Facebook statuses, Tumblr posts, Instagram shares, Medium articles; these all exist to let us easily share what we think and feel and see with the world.
That’s one of the biggest differences between the early web and what we have now: that it’s easy to share ourselves. Entire business sectors have been built and vast fortunes made on making that impulse easier to satisfy.
The challenge now is in how those fragments of our lives are treated. This is as much a social question as a technological problem, but the two are not separable. What Facebook and Twitter and Instagram and every other at-scale social network does now—everything they make possible or impossible, everything they make easier or harder—will shape what we think of as normal in a decade or two. It won’t utterly control the way we use the web, but it will undoubtedly influence our online behavior at a deep level.
As an example, in 2030, will we think it’s acceptable to mute or block people who try to communicate with us? That seems like a ridiculous question to ask—of course it’s acceptable!—and yet, if networks make it harder to do so, or even if they make it easier to not do so, then the answer to that question could well change. Our grandchildren may think of the act of blocking as quaint and archaic, or even outright wrong.
Or, they might well think of our current situation as unthinkably permissive and damaging. If social networks make it easier to block harmful feedback and make attacks more difficult in the first place, then the answer to the question may change to the point that nobody thinks to ask the question anymore. The ability to mute and block and filter could become second nature.
If that seems too overblown, think about the differences between what constituted acceptable behavior when you were a child and what’s acceptable now. Not in the sense that “these kids today are disrespectful little punks, unlike when I was a kid” (the inaccurate complaint of every generation), but in a social sense. Right now, kids think nothing of getting together physically to interact digitally. Looking at a mobile device while in conversation is something adults frown upon but kids don’t think of as abnormal, if they think of it at all. This didn’t happen because kids are less attentive to their friends. They just grew up in a world where that was possible, and they found it desirable. As they continue to grow up, everything they find on social networks, both content and capability, will seem just as normal. What they think of as risky or strange or acceptable or desirable will be profoundly shaped by their experiences.
Kathy Sierra, who has been targeted for harassment more than once, relates in her book Badass that the horse trainer’s mantra is: “Make the right thing easy and the wrong thing difficult.”8 Now consider the converse: what is easy comes to be accepted as the right thing, and what is difficult comes to be regarded as the wrong thing. That’s why I say what we do now isn’t neutral. Everything we do, from what we share to how we interact with our networks to how those networks are structured, is influencing the near future of our societies. Not just the hyper-digital developed world’s societies, but all societies everywhere, because what happens online will shape what happens offline.
And speaking of concepts that may make no sense in a couple of decades, consider the idea that there’s a distinction between online and offline. We often try to demarcate them, talking about the virtual and real worlds as if the internet is a different planet that we sometimes visit and then return home. That’s never been true, but the mobile revolution has made the fiction obvious. The internet is no more a separate, “virtual” world than are books or songs. We talk to each other directly, and share ourselves, whatever the medium.
We wouldn’t say that by making a phone call we enter a different world; when we go online, we aren’t going away either. Wherever we go, we take ourselves with us, and seek to be heard.
And so the question is, will our future be more like
#663399becca or more like
#gamergate? Will we see communities work together, or camps tear each other apart? Both will happen, of course, but which will become the norm? Will our societies and we ourselves become more constructive or more destructive?
Any one of us can and absolutely should make such choices for ourselves, but there’s more at work here than individual choice. How we build our systems of interaction will matter a great deal to the future. If we build them in a way that encourages positive collaboration and discourages destructive attacks, that will influence anyone who uses—and, more importantly, grows up with—those systems.
And so those who build the systems of interaction have a unique responsibility. As Derek Powazek has said, “What you tolerate is what you are.”9 What networks allow, and more importantly what they encourage, defines them. A network where it’s easy to attack and difficult to defend makes a very different value statement than one where it’s difficult to attack and easy to defend.
Either way, the nature of a system says something very clear about the people who create that system and what they value; just as much as how we use those systems, and what we tolerate in the behavior of those around us, says something very clear about us and what we value. It’s a statement we make to everyone around us as well as to everyone yet to come. This is our legacy, our message to the future about who we really are and what we truly value. We are what we build. It’s long past time we started building wisely.
Kathy Sierra, Badass: Making Users Awesome (O’Reilly Media, 2015). ↩