Some words, like n****r, ch*nk, and c*nt, are so forbidden that we won't even spell them out here.
Are trolling, bullying, and flame wars an inevitable result of online communication? Do the anonymity and invisibility of cyberspace lead to toxic speech and behavior? How can we create less toxic environments online?
Without a doubt, the internet has revolutionized communications. It is an incredibly powerful tool that enables us to exchange ideas and information with people from all over the globe in an instant.
But it also seems to bring out the worst of people’s anti-social tendencies. Trolls were once fictional characters, nasty creatures living under fairytale bridges who only came out from their hiding places to harass those trying to pass over. Now trolls are real people in cyberspace who say things—often outrageous—just to get a reaction out of people. And they're all over the internet, not just in dark corners like 4chan. Look at the comments on mainstream sites, like Twitter or YouTube.
I mean, no! Don't look at the comments!! Never look at the comments!!!
It's possible to have fantastic discussions online too, where everyone politely listens and tries to understand one another, even if they don't see eye to eye on some issue. We can share norms of civility when we don't share the same beliefs. We can explore our disagreement with one another without making ad hominem attacks. We can try in good faith to reason with one another in the face of disagreement. And we can know when to let a disagreement go. This is all possible—but by no means guaranteed—in the cyberworld. It's much more likely in smaller networks of people who also interact with each other in the real world, not just online.
But when exchanges are between anonymous strangers who will never meet or know one another in real life, that’s when the discussion often devolves into trolling, bullying, and name-calling. Even worse, trolls and bullies can be quickly organized into mobs who target and harass people (often women) whose views they don’t like. Gamergate is a classic example of that.
Mobs are nothing new in human history, of course. It should be no surprise that in the age of the internet we get online mobs, a virtual version of a real-life phenomenon. Instead of pitchforks and torches, their weapons are computers and smartphones. Like the mobs of yore, cyber mobs seek to threaten and intimidate.
An important question to ask about all of this anti-social behavior we see online is whether the internet is simply providing a new way for bullies and trolls to act out, or whether the technology itself has created an entirely new beast. Has the internet made us into worse people?
Psychologists who study this kind of question have noted that people do and say things in cyberspace that they wouldn’t normally do and say in face-to-face interactions. This kind of behavior is so pervasive, they’ve given it a name—the online disinhibition effect. While we’ve always had mobs, we've never had such an easy and efficient way to organize them—free from geographical restrictions, insulated from the real damage they inflict on their targets, and escaping accountability or punishment. And this changes the human landscape significantly.
It's not just that the internet makes it easier for bullies to bully. That is certainly true. You could also say that the internet makes it easier for generous people to donate to charity. The internet is indeed a very versatile tool that can be used for good or bad, depending on whose hands it’s in. But that does not make it a neutral tool.
Sure, we could say, blame the bullies, not the tools they use for their bad behavior. But that’s a bit like saying guns don’t kill people, people do. Yes, people do kill people, but having guns easily available, especially assault-style weapons, makes it so much easier for people to hurt or kill others that we now have an epidemic of gun violence in this country. It’s disingenuous to say that the tools of destruction and what they make possible don’t play a role.
Decades of research in social psychology have shown us that context and situation greatly influence how people tend to behave. Put people in certain social situations and you’ll see the same results again and again. Take something like the bystander effect. That’s when people are less likely to help a victim if there are other people around. Similarly, people tend to behave in anti-social ways when they're interacting with anonymous strangers they never see face to face. Being able to bully others with practically no repercussions in real life brings out the bully in people who ordinarily would not behave that way. There can be a major mismatch between their online behavior and their offline behavior. That’s the online disinhibition effect.
Blaming “the internet” for people’s bad behavior, though, is not quite right. That’s too vague and nebulous a target. It’s more accurate to blame particular platforms on the internet that allow bullying, because there are sites, like The New York Times, where bullies and trolls are simply not welcome. They do a lot of work moderating comments to keep the discussion civil and productive. It’s expensive and time-consuming, of course, which is part of the reason why more sites and platforms don’t do it. But it shows that it's at least possible to create the kind of environment online where anonymous strangers can disagree civilly, if there is a will to do so.
But don't get your hopes up yet! Bad behavior online is often good for profits, and we know that's the bottom line for most of these sites. Take a look at this article from the Pew Research Center that summarizes a large-scale canvass of more than a thousand technology experts, scholars, corporate practitioners, and government leaders on whether they think that “public discourse online will become more or less shaped by bad actors, harassment, trolls, etc.” Overall, it presents a rather pessimistic forecast for online discourse.
So, how do we as consumers get platforms that do very little to curb bad behavior to change their guidelines, or, as is often the case, enforce their guidelines? Do we just stop using them or can we create change in some other way? And how should online spaces be redesigned so that there are real consequences for bad actors? How do we eliminate anti-social behavior to the extent that we can?
Tune in to this week's show with guest Michael Lynch, author of The Internet of Us: Knowing More and Understanding Less in the Age of Big Data, and share your thoughts with us here.