If you’ve been following the news, you’re aware that the Delta variant of the Coronavirus is all around us. While it only poses a very minor risk to the vaccinated, it is wreaking havoc on the unvaccinated.
At the time of writing, we’re back up to around 50,000 new cases a day in the United States, the vast majority of them among the unvaccinated (around 99% of those hospitalized with Delta are unvaccinated). So with vaccines easily available (at least in the US), why doesn’t the remainder of the population just get the jab?
There are many and various answers to that question. But (obviously) one major causal factor is misinformation, much of which falsely casts vaccines in a negative light. And it is fair to say—despite Facebook’s positive efforts to spread accurate information—that it is still one of the major conduits of misinformation as well. Perhaps the major conduit. It turns out that just twelve people, including anti-vaxxer Robert F. Kennedy Jr., are responsible for generating 65% of the vaccine misinformation that gets shared on Facebook. Facebook, effectively, is their megaphone.
This brings us to our main question:
Is Facebook morally responsible for the dissemination of misinformation on its platform?
Opinion on this is divided. The Biden administration has been laying blame at the feet of social media, and of Facebook in particular. Facebook has argued that its positive efforts at spreading correct information let it off the hook.
What might a philosophical view of moral responsibility say on the matter? Here’s why this is a hard question. Facebook, in strange ways, combines personal communication among friends with widespread broadcasting of (mis)information: some posts end up reaching millions. That makes it very unlike media that have come before, for which we have adequate moral frameworks. To see the difficulty, consider two very different cases.
Case 1: The Paper Company
Suppose you and I own a paper company. We produce and sell paper, on which people can write whatever they like: letters, drawings, poems, histories, or . . . misinformation about vaccines, which may then be widely disseminated through mail or whatever. Let’s say lots of misinformation is being printed and shared on paper we produced.
If we pose our question in relation to The Paper Company, it is intuitive that the answer is no, The Paper Company is not morally responsible. The Paper Company’s job is to provide a medium for communication—paper—not to police that communication. The responsibility for spreading misinformation in this case falls solely on the individuals who are producing and disseminating it.
Case 2: The Traditional Media Broadcaster
Now suppose you and I own a company that broadcasts traditional media: we do radio and television news and talk shows. Let’s say also that lots of our guests and on-air personalities are propagating misinformation about vaccines. Consequently, millions of people are seeing, hearing, and in many cases believing it.
Is The Traditional Media Broadcaster morally responsible for the spread of misinformation? In this case, yes, absolutely. Traditional broadcasters are in the business of curating the content that gets shared on their platforms. Nor would it be hard to weed out misinformation if we tried, since we are choosing and paying the people who go on our stations.
So what about Facebook?
These two cases help us see why the issue of Facebook’s responsibility is a puzzle. It’s not intuitively clear whether Facebook is more like The Paper Company or The Traditional Media Broadcaster. It is, in different ways, like both.
Facebook is like The Paper Company in two significant ways: (1) Basically anyone can put content on it, and (2) it is to a large extent used for individual personal communication. But it is like The Traditional Media Broadcaster in two other significant ways: (a) Any information put on it easily spreads to millions of people, and (b) there is some chance of curating the information that goes up (e.g., through algorithms that promote or demote content).
This amalgam situation might seem to make our problem insoluble. Fortunately, however, there is a philosophical framework that can add clarity.
John Martin Fischer and Mark Ravizza’s now-classic Responsibility and Control: A Theory of Moral Responsibility proposes that what they call “guidance control” is what’s required for a person to have moral responsibility for happenings of various sorts (actions, omissions, consequences, etc.). Guidance control comes down to two conditions (here I paraphrase):
First, the mechanism that produces the behavior or outcome in question must be the agent’s own.
Second, that mechanism must respond to reasons.
So, for example, if I steal something, I am morally responsible because the psychological mechanisms that chose that path are mine and are responsive to reasons (had I had strong enough reason, I would have chosen something else).
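To make the shape of this view explicit, here is one schematic gloss (my own shorthand, not Fischer and Ravizza’s notation), where \(M_x\) is the mechanism that actually issues in the action or outcome \(x\):

\[
\text{Responsible}(A, x) \iff \text{Own}(A, M_x) \;\wedge\; \text{ReasonsResponsive}(M_x)
\]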
How can we extend this analysis to the dissemination of vaccine misinformation on Facebook’s platform?
A lot of work would be needed to do the theoretical extension from individuals to corporate bodies, like Facebook. But I think Fischer and Ravizza’s two conditions do illuminate why Facebook is to some degree morally responsible for the dissemination of misinformation—much more so than The Paper Company would be.
First, the mechanisms of dissemination (the whole platform, basically) do indeed belong to Facebook, so Facebook satisfies the first condition in a way that a paper company does not and cannot (once sold, the paper is out of its hands). But note that, since it is designed as a platform, there are many other mechanisms (housed in the minds of users) that also contribute to dissemination, and these Facebook does not own. This makes Facebook a joint owner of the collective mechanisms of dissemination, rather than their sole proprietor.
Second, Facebook’s design of its platform is indeed responsive to reasons. Facebook can change its algorithms when it suits its purposes, and it regularly does. And those algorithms were designed for reasons in the first place: the goal was to maximize the amount of time users spend on the platform.
So Facebook indeed has some guidance control over the spread of misinformation on its platform, unlike The Paper Company, and to that extent it has moral responsibility as well.
The only rejoinder I see here for Facebook is that the steps it would have to take to prevent the spread of misinformation would be too draconian and contrary to the spirit of the platform, which is about connecting people and letting them talk to each other. But I’m not convinced. If Facebook’s mission is indeed to connect people, there are two things it could do better that wouldn’t compromise that mission.
First, it could stop being a megaphone for misinformation super-spreaders entirely. This step could take various forms, but one obvious move would be to hold any page with more than, say, 2,000 followers to much higher standards; doing this wouldn’t compromise what individuals can say to other individuals—it would just force liars with large followings to go elsewhere.

Second, it could use its algorithms to comprehensively demote misinformation: such improved algorithms would mean that if I posted, say, the Gates microchip conspiracy theory on my wall, it would appear in no one’s feed—only people who deliberately came to my wall would see it.
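For concreteness, here is a minimal sketch of what those two rules might look like as ranking logic. Everything here (the Post and Page types, the flagged_as_misinfo and passed_fact_check labels, the 2,000-follower threshold) is a hypothetical illustration of my proposal, not a description of Facebook’s actual systems:

```python
from dataclasses import dataclass

@dataclass
class Post:
    flagged_as_misinfo: bool       # hypothetical label from a fact-checking pipeline
    passed_fact_check: bool = False

@dataclass
class Page:
    follower_count: int

def shows_in_feeds(post: Post) -> bool:
    """Rule 2: flagged misinformation gets no feed distribution at all.

    It stays visible only to people who deliberately visit the poster's wall.
    """
    return not post.flagged_as_misinfo

def may_broadcast(page: Page, post: Post) -> bool:
    """Rule 1: the 'megaphone' rule.

    Pages above a follower threshold (2,000 here) must clear a higher bar
    (a fact-check, in this sketch) before a post is pushed to their followers.
    """
    if page.follower_count > 2000:
        return post.passed_fact_check
    return True

# A conspiracy post from a 50,000-follower page is neither broadcast nor fed out.
post = Post(flagged_as_misinfo=True)
page = Page(follower_count=50_000)
print(shows_in_feeds(post), may_broadcast(page, post))  # False False
```

The point of the sketch is only that neither rule touches what one individual can say to another; both bear solely on how far the platform itself amplifies a post.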
Both steps, I think, are feasible and would have an impact, especially as Delta rages around the globe. Since Facebook is, if my analysis is correct, at least partly morally responsible for the spread of misinformation that is stopping people from being vaccinated, it is also partly morally to blame for the preventable deaths that have followed.
Image from Wikimedia Commons