Big tech is known for "disrupting" established industries and changing fundamental aspects of our lives, from shopping and delivery to communication and transit.
Are tech companies really “making the world a better place”? Isn’t “disruption” just code for circumventing legal regulations and ignoring labor laws? Does Silicon Valley really believe its own hype? This week we’re thinking about “The Rhetoric of Big Tech.”
Silicon Valley may be full of so-called “techies,” but they also have a lot of storytellers in their ranks. Not that I’ve got anything against storytelling exactly, but there’s a big difference between, say, a good novel and the kind of narratives that are frequently pushed by Silicon Valley. We know that novels are fiction, not meant to be taken as literal truth. But that’s not what we’re supposed to think when we hear Big Tech talk about “making the world a better place,” “doing no evil,” and “bringing the world together.” We’re supposed to believe what they tell us is fact, not fiction.
There are some ways in which tech has made the world a better place, of course. Think how much information we have at our fingertips because of the internet, how quickly we can communicate with friends and family despite vast geographic distances.
When Covid hit last year, teachers used information technology to stay connected with their students and were able to teach classes from the safety of their own homes, something that would not have been possible just two decades ago. Sheltering in place and socially distancing for months naturally lead to feelings of isolation, but imagine how much worse it would be if we didn’t have technologies like smartphones, tablets, and computers at our disposal during Covid.
This technology also allows us to discover new communities beyond our local communities, like-minded people who share our interests, be that funny cat videos, cooking, yoga, or some political or social cause. Platforms like Facebook and Twitter have been especially important for people from marginalized groups who may feel especially alone and isolated where they live, but can now connect with others like them on social media, build solidarity, and organize politically.
But this very same technology has also been used by political extremists to share their various toxic ideologies, harass and bully others, and organize violent actions, including an attempted violent overthrow of a democratic election!
Internet use also leads to the development of many unhealthy habits. Children and adults alike are addicted to their screens, which has all kinds of negative downstream effects, such as neurological complications, psychological disturbances, and social problems. And these have ripple effects on the broader culture and political climate.
Are internet companies to blame for all the ills of modern life? No, but it’s important to take a critical look at the role these companies play in making certain events possible. Take the recent events at the Capitol. It's reasonable to ask whether we should hold specific social media companies responsible for allowing violent extremists to recruit and organize on their platforms.
One view of the internet is that it is a neutral tool that simply facilitates communication across geographic boundaries. It is like print, telephone, radio, TV, or any other communication technology—it can be used for good or evil. We don’t blame the manufacturer when someone gets viciously bludgeoned to death with a hammer. So why should we blame internet companies for allowing extremists to organize a putsch on their platforms that directly led to the death of five people (plus two police officers who took their own lives after the attack)?
The technology itself might be neutral, but that’s not to say that anything goes for the companies that bring us this technology. For example, if our hammer manufacturer used images celebrating violent attacks with their hammers, created politically divisive messaging to market their product, and turned a blind eye when people known to have murderous intentions bought a bunch of their hammers, we might start to think differently about their moral responsibility when someone gets bludgeoned to death with a hammer.
Similarly for tech companies—we need to know what they knew, what they said, and what they did in order to determine their culpability for any particular event or phenomenon.
Take social media platforms, for example, where lots of disinformation is spread, and trolling and bullying occur frequently. It is no accident that this happens. Fake news and online harassment are big drivers of profit for these companies. They just prefer to call it “engagement.” When your entire business model is based on growing “engagement” and you know fake news and harassment drive engagement, it’s a bit harder to take seriously disavowals of responsibility for the consequences.
I was horrified to learn recently that Facebook not merely turned a blind eye to extremist groups spreading disinformation and hateful ideologies, and organizing on its platform, but actively participated in the radicalization process by showing users with certain political affiliations fear- and aggression-driven advertising warning them to stockpile weapons and accessories. What could possibly go wrong?
The bitter irony of all this is that Facebook’s mission statement is to “bring the world closer together.” Apparently, that is achieved by stoking political paranoia and suggesting militarization as the answer.
Which brings us back to the topic of this week’s show—the rhetoric we hear from Silicon Valley, the stories they like to tell about what they’re doing, and how it fits with what they actually do.
There’s this whole mythology Silicon Valley has built around what they do, all these archetypes they appeal to, like the genius iconoclast with a bold vision for the future, the plucky little startup disrupting an outmoded industry Goliath, the inventor-entrepreneur who drops out of college at 19 to build a “unicorn” company. What this mythologizing does is distract from what they are actually doing. It allows them to dodge responsibility for all the problems to which they’ve contributed.
Our guest this week is Adrian Daub of Stanford University, author of a new book called What Tech Calls Thinking: An Inquiry Into the Intellectual Bedrock of Silicon Valley. Josh and Ray talk to him about all the ways Silicon Valley hides profit-driven behavior behind a veneer of shiny but philosophically suspect rhetoric. Tune in!