By acting as Silicon Valley’s conscience, Tristan Harris works to educate the public about how we interact with social media and AI. A digital interview.
Tristan Harris campaigns as a digitalization activist and is co-founder of the nonprofit Center for Humane Technology.
A former design ethicist at one of the world’s largest Internet corporations, Tristan Harris now campaigns as a digitalization activist and co-founder of the nonprofit Center for Humane Technology. He has joined the video call on his smartphone. It is plain to see that he isn’t stuck in a stuffy office. Harris looks relaxed, even cheerful. “If only everybody could see this fantastic view—the Arizona mountains, the sky and clouds. I can feel my batteries recharging right this moment.” Digitalization can be a great blessing. It offers greater flexibility and opens up new opportunities for organizing our lives and work. But—and this is the big “but” at the heart of Tristan Harris’ endeavors—if we are not careful, we’ll lose all this freedom before we even realize what we’ve gained.
For decades now, works of science fiction have stoked fears of a distant future in which artificial intelligence (AI) takes control of our society. “The other lesson implied by those scenarios is that it takes a hostile power to conquer us,” says Harris. “If I want to force my opponent into a certain corner, I must overpower him.” However, what Harris grasped as early as his childhood days performing magic tricks for fun, and what was cemented during his studies at Stanford University’s Persuasive Technology Lab, was that exerting control requires far less effort. “How do you create an illusion? All it takes is knowing one thing about the audience’s psychology that they don’t. And just like that, you can manipulate their behavior. There’s no need to attack them where they’re strongest. Just go for their weaknesses.”
According to Harris, that’s exactly the modus operandi behind most of the social media interfaces, e-mail programs and apps that are now so integral to billions of people’s daily lives. “What we all carry in our pockets wherever we go are not merely smartphones. They are robots that act on our neurological reward system in much the same way as slot machines in a casino.”
As a child, Harris learned magic tricks and performed as a magician at birthdays. That was when he realized for the first time how easily the human mind can be manipulated.
If we, as users, are unaware that our brains respond to variable rewards with a sure-fire release of happy hormones, then we are, of course, no match for the cybertech industry with its accumulated insights. As Harris explains, “With variable rewards, I’m pulling a lever and sometimes I get a juicy reward (ooh, exciting!) and other times I don’t.” Let’s look at just one example of how our most primitive brain functions can be used against us. When we check our virtual inbox every few minutes, swiping up or down to refresh, we are always hoping for a “reward.” Can you honestly call this a conscious use of communications technology? And are we really deciding for ourselves how we spend our time? If you ask Tristan Harris and many other researchers, the answer is a resounding no.
“Video streaming services, networking apps and news portals are all competing for our attention,” says Harris. This is the crux of the problem. After all, technology does not develop at random. Each innovation is one competitor’s response to another’s innovation. The rapid spread of fake news is one of the unfortunate side effects of this vicious cycle. “Anger boosts screentime far more effectively than contentment,” says Harris. We share the things that upset us with more friends, research them on more channels and go on consuming them obsessively. The relevant algorithms pick up on this. And keep feeding us more distressing content. While it need not always be fake news, one thing is for sure: It’s not the accuracy of the content that determines what appears on our pinboards and timelines. “Remember that a personalized newsfeed is not generated by people, but by algorithms,” adds Harris. “And they are not programmed to deliver what’s right or healthy for us, but instead what holds our attention for longer.”
Are there any solutions to this dilemma? Harris believes there are. “For starters, we all need to gain a better understanding of our mind’s vulnerabilities so that we can resist unhealthy impulses more effectively.” The design ethicist is essentially calling for a second Age of Enlightenment—this time, a digital one. “What’s more, we need new models of accountability,” continues Harris. “That means making the decision makers in the control rooms of big tech companies aware of their responsibilities. And ensuring they answer for their actions.” Finally, Harris argues—and is successfully winning over a growing number of Valley CEOs—for a “true design renaissance.” While that means consumer protection should be a top priority, it’s also about empowering users and offering them a more meaningful use of their time. We need common goals. What do users genuinely desire from their daily interactions with social media? Being sucked into endlessly watching videos and participating in increasingly heated forum discussions? Or rather active support in structuring their valuable time away from screens?
It’s a “race to the bottom of the brain stem,” Harris told his astonished audience in his keynote address at the virtual Audi MQ Summit in 2020.
For better or worse, artificial intelligence “optimizes” our behavior, knows our psychology, predicts and manipulates our desires. There is no doubt in Harris’ mind that it has long since outstripped us. And there’s no turning back the clock. Instead, we must now focus on implementing healthier values rather than simply trying to sell the maximum share of users’ attention spans to the highest-bidding advertiser. Harris puts it like this: “Doctors’ and lawyers’ expertise also gives them knowledge superior to that of their patients or clients. But professional ethics require that they undertake to act in their patients’ or clients’ best interest.” Harris believes that a paradigm shift in the tech industry is inevitable. Paired, of course, with responsible, careful user behavior. With that in mind, Harris gives us a practical tip in parting: “Self-denial isn’t the right approach with social media. Rather, consciously give yourself permission to go offline at several points throughout the day. Spend the time doing fun, exciting, relaxing things.” After all, that’s another way of stimulating the reward system.