April 2023. GrowthPolicy’s Devjani Roy interviewed Bruce Schneier, Lecturer in Public Policy at Harvard Kennedy School and Fellow at the Berkman Klein Center for Internet and Society at Harvard University, on his new book A Hacker’s Mind, the future of AI, and his thoughts on regulating AI. | Click here for more interviews like this one.

 

Links: A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back (W. W. Norton, 2023) | Faculty page | Schneier on Security | January 2021 GrowthPolicy interview

 

GrowthPolicy: I’d like to talk about your brilliant, and timely, new book, A Hacker's Mind. In the book’s introduction, you write: “Security technologists look at the world differently than most people. When most people look at a system, they focus on how it works. When security technologists look at the same system, they … focus on how it can be made to fail.” Tell our readers what first made you interested in the psychology of security technologists and hackers. In other words, what is the origin story of this book?

Bruce Schneier: These threads have been percolating in my head for a while now. I started writing about the psychology of security around 2008. That quote is something I have been saying for decades. The notions of socio-technical systems and how they can be attacked are just as old.

In A Hacker’s Mind, I am taking what we know about hacking in the computer field and applying it to our broader world: laws, economics, politics, society. Our world is both complex and technical, and taking advantage of rules is common everywhere. (As an example, think of tax loopholes.) I am drawing out that idea, and adding notions of wealth and power.

 

GrowthPolicy: I loved your story of the medieval church’s use of ecclesiastical pardons and papal indulgences (in Chapter 16), using a limitless commodity as currency to prey on people’s fear of sin. In the present day, what should we know about technology’s capability to exploit people’s fears and vulnerabilities similarly?

Bruce Schneier: First, thank you for that. One of the most fun things about writing the book was the stories of hacking: from religion, from sports, from politics, from finance, from casino games … on and on and on. We humans have always been clever hackers. If there is a set of rules standing in our way, we try to get around them.

Technology—or, more accurately, people using technology—exploits many of our cognitive systems. Social media companies exploit how our brain decides what to pay attention to, and how we become addicted to things. Terrorism exploits fear. Modern politics exploits both authority and tribalism. It’s not that anything here is new; it’s that technology allows these exploitations to scale to an unprecedented degree.

I think of all of these as hacks of our cognitive functions. Our brains are optimized for the environment we experienced living in small family groups in the East African highlands in 100,000 B.C.E. We’re less optimized for Boston in 2023.

 

GrowthPolicy: In the book’s conclusion, you write about the perils posed by AI, stating: “The tsunami of hacks in our society reflects an absence of trust, social cohesion, and civic engagement.” In that vein, I’d like to ask about your views on the “Pause Giant AI Experiments: An Open Letter” (March 22, 2023), in which technology experts call for a six-month moratorium on AI development and deployment. What does the petition get right about the ethical concerns of Generative AI? Do you believe we need regulatory mechanisms in place for AI systems and AI labs?

Bruce Schneier: AI is going to change our world unlike anything we’ve experienced in our lifetimes, and it would be better if these systems weren’t optimized for the near-term financial interests of a handful of tech billionaires. The ethical concerns are huge, and important. Regulation is vital. Deliberation is vital. Understanding what we want from our future is vital. The six-month moratorium is a red herring—I don’t think any of the signatories expected it to actually happen—but it’s a useful starting point for a conversation.

I am advocating for an AI public option: a model funded by the government, designed in the open, and freely available. That will be an important counterbalance to the corporate models, and one that will become increasingly important as these AI systems start affecting how democracy functions.

 

GrowthPolicy: In what ways are chatbots, and GenAI more broadly, likely to dismantle traditional power structures in society? What implications do AI chatbots have for society as a whole that we will discover over the next century?

Bruce Schneier: That’s the question that 1) no one has any idea how to answer, and 2) everyone is desperately trying to answer. It’s clear that AI—even setting aside generalized AI—is going to dramatically change many aspects of our lives. And while we can point to a few of them, we really don’t know the extent of those changes. And we have no idea of the social changes that will result from those technological changes.

Will it dismantle traditional power structures? If previous technological revolutions are any guide: yes. How? We don’t know.

What I can tell you is to be cautious about what the current AI systems have to teach us about the future. We know a lot about the strengths, weaknesses, and limitations of a particular large language model—ChatGPT. That tells us nothing about LLMs in five years, or next year, or even at the end of this year. Research is moving so fast, and things are being deployed at such a breakneck pace, that it’s all going to change again and again. Never say: “The AI cannot do X.” Remember to say: “The AI I played with in Spring 2023 can’t do X.”

 

GrowthPolicy: You’ve written extensively on cybersecurity throughout your career. I’d like to ask about your views on banning TikTok on government devices and the recent Congressional hearing with TikTok CEO Shou Chew. What kind of precedent, good or bad, does banning an app set for the future? Do you believe users’ data is at risk, or is something larger at play here?

Bruce Schneier: We can’t ban TikTok. We don’t have the Internet censorship infrastructure necessary to enforce such a ban, and we’re not going to build a China-level surveillance system in order to do so. We could ban U.S. companies from doing business with ByteDance—the company that owns TikTok—but that won’t result in anyone not being able to use the service. (It would get the app off most people’s phones, so that’s a thing.) So largely, I view all the banning talk as posturing.

Nothing TikTok does is different from what Google and Facebook do. Even the algorithms pushing people towards extreme content: we see the same thing from the Facebook and YouTube (owned by Google) algorithms. We have built an Internet that runs on surveillance. It seems kind of late for us to start complaining about who in particular is doing that surveillance.

If we are serious about the risks of TikTok—and there are risks—I would like us to address surveillance capitalism as a whole. All of these companies are spying on us constantly, and all of them are using their recommendation algorithms to push us towards extreme content because that’s what makes them more money. The problem isn’t China; it’s the business models we’re allowing.