HKS Professor Latanya Sweeney, a pioneer in the fields of data privacy and algorithmic bias, works to ensure everyone is treated fairly by the technology that increasingly rules our lives.

Featuring Latanya Sweeney
31 minutes and 54 seconds

Harvard Kennedy School Professor Latanya Sweeney is a pioneer in the fields of algorithmic fairness and data privacy and the founding director of the new Public Interest Tech Lab at Harvard University. The former chief technology officer for the U.S. Federal Trade Commission, she’s been awarded three patents and her work is cited in two key U.S. privacy regulations, including the Health Insurance Portability and Accountability Act (HIPAA). She was also the first black woman to earn a PhD in computer science from MIT, and she says her experiences being the only woman of color in white male-dominated classrooms and labs may have contributed to her uncanny ability to spot racial and gender bias, privacy vulnerabilities, and other key flaws in data and technology systems.

About the “Systems Failure” Series:

To kick off the fall 2021 season, we’re launching a mini-series of episodes built around a theme we’re calling “Systems Failure.” Our conversations will focus on how the economic, technological, and other systems that play a vital role in determining how we live our lives can not only treat individuals and groups of people unequally, but can also exacerbate inequality more generally in society. We’ll also talk about strategies to change those systems to make them more equitable.

Hosted and produced by

Ralph Ranalli

Co-produced by

Susan Hughes

This episode is available on Apple Podcasts, Spotify, and wherever you get your podcasts.

Ralph Ranalli (Intro): Hello, and welcome to the new season of the Harvard Kennedy School PolicyCast. I'm Ralph Ranalli, your host. Since we launched the new version of PolicyCast two years ago, we've brought you insight and groundbreaking research from Harvard Kennedy School faculty on a broad range of important issues, ranging from the complex relationship between technology and society, to workplace bias, to pandemic responses, to the climate crisis. To start the new academic year, we're beginning this fall season with a mini-series of episodes built around a theme we're calling “Systems Failure.” Each episode will feature a different HKS faculty member with a unique expertise in areas including economics, technology, and healthcare. Our conversations will focus on how the systems that play a huge role in determining how we live our lives not only treat individuals and groups of people unequally, but can also exacerbate inequality more generally in our society. We'll also talk about strategies to change those systems to make them more equitable.

Latanya Sweeney (Intro): I found that these ads, when you typed in a person's name, if the first name was given more often to black babies than white babies, these ads started popping up claiming that the person had an arrest record even if they didn't, and even if the database from the company had no one with that name having an arrest record.

Ralph Ranalli (Intro): First up, we're delighted to welcome Professor Latanya Sweeney. Professor Sweeney recently launched the new Public Interest Tech Lab at Harvard, and she's a pioneer in the fields of algorithmic fairness and data privacy. A former Chief Technology Officer for the US Federal Trade Commission, she's been awarded three patents and her work is cited in two key US privacy regulations, including the Health Insurance Portability and Accountability Act, better known to most of us as HIPAA. She has an undergraduate degree in computer science from Harvard, was the first black woman to earn a PhD in computer science from MIT, and she says her experiences being the only woman of color in male-dominated classrooms and labs may have contributed to her uncanny ability to spot bias, privacy gaps, and other flaws in data and technology systems.

Ralph Ranalli: So welcome to PolicyCast. It's really great to have you here.

Latanya Sweeney: Thank you. It's great to be here.

Ralph Ranalli: I was reading over some of the descriptions for the course you teach, and it said that we live in a new kind of technocracy, a society in which technology design dictates the rules that govern daily life, and that algorithms influence healthcare, housing, insurance, education, employment, banking, and policing. How much do we really know about the extent of algorithms’ and AI's influence in our lives right now?

Latanya Sweeney: Well, the influence of technology in our daily life is just transformative. It's impossible to realize how influenced we are by technology without thinking about what technology we didn't even have just a couple of decades ago. So if I think of around 1990, there was no internet. There was no such thing as a personal computer, or a laptop, or a mobile phone. And now today it's really hard to even find a payphone if you're traveling around. That's how ubiquitous mobile phones are. So that's also true of all aspects of our lives, including our cars, which are run by computers and increasingly are becoming computers on wheels.

Ralph Ranalli: If you think of it like an iceberg, what's above the water is the stuff we know about—the technology that is making decisions about us, and for us, and gathering our data. But what's below the water is what technology is doing that we can't see. How much is above the water and how much is below the water that we don't know about—and that we may need to be concerned about?

Latanya Sweeney: So the part that's above the water are the parts that we can see, the things that we engage with, the things that have our focus of attention. And I was talking to my 13-year-old son, and we were talking about the places that you go on a website, or how many places can know that you visited a particular website. So his thinking is, "Well, look, I went to this website, who else would know that?" And we began listing all of the places and all of the ways and who would know a person visited a particular website. Everyone from the internet service provider, to various add-ons that he had running on his browser, to lots of other kinds of companies that use cookies to track him. And when we looked at the number that were tracking him, it really blew his mind. He had no idea, I think when we completed the list, that it was over 100 different entities that we could identify.

Ralph Ranalli: You're kidding.

Latanya Sweeney: ... that knew that he had visited this arbitrary website.

Ralph Ranalli: That's incredible. 100 different entities following you around as you just go visit this one website.

Latanya Sweeney: Exactly.

Ralph Ranalli: What is the scariest thing to you right now that can be legally done with our data? Can you give me an example or two of something that we really should be concerned about?

Latanya Sweeney: I think the things that come to mind first have the most to do with life and liberty. So, to what extent and how the government uses personal information, how the government can track individuals. That I think is the scariest, because in the hands of a group—especially as we become more and more politically polarized—I worry that the data itself can become weaponized and used against people in ways that we can only think of through examples like Nazi Germany and so forth.

So one, I do worry a lot about those kinds of situations. And then the other kind of data that I worry about a lot is all of the data sharing and the ways to distort information that we don't think about, and our ability to be impersonated all the time. So disinformation in one way is where the information we receive is distorted; but also our own identities can easily be distorted online and can cause a catastrophe in commercial and government systems. So those are the things that I worry about the most. And then the second tier is that I worry a lot about disinformation, how the communication systems have really turned against us.

Ralph Ranalli: Right. I think you did a paper not too long ago on the government and democracy piece, and something like 35 states had voter information sites that could be hacked and people's voter information could potentially be changed or altered. Can you talk a little bit about that?

Latanya Sweeney: Yeah. So in 2016, a group of students and I followed the election, trying to figure out ways that technology could be used to cause problems for the election. So we were among the first to find those bots on Twitter that were spreading misinformation. But later, as we got towards the summer, we began hearing about situations where people would go to a primary to vote and their information had been changed in the database. So if it was a closed primary, where Republicans get a Republican ballot and Democrats get a Democratic ballot, Republicans were showing up and getting everything but a Republican ballot. They were getting a Green [Party] ballot or a Democratic ballot. So people were really alarmed, because now they were basically disenfranchised. So the question for us became: How could that be done at scale? Could it be done at scale? And we were shocked to find out that over 35 states and the District of Columbia had websites that allowed someone to go online, identify themselves as the voter, and change just the person's address. And then when the voter shows up to vote in person, because they wouldn't know that they're supposed to go to some other polling place, their vote won't be counted, they'll start yelling and screaming, and they'll end up with a provisional ballot, but in most states provisional ballots don't count. So we found that for $9,000, you could do this at scale and shave a few percentage points off of elections in each state.

Ralph Ranalli: Which could be enough to make a difference.

Latanya Sweeney: Right. Because look, how many states were decided within a few percentage points?

Ralph Ranalli: Since that time, has there been any effort to close those gaps, those loopholes, those exposure points where those attacks could happen that you know of?

Latanya Sweeney: For most of those states, the vulnerability is still there. So when the 2020 election came along, realizing that the vulnerabilities were still there, we rolled out a technology called VoteFlare. And what VoteFlare does is let voters sign up for it, and it will monitor their voter registration or their mail-in ballot and let them know if something changes on it. It's sort of like credit monitoring, but for voting systems. And it was used in the Georgia runoff election. And we got a lot of email from the people who signed up for it about how it got rid of a lot of their anxiety. So it was very effective.

Ralph Ranalli: And that came out of your work with your students at Harvard. Right?

Latanya Sweeney: That's right. That's right. I've just had this uncanny history of being the first to see an unforeseen problem, shed a light on it, and then get 1,000 great minds to follow. So I had done that with privacy, and did that with the voting example, and I also did it with algorithmic fairness by showing discrimination in online ads. But when you think about the experiments, they're not particularly that complicated. So when I left the Federal Trade Commission and came back to Harvard, I realized that I could teach students how to do this. How to look for the unforeseen, how to do a scientific experiment to shed light on it and change the world and literally have impact. So that's what we've been doing for the last few years. We have a whole program in the government department called the Tech Science Program, where students have done the same kind of work, and the work from those students has changed business practices, changed laws, and so forth. And now we're doing those courses at the Kennedy School as well.

Ralph Ranalli: So I remember you said, when you first launched the new Public Interest Technology Lab, that despite all this great work on things like VoteFlare.org, and despite your three patents and your work being cited in federal privacy regulations, including HIPAA, you were leaving more problems unsolved than you could address. Can you talk a little bit about that, and a little about what's different about the Public Interest Technology Lab and how you're going to scale up, I guess is the way to put it?

Latanya Sweeney: Yeah. So the students had quite a bit of success in the undergraduate program. Every year I teach a class the students call the “save the world” class, and that's where we take five real-world problems and we do a scientific experiment. And then we sort of push for action using those results in the media and in government and in business practices. So every year we do five problems in that class. But every year, as I choose the five problems, I'm leaving more and more fantastic problems not done. So the number just kept getting bigger and bigger. So the question is, if there's this much low-hanging fruit and this many problems that really need addressing, how do we do this at scale? It can't just be one army of students coming through classes at Harvard College. So the Tech Lab, the Public Interest Tech Lab, I'll just say it, techlab.org, the goal of it is to do this at scale. That is, to onboard other students from around the country, if not from around the world, as well as scholars, to do the same kind of work so that we can do it at scale to really make a difference. So we sort of have three pillars. We have the projects that we do ourselves. We also have an “equip people” pillar, where we work with startups, we work with individual students, and so forth. We give them ideas for projects to do. We provide resources for them, including technical know-how and the technological tools that are needed, so that they too can have an opportunity to help us shed light on some of these problems.

Ralph Ranalli: How does it help students, scholars, and faculty that are going to be part of this scaled-up effort to have that hands-on experience working directly with the technology, the algorithms, the programs, the tools, et cetera? How does that make it more effective?

Latanya Sweeney: Well, it does on two different levels. So we have many brilliant minds out there thinking about problems around technology and society. But a lot of times a researcher, especially a social scientist, has to treat the technology as a black box. And while they think about the issues that come up and so forth, they don't really have a good way of knowing how to look inside or what tweaks might make all the difference. So one of the things the tech lab does is give hands-on experience to scholars who are working on technology-society problems. How does that actually work? What are the pieces to it? We make that accessible to them, and it often will give them insight into that black box, so that they don't have to only think about policies on the outside. They can think about design changes that might be easier to make on the inside.

Ralph Ranalli: Can you give me an example of one of those situations where that direct interface with the technology led to that kind of deeper insight?

Latanya Sweeney: Yeah. I can give you some from my time at the Federal Trade Commission. So when we walk around with cell phones, right now they're constantly yelling out a unique identifier, looking for any wifi connections that might be out there. And the goal of that is so that when I go to use my phone, I don't have to wait for it to scan the wifi networks; it'll immediately connect to a known wifi network, or I can pull down the list of the ones that it sees, and they're already there without a delay.

So that's just a wonderful kind of usability feature. But a lot of retail stores and other places like malls often use those identifiers to track individuals. So as your phone is yelling out its identifier, they can take three of them and triangulate to where you physically are in the store, so they can know what you're looking at, or what you spent the most time on, or what other stores you went into, or whether or not you were at the register and bought anything. So at the FTC, at that time, we were looking at what would or should be the rules for that. Should people have to opt in to that, or should people be able to opt out of it? And it turns out that there are all these great technology alternatives inside the phone. So one alternative, which Apple now uses, is that the phone doesn't have to keep yelling the same identifier all the time; it can change that identifier. Which means that it's harder for someone to track you by name, even though you still might enable their tracking. One could imagine a change in the phone that had a push button to immediately put my phone in airplane mode, which would turn off the wifi while I'm roaming around. And while we were at the FTC, by looking at how a phone is actually designed, we came up with many other possible changes to the design that could allow an individual to have control over whether they wanted to be tracked in this way or not.

Ralph Ranalli: I guess that leads to a question: if the solution to the problem is already in the phone, why wasn't the problem anticipated by Apple, or whoever made the phone, or the service provider, if the fix was already baked into the technology before it was put out there? Is nobody on the corporate end thinking about privacy first? How do we bake more privacy into those kinds of decisions and not keep having to fix things at the back end?

Latanya Sweeney: Right. And this is a fantastic point, because if, while designing the phone, you had thought about any of this, you could have just made it easier and avoided this conflict for society altogether. So one of the ways we can solve our technology-society clashes is, at the time we're designing the technology, thinking about the longer-term consequences and what they might be. And one of the things we teach in the Kennedy School course is how to do that, actually: how companies can do that and why they should do that. But the reason it doesn't happen now is not so much the how; it's a matter of motivation. Though some of it is the how: we don't train technologists to do this, we don't train technology developers or managers to do this, or to understand that it's important and that they can do it easily and effectively.

The other piece of it, too, is that the way we design and develop technology is very rapid, and it doesn't allow a lot of time for quality control. In most instances, like a website, it just goes live and they're fixing it and tweaking it and updating it in real time and so forth while we're actually using it. So this sort of speed to market is another reason why there needs to be that momentary pause where you say, "What are the adverse consequences and how might we address them?"

Ralph Ranalli: Right. It's sort of the building-the-airplane-while-you're-flying-it ...

Latanya Sweeney: That's right.

Ralph Ranalli: ... scenario. But how do you change that? Is that government intervention? Setting regulatory standards? Or is it better education, retraining, or training the next generation of technologists to take those things into account? What's the fix there?

Latanya Sweeney: I think the best fix ... in our class we teach this process called stakeholder design. And the idea is that this is a way that businesses can make a promise to society about how compliant or how non-disruptive the technology will be. I'd like to see us move towards this, like the way we have warranties on regular products. Let us know what the company is warranting: that it's not going to have this privacy problem, or that it's not going to have that other privacy problem. So part of this then becomes a kind of ... that might be what a regulatory space might look like.

But in a non-regulatory space that just has market-driven solutions, it has to do with a kind of backlash that's brewing. In the earlier years, about a decade or two ago, if you and I were having this podcast, people would not have had their own sense of creepiness about the new shiny technology. Instead, they were just really enjoying all the benefits of it. But now when I talk to students who are in their 20s, they feel there is a creepiness to it, and they are much more aware of the problem. So for the viability of the technology itself, I think companies should be increasingly motivated to look for these clashes and make sure they don't occur with their products.

Ralph Ranalli: A lot of this, though, has to do with the way that technology development is driven. And backlash is a good thing, but to have technology developed in the public interest, is there just an intrinsic tension between the notion of technology in the public interest and technology for profit? Isn't there just always going to be that tension?

Latanya Sweeney: I don't know that I would call it a tension. So public interest technology has, well, we could think about it as having sort of two major phases. One major phase is to build the technology. Let's say three phases. One is technology that's for the public's purpose. So the VoteFlare example is such a technology. That's a technology that we built to say, "I want people to be comfortable and aware that if anything changes with their voter registration, they'll be the first to know, and that they can be proactive." That's not something that a technology company had already decided it was going to build. We didn't stop and say, "How are we going to make money off of this?" We said, instead, "This is something society needs right now, so let's provide it." So that's one way we could think of public interest technology.

Another way we could think of public interest technology, or another way it manifests itself, is the earlier part of our conversation: the unforeseen consequences, exposing problems that exist in the technology. That's also an issue of: What is the public's interest in the technology? We have an interest that wasn't thought about by the manufacturer when they were developing it. We have an interest that wasn't a part of the market choice that I could make, because this is the only product that does this, this, or this, but I still have these other consequences that are adversely affecting society or changing the rules that we actually live by.

So by doing these experiments and exposing these problems, we do that in the public interest, to make the technology more accountable to the public's interest. And then the third category of public interest technology is technology manufacturers themselves, who say, "We're going to make sure our technology is compliant. And if it's not compliant, we're going to keep altering it to make it better and more compliant with society."

Ralph Ranalli: So we've talked a lot about privacy, and obviously that's a huge concern, but there's also a significant part of your work that has been about exposing and addressing how racial and other forms of bias are embedded in algorithms and AI and other technology systems. In your work, for example, you've shown that AI facial recognition programs weren't accurately seeing dark-skinned faces. And you also showed that Google searches for predominantly black names were more likely to yield advertisements associated with arrest record databases compared to searches for predominantly Caucasian names. How do racism and sexism and other forms of bias make their way into code?

Latanya Sweeney: Yeah, this is a great question. But let me just give some context, so people can really appreciate the relationship between privacy and algorithmic fairness, or bias in algorithms. So I did a simple experiment where I was able to show how data that was supposed to be anonymous really wasn't, and this had a profound effect and sort of ignited the whole field known as data privacy, and laws around the world changed because of that experiment. A couple of decades later, I found that these ads, when you typed in a person's name, if the first name was given more often to black babies than white babies, these ads started popping up claiming that the person had an arrest record—even if they didn't, and even if the database from the company had no one with that name having an arrest record.

So that ignited these issues around how algorithms can embed bias and inflict bias and jeopardy in society on their own, independent of what government rules are. Like in those ads, that's illegal. That's a violation of the Civil Rights Act. Joy Buolamwini at MIT was also able to show that face recognition software was trained primarily on white male faces, so as a result it's horrible at detecting darker-skinned people, and darker-skinned women would be the worst. And since then, algorithmic bias discoveries have just gone on and on. So then by 2016, I found those election websites. So when I look at those three big experiments and the impacts that they had for privacy, algorithmic fairness, and elections, it says that every democratic value now is up for grabs. Privacy was just the first wave. Algorithmic fairness was the second wave. And now every democratic value that we hold dear is up for grabs by what technology design allows or doesn't allow.

Ralph Ranalli: That's pretty frightening.

Latanya Sweeney: It's unfortunately very true.

Ralph Ranalli: I'm interested in how these issues of bias in technology systems were informed by your own personal history. You were the first black woman to earn a PhD in computer science from MIT. You also got your undergraduate degree in computer science from Harvard. Your PhD was in 2001, yet 20 years later we're still talking about racism and misogyny being endemic in the tech sector. What was your experience like, your personal experience as a black woman making your way up in that world?

Latanya Sweeney: Oh my God. That's a session all by itself. So like you're pointing out, just to give people context: when I was at MIT, I would usually be the only black person, or the only woman, in my class. And it created lots of problems for people who thought that to be like them intellectually meant that you had to look like them or be like them. So as a result, it was very difficult, because you just didn't have the support structures that others had. It was much, much harder. When I came to Harvard as a teaching fellow and you'd walk into a classroom, you would just see a sea of white guys. And now you walk into a classroom at Harvard and it just seems like you walked into the United Nations. These are the best and brightest from around the world. So time has changed things in many ways. But when we look at the faculty at Harvard or the faculty at MIT, we don't see that diversity. So we still have a long way to go.

Ralph Ranalli: And I was interested in this area also because you said you seem to have this knack for finding these problems and issues first. Do you think the fact that you were so different from sort of the white-guy hive mind you were adjacent to when you were studying computer science influenced your ability to think about things differently, or helped you think about things differently?

Latanya Sweeney: That's a fantastic question. I don't actually know what to attribute it to. Why is it that I always see things in a different way? But I've been able to teach other people to see it that way too. So I don't know whether it's a systems kind of view, that by looking at the world in a more holistic way there's an action, there's a reaction, what could be going on, or whether it's related to my history.

I often think about my great-grandfather. I was raised by my great-grandparents, which is already an oddity, and then you had another oddity that they were born in 1899 and 1900. And they had lived all their lives in the south, through Jim Crow and so forth. And my great-grandfather had these rules about ways he had found to survive. And a lot of those rules came down to the benefits of anonymity in a society that is hostile to you. So I do think certainly some of my background plays into that, not just because of the privacy aspect, but also understanding the importance of democratic values and the ways in which people have to have the space to function.

Ralph Ranalli: That's fascinating how even more than a century ago, it was privacy that was a protection for people who were different. And maybe we've just changed a bit, advanced a bit technologically, but it seems like in a very fundamental way, what you're saying is that that's still true. That privacy is still a big factor in protecting people who are not part of the privileged majority for whom the algorithms are written.

Latanya Sweeney: And I would take that even further. Normally when someone hears that statement you just made, "for the privileged" ... who now are the privileged? We're all like my great-grandfather. No matter our race, no matter our income, the privileged are the big tech companies themselves. They're the only ones who really are in control.

Ralph Ranalli: So what do you think is the best path to getting the big technology companies to address these issues and make sure that we all have those privacy protections that keep us from being taken advantage of?

Latanya Sweeney: I think it would be economic alternatives, and the reason for that is that our government is not proactive on these slow-burning problems. If you look at climate change, we could go on and on, but we're not quick to respond and correct for problems, and technology is like that. But technology is moving so fast that we can't hold out for some policy panacea that is somehow going to save the day. I don't think that's realistic. So there are things we can do in policy to help, and we should be doing them, and we should fight for those things. But I think the biggest overarching change has to be one that happens in the marketplace itself, where alternatives or newer technologies that come out are making guarantees about their fit for society, or society is making demands on what's appropriate technology and people are voting with their wallets.

Ralph Ranalli: And how do you push things in that direction?

Latanya Sweeney: Well, we're trying. That's one of the initiatives at the Tech Lab. Like in terms of privacy, we have a project called MyDataCan, where people still use apps and web services just like they do now. But the difference is that a copy of all the data about you is stored in your own personal data can for your own use. So that makes you, one, aware of the data that's out there. And two, you have control and ownership of your copy of the data, so that you can begin to change the way companies operate in terms of data about yourself. Time will have to tell whether or not it becomes a market solution or a new direction in the marketplace. It's just a brand-new project. It's only been out about a year, but it's already gotten a lot of traction. So we'll keep an eye on it and we'll see, but we expect other things like that. I know Apple has certainly been trying to market itself as a privacy company, trying to make a differentiation in the marketplace. So we may see more of that. And I think those are all good signs of moving us in the right direction.

Ralph Ranalli: Well, thank you so much. This has been a really fascinating conversation, and I really appreciate your taking the time to be with us.

Latanya Sweeney: Well, thank you very much.

Ralph Ranalli: Thanks for listening. Please join us for our next installment of the Systems Failure series, featuring Harvard Kennedy School Professor Jason Furman, who recently testified before Congress and called inequality the fundamental challenge facing the US economy. If you'd like to learn more about our podcast, please visit our page on the main Harvard Kennedy School website. And if you have a question or a suggestion, please send us an email at policycast@hks.harvard.edu.