A conversation about equity and what it takes to make effective AI policy. This episode was taped before a live audience at MIT Technology Review’s annual AI conference, EmTech Digital.
We Meet:
- Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution
- Anthony Green, producer of the In Machines We Trust podcast
Credits:
This episode was produced by Jennifer Strong, Anthony Green, Erin Underwood and Emma Cillekens. It was edited by Michael Reilly, directed by Laird Nolan and mixed by Garret Lang. Episode art by Stephanie Arnett. Cover art by Eric Mongeon. Special thanks today to Amy Lammers and Brian Bryson.

Full Transcript:
[PREROLL]
[TR ID]
Jennifer Strong: The applications of artificial intelligence are so embedded in our daily lives it’s easy to forget it’s there … But these systems, like the ones powering Instagram filters or the cost of a car ride home … can rely on pre-existing datasets that fail to paint a complete picture of consumers.
It means people become outliers in the data, often the same people who’ve historically been marginalized.
It’s why facial recognition technologies are least accurate on women of color, and why ride-share services can actually be more expensive in low-income neighborhoods. How do we stop this from happening?
Well, would you believe a quote from Harry Potter and his wizarding world … might make a great starting point for this conversation?
I’m Jennifer Strong, and this episode our producer Anthony Green brings you a conversation about equity from MIT Technology Review’s AI conference, EmTech Digital. We’ll hear from Nicol Turner Lee, the director of the Center for Technology Innovation at the Brookings Institution, about what it takes to make effective AI policy.
[EPISODE IN]
Anthony Green: There’s a quote from Harry Potter, of all places.
Nicol Turner Lee: Oh Lord, I, I, I haven’t seen the Harry Potter episodes since my kids were little, so I’ll try.
[Laughter]
Anthony Green: Oh man. Uh, it’s a pretty good one. No, it’s, it’s just kind of stuck with me over the years. I’m honestly not even otherwise a huge fan, but, um, the quote goes, there will be a time when we must choose between what is right and what is easy, and it feels like that applies pretty directly to how companies build these systems. I guess my question is: how can policymakers, right, start to push the needle in the right direction when it comes to favorable outcomes for AI in decision making?
Nicol Turner Lee: Well, that’s a great question. And again, thank you for having me. You may be wondering why I’m sitting here. I’m a sociologist. I’ve had the privilege of being on this stage for a couple of conferences here at MIT. I got into this … And before I answer your question, because I think the quote that you’re referencing points to much of what my colleagues have talked about, which are the sociotechnical implications of these systems.
Anthony Green: Mm-hmm.
Nicol Turner Lee: So I’ve been doing this for about 30 years. And part of the challenge that we’ve had is that we’ve not seen equitable access to technology. And as we think about these emerging advanced systems, to your point, we have to think about the extent to which they have effects on regular everyday people, particularly people who are already marginalized. Already vulnerable in our society. That quote has a lot of meaning, because if we’re not careful, the technology in and of itself will sort of outpace, I think, some of the progress that we’ve made when it comes to equity and civil rights.
Anthony Green: Yeah.
Nicol Turner Lee: Um, I’m gonna date myself for just a minute. I know I look a lot younger. When I was growing up I used to run home and watch the Jetsons. There were two cartoons. I watched Fred Flintstone, which, if you all remember, he rode around in a car with rocks, and I watched the Jetsons.
Anthony Green: Powered by his feet.
Nicol Turner Lee: I know! You’re too young to know about Fred Flintstone.
Anthony Green: Oh, Boomerang.
Nicol Turner Lee: But, but if you watch. You know, Fred Flintstone is antiquated, right?
Anthony Green: Right.
Nicol Turner Lee: The, the rocks as wheels doesn’t work.
Anthony Green: Yeah.
Nicol Turner Lee: The Jetsons is actually realized. And part of the challenge, and the reason that I have interest in this work beyond my, you know, PhD in sociology and my interest in technology, is that these systems now are so much more generally purposed that they affect people when they are contextualized in environments.
And that’s where I think we have to have more conversations that point to your question. In a roundabout way, I think it’s really important that we have these conversations now, before the technology accelerates itself.
Anthony Green: Hundred percent. And I mean, you know, all of that said, right, policymaking alone isn’t going to be the only solution needed to fix these problems. I would love it if you could speak to how accountability, particularly on the part of industry, comes into play as well.
Nicol Turner Lee: Well, the problem with policymakers is that we’re not necessarily technologists. And so we can see a problem, and we really sort of see that problem in its outcomes.
Anthony Green: Yeah.
Nicol Turner Lee: So I don’t think there’s any policymaker, or very few beyond people like Ro Khanna and others, right, who really know what it’s like to be in, in the tech space.
Anthony Green: Sure.
Nicol Turner Lee: Who understands how these outcomes happen. They don’t understand what’s under the hood. Or as people say, and I’m trying to move away from this language, it’s not really a black box. It’s just a box.
Anthony Green: Right.
Nicol Turner Lee: Because there are some, uh, judgments that come with calling it a black box. When you think about policy and those outcomes, you have to say to yourself, how do policymakers sort of take an organic, iterative model and then legislate or regulate it? And that’s where people like me who are in the social sciences, I think, come in and have much more conversation on what they should be looking for. Um, so the accountability there is hard.
Anthony Green: Yeah.
Nicol Turner Lee: Because nobody is talking the same language as many of you in this room. The technologists are sort of rushing to market. I call it permissionless forgiveness. Uh, as my colleague at the Center for Technology Innovation, Tom Wheeler, has that great phrase, “build it and then break it and then go back and fix it.” Well, guess what happens? That’s permissionless forgiveness. Cuz what happens? We say we’re sorry when people have foreclosed, uh, mortgage rates, are in criminal justice systems where they’re detained longer because these models determine those predictions.
Anthony Green: Right.
Nicol Turner Lee: So policymakers have not quite, Anthony, reached the pace of innovation. And we’ve said that for years, but it’s actually true.
Anthony Green: Absolutely. I mean, you’ve described this issue in the past as a civil and human rights issue.
Nicol Turner Lee: It is. It is.
Anthony Green: I mean, can you kind of, like, expand on that and how that’s kind of shaped your conversations about policy?
Nicol Turner Lee: You know, it’s shaped my conversations from the standpoint of this. I, I, you know, shameless plug, I have a book coming out on the US digital divide, so I’ve been very interested. I call it, uh, Digitally Invisible: How the Internet Is Creating the New Underclass. And it’s really about the digital divide going beyond the binary construction of who’s online, who’s not, to really thinking about what are the consequences when you are not connected.
Anthony Green: Right.
Nicol Turner Lee: And how do these emerging technologies impact you? To your point, I call it a civil rights issue because what the pandemic demonstrated is that without internet access, you were actually unable to get the same opportunities as everybody else. You could not register for your vaccine. You could not communicate with your friends and family. Fifty million school-aged kids sent home, 15 to 16 million of them could not learn. And now we’re seeing the effects of that.
Anthony Green: Yeah.
Nicol Turner Lee: And so when we think about artificial intelligence systems that now have replaced, what I call the death of analog. Replaced, uh, you know, how we used to do things face to face. We’re now seeing, in a civil rights era, laws that are being broken. And that, in ways that I, I don’t necessarily attribute to the impropriety of technologists. What they’re doing is they’re foreclosing on opportunities that people have fought hard for.
Anthony Green: Sure.
Nicol Turner Lee: 2016 election. When we had foreign operatives come in and manipulate the content that was available to voters. That was a form of voter suppression.
Anthony Green: Right.
Nicol Turner Lee: And there was no place that those folks could go to, like the Supreme Court or Congress, to say my vote was just eliminated based on the deep neural networks that were associated with what they were seeing.
Anthony Green: Yeah.
Nicol Turner Lee: Or the misinformation around voting. We’re now at a state … when you are in a city like Boston and an Uber driver doesn’t pick you up because he sees your face in the profile. Where do you go for the type of, um, you know, the benefits of, of the civil rights regime that we have that was not based on a digital environment? Part of my work at Brookings has been how do we look at the flexibility and agility of these systems to apply to emerging technologies. And we have no simple answer, because these rules were not necessarily developed, you know, in the 21st century.
Anthony Green: Right.
Nicol Turner Lee: They were developed when my grandfather told me how he walked to school with the same pair of shoes, where the bottom was out, because he wanted an education. We don’t have that today. And I think it’s worth a conversation as these technologies become more ubiquitous. How are we developing not just inclusive and equitable AI, but legally compliant AI? AI that makes sense, that people feel they have some recourse for that impropriety. I’ll talk a little bit about some of the work we’re doing on there, but I think, you know, there’s a cadre of people like myself, some of them here at MIT, that are really trying to figure out how do we go back and hold people accountable to the civil and human rights of folks, and not allow the technology to be the fall guy when it comes to, you know, why things wreak havoc or go wrong.
Anthony Green: Don’t blame the robots.
Nicol Turner Lee: You know! I tell people robots don’t discriminate. I’m sorry. You know, we do, and, and there’s something to be said about that when we start looking at civil rights.
Anthony Green: I’m gonna go to the audience. Anybody got a question?
Rene, audience member: Thank you so much. Renee, from São Paulo, Brazil.
Nicol Turner Lee: Hey!
Rene, audience member: There is a common theme in these last conversations. It’s about invisibility.
Nicol Turner Lee: Yes!
Rene, audience member: There are many ways to be invisible. If, if you have the wrong badge, you are invisible, like Harry Potter. If you are too old, if you have the wrong type of skin. And there’s one very interesting thing. When we talk, we talk about data and AI. AI is proposing things based on data that are available.
Nicol Turner Lee: Yeah.
Rene, audience member: But there are data that are totally invisible, about people who are invisible. What kind of solutions are we building if you are basing them on data, based on data about all, always the same people? How do we bring visibility to everybody?
Nicol Turner Lee: Yes!
Rene, audience member: So, thank you so much.
Nicol Turner Lee: No, I love that question. Can I jump right in on this one?
Anthony Green: Go for it.
Nicol Turner Lee: You know, uh, my colleague and friend Renee Cummings, who is the AI, uh, researcher in residence at the University of Virginia. She introduced me to, a few months ago, and we did a podcast where she was featured, this concept of what’s called data trauma.
Anthony Green: Mmmm.
Nicol Turner Lee: And I wanna sort of walk you through this, because it blew me away when I began to think about the implications, and it goes to Renee’s question. What does it mean, you know, when we talk about AI, we often talk about the problem development, the data that we’re training it on, the way that we’re interpreting the outcomes or explaining them, but we never talk about the quality of the data, and the fact that the data in and of itself holds within it the, the traumas of our society. I don’t care what people say. If you are training AI on criminal justice, um, issues, and you’re trying to make a fair and equitable AI that recognizes who should be detained or who should be released. And we all know that particular algorithm I’m talking about. If it is trained on US data, it is disproportionately going to overrepresent people of color.
So although my friends, and I tell everybody this, so you know, like, she’s not coming in here, you know, being mad. I tell everybody you need a social scientist as a friend. I don’t care who you are. If you are a scientist, an engineer, a data scientist, and you don’t have one social scientist as your friend, you’re not being honest to this problem, right? Because what happens with that data? It comes with all of that noise. And despite our ability as scientists to sort of tease out that noise or diffuse the noise, you still have the basis and the foundation for the inequality. And so one of the things I’ve tried to tell people, it’s probably okay for us to acknowledge the trauma of the data that we’re using. It’s okay for us to realize that our models will be normative in the extent to which there will be bias. Technical bias, societal bias, outcome bias and prediction bias, but we should disclose what those things are.
Anthony Green: Yeah.
Nicol Turner Lee: And that’s where my work in particular has become really interesting to me, as a person who is looking at this as, you know, the use of proxies and the use of data. For me, it becomes what part of the model is much more harmful to participants and to outcomes. And what part should we disclose, that we just don’t have the right data to predict accurately without some type of, you know, risk …
Anthony Green: Sure.
Nicol Turner Lee: … to that population.
Anthony Green: Yeah.
Nicol Turner Lee: So to your question, I think if we acknowledge that, you know, I think then we can get to a point where we can have these honest conversations on how we bring interdisciplinary context to certain scenarios.
Anthony Green: We’ve got another question.
Kyle, audience member: Hello Nicol.
Nicol Turner Lee: Hey.
Kyle, audience member: I’m grateful for your perspective. Um, my name is Kyle. I run … I’m a data scientist by training, and I run a team of AI and ML developers and designers. And so, you know, it scares me how fast the industry’s evolving. You mentioned GPT-3. We’re already talking about GPT-4 being in the works and the rapid leap in capabilities that’s gonna introduce. Something that you mentioned that really struck me is that legislators don’t understand what we’re doing. And I don’t believe that us as data scientists should be the ones making decisions about how to tie our hands behind our backs.
Nicol Turner Lee: Yeah.
Kyle, audience member: And how to protect our work from having unintended consequences.
Nicol Turner Lee: Yes.
Kyle, audience member: So how do we engage, and how do we help legislators understand the real risks and not the hype that is often heard or seen in the media?
Nicol Turner Lee: Yeah, no, I love that question. I’m actually gonna flip it. And I’m gonna talk about it in two ways that I actually talk about it. I do think that legislators have to work in this space, particularly in those sensitive use cases.
So I tell people, I give this example all the time. I love shopping for boots, and I’m okay with the algorithm that tells me as a consumer that I love boots. But as Latanya Sweeney’s work has suggested, if you associate other things with me. Uh, what other, uh, attributes does this particular person have? When does she buy boots? How many boots does she have? Does she check her credit when she’s buying boots? What kind of computer is she using when she’s buying her boots? If you begin to make that cumulative picture around me, then we run into what Dr. Sweeney has talked about: these associations that create that type of risk.
So to your first question, I think you’re right. That policymakers should actually define the guardrails, but I don’t think they need to do it for everything. I think we need to pick those areas that are most sensitive. The EU has called them high risk. And maybe we might take from that some models that help us think about what’s high risk and where we should spend more time, and potentially policymakers, where should we spend time together?
I’m a huge fan of regulatory sandboxes when it comes to co-design and co-evolution of feedback. Uh, I have an article coming out in an Oxford University Press book on an incentive-based rating system that I can talk about in just a minute. I also think on the flip side that all of you have to take account for your reputational risk.
As we move into a much more digitally advanced society, it is incumbent upon developers to do their due diligence too. You can’t afford as a company to go out and put out an algorithm that you think, or an autonomous system that you think, is the best idea, and then end up on the front page of the newspaper. Because what that does is it degrades the trustworthiness that your consumers have in your product.
And so what I tell, you know, both sides is that I think it’s worth a conversation where we have certain guardrails when it comes to facial recognition technology, because we don’t have the technical accuracy when it applies to all populations. When it comes to disparate impact on financial products and services, there are great models that I’ve found in my work, in the banking industry, where they actually have triggers because they have regulatory bodies that help them understand what proxies actually deliver disparate impact. There are areas, and we just saw this in the housing and appraisal market, where AI is being used to sort of, um, replace subjective decision making, but it’s contributing more to the type of discrimination and predatory appraisals that we see. There are certain cases where we actually need policymakers to impose guardrails, but more so, be proactive. I tell policymakers all the time, you can’t blame data scientists if the data is terrible.
Anthony Green: Right.
Nicol Turner Lee: Put more money into R&D. Help us build better data sets that are overrepresented in certain areas or underrepresented in terms of minority populations. The key thing is, it has to work together. I don’t think that we’ll have a good winning solution if policymakers actually, you know, lead this, or data scientists lead it alone in certain areas. I think you really need people working together and collaborating on what those principles are. We build these models. Computers don’t. We know what we’re doing with these models when we’re creating algorithms or autonomous systems or ad targeting. We know! We in this room, we cannot sit back and say, we don’t understand why we use these technologies. We know, because they actually have a precedent for how they’ve been expanded in our society, but we need some accountability. And that’s really what I’m trying to get at. Who’s holding us accountable for these systems that we’re creating?
It’s so interesting, Anthony, these last couple of, uh, weeks, as many of us have watched the, uh, conflict in Ukraine. My daughter, because I have a 15-year-old, has come to me with a variety of TikToks and other things that she’s seen, to sort of say, “Hey mommy, did you know that this is happening?” And I’ve had to sort of pull myself back, cause I’ve gotten really involved in the conversation, not realizing that in some ways, once I go down that path with her, I’m going deeper and deeper and deeper into that well.
Anthony Green: Yeah.
Nicol Turner Lee: And I think for us as scientists, it sort of comes back to this, I Have a Dream speech. We have to figure out which side of history we wanna be on with this technology, folks. And how far down the rabbit hole do we wanna go to contribute? I think what the accomplishment of AI is, is our ability to have human cognition engaged in these repetitive processes that go way beyond our wildest imagination of the Jetsons.
And that allows us to do things that none of us have been able to do in our lifetime. Where do we want to sit on the right side of history? And how do we want to work with these technologies so that we create better scientists?
Anthony Green: Sure.
Nicol Turner Lee: Not ones that are worse. And I think that’s a valid question to ask of this group. And it’s a valid question to ask of yourself.
Anthony Green: I don’t know if we can end on anything better, and we’re out of time! Nicol, we could go all day.
Nicol Turner Lee: I know. I always feel like a Baptist preacher, you know, so if I have energy about it …
Anthony Green: Choir, can you sing it?
Nicol Turner Lee: I know. I can’t sing it, but you can do that I Have A Dream speech, Anthony.
[Laughter]
Anthony Green: Oh man. You’re putting me on the spot and I’m already on stage.
Nicol Turner Lee: Yeah, right, haha.
Anthony Green: Nicol, thank you so much.
Nicol Turner Lee: Thank you so much. Appreciate it.
Anthony Green: Absolutely.
Nicol Turner Lee: Thank you everyone here.
[MIDROLL AD]
Jennifer Strong: This episode was produced by Anthony Green, Erin Underwood, and Emma Cillekens. It’s edited by Michael Reilly, directed by Laird Nolan and mixed by Garret Lang. It was taped in front of a live audience at the MIT Media Lab in Cambridge, Massachusetts, with special thanks to Amy Lammers and Brian Bryson.
Jennifer Strong: Thanks for listening. I’m Jennifer Strong.
