
interview: joy buolamwini by Allan K. '17

hidden figure, poet of code, low-key actual superhero

i mentioned joy in a blog post a few weeks ago, when she won the $50,000 grand prize in the search for hidden figures contest. i’ve been a huge fan of joy’s work ever since i read her blog post on inclusive coding, or “incoding”. she’s a fulbright fellow and a rhodes scholar, and now she’s at the center for civic media, working on projects that empower communities via tech education and highlight bias in algorithms.

i had the distinct privilege of chatting with her last week, and we had a fun conversation about algorithmic bias, surveillance, and her work at the media lab. check it out:

* * * * *

learn more about:

the ALGORITHMIC JUSTICE LEAGUE

the CENTER FOR CIVIC MEDIA

the MEDIA LAB

* * * * *

some snippets:

(on algorithmic bias)
(and how facial recognition software won’t detect her face unless she’s wearing a white mask):

“…This was an issue I’d run into when I was an undergraduate. I went to Georgia Tech and did my bachelor’s in computer science, and I used to work in a social robots lab. And some of the projects I did there involved computer vision, and in that context…I had a hard time being picked up [by facial recognition software]. I would end up borrowing my roommate’s face to get the job done … So I started exploring that a little bit. Why was this happening? Was it just about my facial features, was it about the illumination, was it about the pose, what was going on?

“And then I read this report called the Perpetual Line-Up, which talks about unregulated use of facial recognition in the United States, and it showed that one in two adults — 117 million people in the US — have their faces in facial recognition databases that can currently be searched unwarranted, using algorithms that haven’t been audited for accuracy. Here’s the catch: with some initial tests to see the demographic accuracy of some of these facial recognition systems, they saw that the systems performed worse for women overall, worse for people who were considered younger (under 24), and worse overall for people of color … so it’s not just me, having one bad incident! This is something a bit more systematic.”
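
(a quick aside from me: if you’re curious what “getting picked up” by face detection looks like in code, here’s a minimal sketch using OpenCV’s stock Haar cascade detector. this is just an illustration i’m adding, not the actual system from joy’s lab, and the image filename is hypothetical. an empty result from detectMultiScale is exactly the silent failure she describes.)

```python
# a minimal face-detection check using OpenCV's pretrained Haar cascade
# (illustration only; not the system from joy's lab)
import cv2

# load the frontal-face cascade that ships with the opencv-python package
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("face.jpg")  # hypothetical input image
if image is None:
    raise FileNotFoundError("face.jpg not found")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector expects grayscale

# detectMultiScale returns a (possibly empty) list of bounding boxes;
# an empty result is the "not picked up" failure mode
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(faces) == 0:
    print("no face detected")
else:
    for (x, y, w, h) in faces:
        print(f"face found at x={x}, y={y}, size {w}x{h}")
```

(detectors like this one are built from collections of example faces, which is part of why the makeup of that training data matters so much.)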


(on why it matters)

“So let’s look into the realm of law enforcement, because that’s where you have the potential issue of civil liberties being breached, and also where you start going into the realm of disparate impact. Is this technology that we’re using being targeted at a specific demographic more than another? What I was very concerned about was misidentification … if I’m tagged as somebody else [on Facebook], okay, maybe it’s funny, maybe it’s offensive, but perhaps it’s not as high stakes as if I’m misidentified as a criminal suspect.”


(about being at the center for civic media)

“I do not know if the Algorithmic Justice League would have existed had I not been a part of the Center for Civic Media … just having the freedom to explore ideas that might not always resonate with the core technical leanings of a space like MIT. We’re definitely looking at the social impact of technology, so it’s not, ‘Can we make it bigger, faster, stronger, more efficient, smaller?’ … But starting at, ‘Should we be doing this in the first place?’ Starting at issues of power: who has control? Who gets to decide, and why? And what does that tell us about society?

“And so in exploring this, I could have viewed my face not being consistently detected as, ‘Oh, this is a technical challenge’ — but being in the space of the Center for Civic Media definitely orients me to [say], ‘This is not just a technical challenge … this is as much a reflection of society as other spaces where you see inequities that need to be addressed.’”