Kerry Mackereth

Meredith Broussard on Why Sexism, Racism, and Ableism in Tech Are 'More Than a Glitch'

In this episode we talk to Meredith Broussard, data journalism professor at the Arthur L. Carter Journalism Institute at New York University. She's also the author of Artificial Unintelligence, which made waves following its release in 2018 by claiming that AI was nothing more than really fancy math. We talk about why we need to bring a little bit more friction back into technology, and her latest book More Than a Glitch, which argues that AI that's not designed to be accessible is bad for everyone, in the same way that raised curbs between the pavement and the street, which you have to go down to cross the road, make urban outings difficult for lots of people, not just wheelchair users.


Data journalist Meredith Broussard is an associate professor at the Arthur L. Carter Journalism Institute of New York University, research director at the NYU Alliance for Public Interest Technology, and the author of several books, including “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech” and “Artificial Unintelligence: How Computers Misunderstand the World.” Her academic research focuses on artificial intelligence in investigative reporting and ethical AI, with a particular interest in using data analysis for social good. She appeared in the 2020 documentary Coded Bias, an official selection of the Sundance Film Festival that was nominated for an Emmy Award and an NAACP Image Award.


She is an affiliate faculty member at the Moore-Sloan Data Science Environment at the NYU Center for Data Science and a 2019 Reynolds Journalism Institute Fellow, and her work has been supported by New America, the Institute of Museum & Library Services, and the Tow Center at Columbia Journalism School. A former features editor at the Philadelphia Inquirer, she has also worked as a software developer at AT&T Bell Labs and the MIT Media Lab. Her features and essays have appeared in The Atlantic, The New York Times, Slate, and other outlets.


Reading List:


More than a Glitch: Confronting Race, Gender and Ability Bias in Tech


Artificial Unintelligence: How Computers Misunderstand the World


More than a Glitch, Technochauvinism, and Algorithmic Accountability with Meredith Broussard, The Radical AI Podcast. https://www.youtube.com/watch?v=23f6njQjHzE


Coding Needs to Get Beyond the Gender Binary https://time.com/6287271/coding-beyond-gender-binary/



Transcript:


KERRY MCINERNEY:

Hi! I’m Dr Kerry McInerney. Dr Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list by every guest. We love hearing from listeners, so feel free to tweet or email us, and we’d also so appreciate you leaving us a review on the podcast app. But until then, sit back, relax, and enjoy the episode!


ELEANOR DRAGE:

This week we're talking to Meredith Broussard, data journalism professor at the Arthur L. Carter Journalism Institute at New York University. She's also the author of Artificial Unintelligence, which made waves following its release in 2018 by claiming that AI was nothing more than really fancy math.


We talk about why we actually do need to bring a little bit more friction back into technology, and her latest book More Than a Glitch, which argues, amongst other things, that AI that's not designed to be accessible is bad for everyone, in the same way that raised curbs between the pavement and the street, which you have to go down to cross the road, make urban outings difficult for lots of people, not just wheelchair users.


We hope you enjoy the show.


KERRY MCINERNEY:

 Amazing. Thank you so much for being here with us. It really is such a privilege to get a chance to chat. So just to kick us off, could you tell us a little bit about who you are and what you do, and what's brought you to thinking about gender, race, feminism, and technology?


MEREDITH BROUSSARD:

Thank you so much for having me. It's really a pleasure to be here with you today. My name is Meredith Broussard. I am a data journalism professor at New York University. I am also the research director at the NYU Alliance for Public Interest Technology, and I am the author of a new book called More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.


ELEANOR DRAGE:

Fantastic. Thank you so much, and we love both this book and your previous book, Artificial Unintelligence, fantastic works that really inspired so many people and they're just constantly cited both in the pub and at conferences. So like a truly influential piece of writing. Can you now tell us what is good technology? Is it even possible, and how can feminism help us get there?


MEREDITH BROUSSARD:

One of the things that I think is so interesting is that we have this desire to make a binary categorization, to say technology is good or bad, to say that artificial intelligence is good or bad, because then it feels like we'll be able to get some kind of handle on it, right? It feels like it's some kind of coping mechanism for dealing with the unknown. But the problem is that technology defies binary categorization, which is ironic because our computers are binary computing machines, right? They're machines that do math, and artificial intelligence, when it really comes down to it, is just math. It's very complicated, beautiful math. So I think the easiest way to think about good technology, especially when it comes to AI, is to tie it to context. So we take something like facial recognition technology. Facial recognition, we know, has biases. Facial recognition is better at recognizing men than women. It's better at recognizing people with light skin than people with dark skin. If you do an analysis of the intersectional accuracy, it's best of all at recognizing men with light skin. It's worst of all at recognizing women with dark skin. Trans and non-binary folks are generally not acknowledged by facial recognition systems at all, right? So we know that there's bias in facial recognition. So one of the things I like about the new proposed EU legislation around AI is that it gives us a framework of high risk and low risk uses of AI, which I think is really helpful because then, when you're looking at the use, you think about the context.


So context for something like facial recognition might be low risk if it's used to unlock your phone. The facial recognition on my phone doesn't work half the time, and it doesn't really matter, right? There's a code you can use as a backup. It's not a big deal. So that would be a low risk use, but a high risk use of facial recognition might be something like police using facial recognition on real-time video surveillance feeds, right? Because that's going to misidentify people of color more often. It's going to get people of color swept up in police dragnets more often, and this is already a community that is overpoliced and over-surveilled. So that would be a high risk use of facial recognition, and that would have to be regulated and monitored under these new EU guidelines, so that's a useful framework. I would even go further and say it's a bad idea to use facial recognition in policing at all.


ELEANOR DRAGE:

I totally agree. I wanted to ask a quick follow up question about the difference between a low risk use of facial recognition and a high risk use, because I know people who were lobbying for Revolut and other companies that use facial recognition to do what we call one-to-one authentication. So when you log into your online banking, you use your face, and then you can get in.


And if that was under the EU legislation, it would mean that there would need to be human oversight for every instance of that happening. And companies like Revolut are saying that would destroy their bottom line, that it would be too expensive to do, that it wouldn't be possible at all. So they're lobbying for exceptions to be made for these low risk use cases.


So what do you think about that slipperiness between low and high risk, the possible blurring of the boundaries between what might constitute something quite dangerous and something quite banal?


MEREDITH BROUSSARD:

So I'm not really averse to friction. Like a little bit of friction in a system is not the end of the world and actually a little bit of friction in the system is sometimes protective, right? If there is not a fully automated way to log into a bank, like if they have to still have tellers and customer service agents, I don't think that's the end of the world, right?


I don't think that we need to have this fully autonomous capitalist fantasy realized, people need jobs and computers don't work all the time, so it's completely reasonable to me that there would need to be customer service agents to fix the things that inevitably go wrong.


ELEANOR DRAGE:

I am really glad that you brought up friction because it's something that Kerry and I are really interested in and we're writing about it in our upcoming book


MEREDITH BROUSSARD:

Oh, fantastic.


ELEANOR DRAGE:

because disability theory or crip theory focuses a lot on the kinds of frictions that people with disabilities endure every day, and that aren't experienced by a lot of people.


So, friction for whom? But also, feminism and feminist methods have drawn a lot on friction as a way of slowing down, of bringing up heat, even in some instances of creating a kind of pain in the system: things that friction as a concept means to everybody, so that we can better analyze what's going on. We can slow down, we can create energy, we can go back against momentum and think, okay, what's going on in this process? How can we use the idea of friction to explore in a bit more detail what's going on here, rather than just wanting things to move quickly and not really dwelling on what's going on?


MEREDITH BROUSSARD:

And actually when we think about disability, it gives us a lens onto what could go wrong with the banking applications, right? So think about people who have atypical facial features, or think about what happens when you get a wound on your face, or when you've had an operation: some kind of situation in which your facial features are not inside the bounds of what's considered normal by the algorithm. Then you're not gonna be able to get into your bank account, and nobody wants that.


KERRY MCINERNEY:

Yeah, absolutely. And I think, it's so interesting to think about friction as maybe a corrective or as an important factor to be considering in response to what you've mentioned, this compulsive need for us to categorize and binarize which is particularly ironic with something like technology, which itself escapes these categorizations, but also itself has such an impulse to taxonomize. And these are the kinds of questions that I think are so central to your latest book More Than a Glitch, which I can see behind you in your background, which is very exciting, and yeah, we're absolutely thrilled that you have published this book and would love to hear a bit more about it.


Just to start, what brought you to writing about this?


MEREDITH BROUSSARD:

Thank you. And so this book came out of conversations that I had after Artificial Unintelligence came out. I did a lot of public communication around AI and the conversation just kept coming back to issues of race and gender and disability. And I realized that there was just enough to explore in a book. One of the things that I had to learn the most about for the new book was disability. I felt like I had a handle on race and gender and the intersection with technology, but I didn't know that much about disability. So I was really grateful to the scholars and activists and regular folks who shared their experiences with me and allowed me to learn more about it. One of the things that was a really powerful concept for me was what's called the curb cut effect. So 'curb cut' is what it's called in the US; I think in the UK it's called a dropped curb. It's the little ramp down at the edge of the sidewalk where it goes into the street. So in the US, the curb cuts were mandated as part of the Americans with Disabilities Act in the 1970s. And what happened once curb cuts went in is we realized, oh, wait, curb cuts are good for everybody. They're not just useful for people in wheelchairs trying to cross the street, right?


It's good for people using other kinds of mobility aids. It's useful for people pushing strollers, or push chairs. It's good for people who are using a dolly to deliver things. It's good for people who are wheeling bicycles around. When you design for accessibility, everybody benefits. So this was a really powerful concept for me, and I think about it when I think about designing computational systems, because one of the things we do when we design technology is we design for ourselves, right? We all have unconscious bias. We're all trying every day to become better people, but we have unconscious bias. We embed our unconscious bias in the technologies that we make. And so when you have a small and homogeneous group of people creating technologies, the technologies get the collective unconscious bias of those creators, right? And so we design for ourselves and the people who are like us. If the designers are a group of heterosexual, cisgender, mostly male, mostly light-skinned, mostly able-bodied people, that homogeneous group is not going to be thinking about bodies other than their own, and they're not going to be designing for maximum accessibility.


KERRY MCINERNEY:

That's really fascinating. And it definitely resonates, I think, with some of the experiences Eleanor and I have had here, particularly around this idea that accessibility has a huge collective benefit, as well as of course being really important to people who immediately encounter the ill effects of technologies and of cities that have been designed to be inaccessible. I know that here at the Disability Resource Center, when we work with students or staff who have specific learning difficulties, something which I do hear said is this idea of people with specific learning difficulties being the canary in the coal mine, the people who experience the first ill effects of, maybe, inhospitable teaching practices or ways of exploring and learning that aren't very inclusive. But when you bring in those simple kinds of changes, even just something as minor as making sure there's a good break in the middle of a seminar, everyone benefits, because actually we all like to give our brains a break, and it's really helpful for us to be able to explore and process sensations and information in different ways.


MEREDITH BROUSSARD:

Absolutely. Absolutely.


ELEANOR DRAGE:

You've got this concept called techno-chauvinism, and perhaps we can bring the disability perspective into this, because it's interesting to rethink what we mean by chauvinism too. I assume that you also are not necessarily thinking about individuals that hate women, but this culture that is set up to prefer men, that is set up to prefer the elites.


It's a question of power rather than of sexism per se. So can you tell us what techno-chauvinism is? And it's a great term, it feels like it's always existed. It's hard to believe that it's new. What does it do? What does it help us with when we're thinking about power and sexism and disability and racism in technology?


MEREDITH BROUSSARD:

It's all about power when we're building technology, right? I started my career as a computer scientist and I quit to become a journalist. And when I was in computer science, there was just, there was a lot of harassment. It was a very sexist, racist environment. Things are a little bit better in academic computer science nowadays. Not a lot better, but a little bit better. And so I have always been keenly aware, as a woman of color, of these forces. And one of the things that happened when I was researching Artificial Unintelligence is I started thinking about how long I've been hearing the same promises about technology, the same promises about a bright technological future, and I realized, oh, wait, I've been hearing these same promises for literally decades. Which is something I guess you don't realize until you've been alive for several decades. I'm old now, so I've been hearing this rhetoric for a really long time.


I also happened to have started college at exactly the time that the web launched, so I'm on the bleeding edge of the generation that has had the web for their entire adult lives. Which, again, I'm old now. The internet is no longer young and hip. It's middle-aged, and there's nothing wrong with that. But then you think about the way that youth is valorized and the way that digital is portrayed as being really hip and everything, and you start to think about how these things are constructed, right? So I had all this going on in the back of my mind, and techno-chauvinism is what I decided to call it. Techno-chauvinism is a kind of bias that says that technological solutions are superior to others. Techno-chauvinists say things like: technology is neutral, it's objective, it's fairer, it's more unbiased if you make a decision with a computer. And what I would argue instead is that we should use the right tool for the task, right?


So sometimes the right tool for the task is absolutely a computer. Nobody's arguing that we should go back to an era without video conferencing, for example, right? But sometimes the right tool for the task is something simple like a book in the hands of a child sitting on a parent's lap. One is not inherently better than the other.


It's again, about the right tool for the task. Life is long. Experiences are broad. The world's not gonna end if we don't use technology for everything.


KERRY MCINERNEY:

Yeah, and I think this is so important as well because I think, there's so many values deeply embedded in the technologies that we use on a daily basis, and Eleanor and I mainly work in AI ethics and we see this constant push to optimize for productivity, for example, like AI workforce tools.


And sometimes I feel like it's just so important to step back and ask not only do we need this tool, but is it actually bringing a net benefit? Why is productivity, as opposed to something like de-growth, for example, considered to be our ultimate goal in this scenario? And yeah, I really love the way that you frame this as saying, how do we move away from a kind of easy, almost utopian pursuit of technology as the solution to all these different kinds of problems that we've generated, and start thinking more broadly about what we want these technologies to be doing for us.


And something I think that you also explore really beautifully throughout the book is how sex and race and ability bias in tech is not merely an accident or a glitch in the system, which I think too often is how this is portrayed, but is actually a foundational logic that underpins how many of these systems work. So could you give us some examples of where racism and sexism and ableism function as central features of tech systems?


MEREDITH BROUSSARD:

That's a great question. One of the things that I always think about is when Google Photos was found to be labeling images of Black men as gorillas, and what they said at the time was, oh, sorry, mistake. That's a glitch. We'll just fix it real quick. But they didn't really fix it. What they did was they took the label gorilla out of the set of possible labels, right? So they didn't fix the underlying racism of the system, they just put a patch on, right? So that's a pretty common strategy: not addressing the underlying issue, because the underlying issue is society, it's thousands of years of racism. Instead of going back and saying, okay, maybe we need to adapt our systems, to work on updating foundational beliefs, and, guess what, society is going to continue to evolve and foundational beliefs are still going to change, so the technical system is never fixed once and for all, they didn't do that, they just put on this patch. So I think that we need to start thinking about manifestations of racism, sexism, and ableism as indicative of larger problems. Yeah, sometimes there are glitches, but when a system is racist, sexist, or ableist, it's not just a glitch.


It's more than that. Another example comes from the way that we build AI systems, right? So when we build machine learning systems, which are the most popular kind of AI systems nowadays, it sounds like the machine is learning. It sounds like the machine is sentient, right? There's a brain in there. Not at all true. What we do is the same thing every time: we take a whole bunch of data, we feed it into the computer, and we say, computer, make a model. And the computer makes a model. The model shows the mathematical patterns in the data, and then you can use that model to make predictions, to make decisions, to generate new text or generate new images, right? It's a very straightforward process, and when we think about the mathematical patterns in the data, then we can start thinking about, okay, what are the patterns of bias that we're probably going to see in the data? So there was an investigation by The Markup, which is an algorithmic accountability journalism organization, and they found that mortgage approval algorithms in the US were 40 to 80% more likely to deny borrowers of color as opposed to their white counterparts. Okay? And in some metro areas, the disparity was more than 250%. So the mortgage approval algorithms, they're discriminating. Why is this? Let's think about who's gotten mortgages in the past in the United States.


There's been discrimination against people of color in home loans, and there's been redlining, and there's been residential segregation in the United States. So all of those patterns are reflected in the data that was used to train the mortgage approval algorithms. And so of course the mortgage approval algorithms were being racist, right?


So if the designers had gone into it thinking, I know I'm going to see problems in this system, I know I'm gonna see bias because there's a social history of financial discrimination and that social history is going to come out in the outputs, then they could have put a finger on the scale, made it more equitable, and caught the problems. But nobody's doing that, right? You have to have a totally different mindset. You have to reject techno-chauvinism in order to see the problems that are going to manifest in data-driven systems.
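
To make the pipeline Broussard describes concrete, here is a minimal, hypothetical sketch of that data-in, model-out loop. The numbers are invented, scikit-learn is assumed, and the zip-code field simply stands in for the kind of proxy variable shaped by residential segregation; it is meant to show how patterns in historical decisions reappear in a model's predictions, not to reproduce The Markup's analysis.

```python
# A minimal, hypothetical sketch of the "data in, model out" loop described
# above. All numbers are invented for illustration; scikit-learn is assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical lending records: [income_in_thousands, zip_group].
# zip_group stands in for a proxy variable shaped by residential segregation.
X_train = np.array([
    [80, 0], [75, 0], [60, 0], [55, 0],   # applicants from zip group 0
    [80, 1], [75, 1], [60, 1], [55, 1],   # applicants from zip group 1
])
# Past approvals (1) and denials (0): group 1 was denied more often at the
# same incomes, so the historical discrimination is baked into the labels.
y_train = np.array([1, 1, 1, 0,
                    0, 0, 0, 0])

# "We feed it into the computer and we say, computer, make a model."
model = LogisticRegression().fit(X_train, y_train)

# Two new applicants with identical incomes, differing only in zip group:
new_applicants = np.array([[70, 0], [70, 1]])
print(model.predict(new_applicants))        # the historical pattern tends to reappear
print(model.predict_proba(new_applicants))  # group 1 typically scores lower
```

The point is the one Broussard makes: the model is just finding mathematical patterns, and if the patterns in the historical data are discriminatory, the predictions will be too, unless someone deliberately looks for the bias and corrects for it.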


KERRY MCINERNEY:

Yeah, and that's really interesting, and the examples that you raised also remind me of a really fascinating piece of research from a while ago on Facebook, and specifically how Facebook offered people a wide range of different kinds of gender and sexual identities that they could assume.


But then once you selected it, it turned out that on the backend those weren't actually getting translated. They were still being put into extremely binary kinds of operational systems. Yeah. And I do think that there's something really striking about the parallels between people's unwillingness to treat sexism and racism and disability bias in these systems as structural rather than individual,


and the way that it's so hard to get organizations and institutions to grapple with these forms of bias: it's not just the problem of one individual who happened to say something deeply offensive, but rather getting those organizations to zoom out and say, look, this isn't about trying to focus our attention on one person, it's about looking at who's ultimately included and who's excluded in these places. So I guess with that in mind, you've gestured to this a little bit in your last answer, but I'd love to hear a bit more on why you think it's so hard to get people to do this zooming out and to move away from this idea that, oh, sex or race or ability bias is just a glitch, it's not systemic.


MEREDITH BROUSSARD:

I think that's a really complicated question. If you're asking why is it so hard, I would say: have you met people? This is a really complicated situation. I have multiple strands of my research: I write code and I also write prose. But each of those things is so hard to do individually that I've found that I can't write both on the same day. So really, if I'm writing a large scale program or writing original code, it takes so much cognitive effort that I have to devote all of my brain power to it. And then when I'm writing an essay or writing a book chapter, it also is really hard, takes a lot of cognitive energy. One of the things that surprised me when I started writing more about technology was discovering that simply writing in plain language about complex technical topics is incredibly difficult, right? So you just gotta give it the mental energy that it needs. So I've just discovered that it's better if I don't write code and write prose on the same day. I say that to acknowledge how difficult it is to write original code and how difficult it is to do creative work. And so I think one thing that happens is sometimes people are so focused on solving a technical problem that they use up all their cognitive energy on it, and then they're not thinking about the social impact of it. And then the people who are focused on the social aspects don't necessarily have the context of what it's gonna take to make those technical changes, right? So it ends up being a communication mess. The other thing that I got really interested in, thinking about your last comment on the Facebook situation, is what it would take for Facebook to make that dramatic change. It's possible this has changed in the past couple of years, but the last time I looked, when you signed up for Facebook it would ask you for your name and you had to commit to a gender, right? A binary gender. And then once you have an account, you can change your gender, right? But that's why the advertising system sells you as male, female, or null, right? That's the way that it was built in 2012. And changing that fundamental architecture is going to be really expensive and complicated, and the effort required to get a multinational corporation to change their code base for something really fundamental is pretty dramatic, right? Especially if gender is stored as a binary. Which it often is. So when we first started computing, memory was really expensive. So what you had to do was refactor your programs to make them really small, to make them compact. And one of the things you would do is think about how much space is going to be devoted to each variable, right?


So a word takes up like this much space and then a number takes up this much space. And then a binary, zero one, takes up this much space. So anything that could be a binary, you made into a binary because it's smaller, right? So gender, back in the day when people didn't generally understand that gender is a spectrum, they were like, oh, gender is a binary.


It could be a zero or a one. It'll take up this much space. That'll be cheaper. So that's how people were taught to program for a really long time. So legacy systems are expecting gender to be a binary. They're not expecting it to be a number or a word. And you're gonna screw up a lot of legacy code interactions, because if you try and feed a word to another program that is expecting a binary, it's just gonna break. It's not gonna work.
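
As a purely illustrative sketch of the legacy-systems problem described above (this is not any real university or banking codebase), here is what a fixed-width record with a one-byte, binary-only gender field might look like in Python, and what happens when you try to hand it anything other than a 0 or a 1. The record layout, field names, and function are invented for the example.

```python
# Hypothetical legacy record layout: an 8-byte user id followed by a single
# byte for gender that downstream code only ever expects to be 0 or 1.
import struct

LEGACY_RECORD = struct.Struct(">QB")  # big-endian: unsigned 64-bit id, 1-byte gender

def write_legacy_record(user_id: int, gender) -> bytes:
    # The old schema allocated exactly one byte and assumed a binary value,
    # so anything richer than 0/1 simply has nowhere to go.
    if gender not in (0, 1):
        raise ValueError("legacy schema only accepts 0 or 1 for gender")
    return LEGACY_RECORD.pack(user_id, gender)

print(write_legacy_record(42, 1))  # fits the decades-old assumption

try:
    write_legacy_record(43, "nonbinary")  # a richer value breaks the interaction
except ValueError as err:
    print("legacy system rejects the record:", err)
```

Updating that one field means touching every program that packs, stores, or parses records in this shape, which is part of why the kind of change Broussard describes next, at NYU, ends up as a multi-year, multimillion dollar effort.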


ELEANOR DRAGE:

I have a friend at the UN who was collecting information on queer subjects in, I probably shouldn't say exactly where, otherwise she might get told off. Anyway, she was doing really great work to collect interesting data that wasn't using binary gender in order to understand more about people's experiences who were gender queer in this particular place, and she was really proud of herself, we talked about it for ages and then she went to feed the information back into the UN's system. And of course she was like, oh God, I completely didn't think about this, but there is no space for the information that I have because we're feeding it back into a system designed to only accept and process information about binary gender.


MEREDITH BROUSSARD:

And that happens all the time. All the time. So at NYU, where I teach, we just went through a really long process to update our student information systems so that students can change their gender without having to call and negotiate with somebody in customer service. So students can update their preferred name, can put in their pronouns. This was a multimillion dollar, multi-year effort, because institutions have these really complicated computer systems. They were mostly set up in the 1960s, and the computers that they were making in the 1960s were based on 1950s ideas about gender, about society, right? So anytime you have something that's outside of a very narrow 1950s concept, it pretty much is gonna mess with your legacy systems. And it's expensive, but that's not a reason not to do it. Let's also think about capitalism: the urge to maximize profit is not the same as the urge to go back and fix your systems and update them to make them more inclusive.


ELEANOR DRAGE:

That's what Catherine D'Ignazio said when we interviewed her as well. She was so skeptical of this kind of change being made, because she said ultimately companies are beholden to their shareholders, to paying out dividends, and to the bottom line, which is such a shame.


MEREDITH BROUSSARD:

I love Catherine D'Ignazio's work. And she and Lauren Klein in Data Feminism. Absolutely. Yeah. Fantastic. Fantastic work.


ELEANOR DRAGE:

We love it when people come on and then just love everybody else. I love the good robot community of interviewees. I want to ask you just to finish off there's quite a harrowing story at the end of your book where you describe being diagnosed with breast cancer in 2019 and finding out that your mammogram had been read by a human doctor and an AI. And then you went on a deep dive to learn more about how AI is being used in breast cancer detection. So what did you find out about how AI is being applied for this purpose? Here at Cambridge, we have lots of researchers looking into how to use AI for breast cancer research and all sorts of things.


But did you feel okay about it, about AI being used in this context? It's quite an intimate context. Did you know how reliable it was? What kind of reassurance did it give you or not?


MEREDITH BROUSSARD:

So it gave me about as many questions as it did answers. As you said, I took my own mammograms and I ran them through an open source breast cancer AI detection mechanism in order to write about the state of the art in AI-based cancer detection. And I think that it is absolutely conceivable that someday we might have AI that could help detect cancer reliably. Is that happening anytime soon? Probably not. So one of the things I learned is that I had a lot of misconceptions about the state of the art in AI-based cancer detection, and that is actually pretty typical of the general public and even of doctors, right? So there's a lot of confusion and complicated expectations, which is typical of what we see with AI in general, right? We all get confused between the reality of AI and the AI that we see in the movies and the kind of wild promises that marketeers make about the future of artificial intelligence. And if you read the media, you might think, oh yeah, breast cancer detection with AI is right around the corner, it's definitely happening sometime soon. But actually people have been saying that for years, and it's not really as close as you would think. And the people who are saying, oh yeah, it's really close, are mostly the people who stand to profit from it. Now, saying it's really close is different than saying we want it, right? Because the people who are saying, oh yeah, we want it, are the people who are gonna make money off it, but also the people who care, right? They're the people who are amazing cancer doctors who want better diagnostic methods, because it's going to allow them to save more lives. So it's a very complicated situation.


I was alarmed when I saw that an AI had read my scans, because I thought, what did it find? What's going on? Who made this AI? Because I'm an AI researcher, and some AI is better than others. The AI that I used did detect my breast cancer. So good job, AI. It did not do it in the way I expected. Though I should say, I'm totally fine now. I had breast cancer, I got it treated, I'm totally okay. But thank you for asking. So it detected it, but I thought that it was going to be bells and whistles. You know how when you send an iMessage that says congratulations and you get an animation and it's all very exciting?


But no, all it did was draw a red box around an area of concern and then give me a number between zero and one. And I asked, does that mean there's a 20% chance that there's something malign or malignant in this area? And he said, no, it's definitely not a percentage chance that there's a malignancy. It is just a score between zero and one that is potentially used for diagnostic purposes. And I was like, oh, that's really weird. But it turns out that there's this very complicated medical and legal environment where you can't have the AI give a diagnosis, right?


So all the AI does is draw a box or draw a circle on a flat scan. I thought it was taking in videos. I thought it would read my entire electronic medical record and look at my whole history and do some sophisticated processing. And then I realized, where did I get that idea? I got it from Hollywood, right? It was my imagination about what the computer was doing, and it was totally out of sync with reality. So this is what happens. Our imaginations are vast and powerful and interesting, so we imagine that AI can do more than it actually can. And one of the interesting studies that I found was about how doctors use artificial intelligence. So we have had AI-based cancer detection for many years, since the nineties. It's been a kind of standard available feature. Not everybody uses it.


So in one study, breast cancer doctors and lung cancer doctors were studied about their reactions to AI diagnosis. And the way it works at this one particular hospital is that after the doctor enters their diagnosis, then they get the AI's read. So the AI is not coming in early. It's not influencing the doctor. And so the breast cancer doctors in this study were given the AI results after they read their patients' films. And the breast cancer doctors said, oh, this is such a waste of time. This is really annoying. It's getting in my way. I don't need the AI to tell me what I already know. So they were like, yeah, this is useless. And then the lung cancer doctors in the same hospital system said, oh yeah, we really like this AI. We look at it after we put in our comments and it validates what we thought was actually happening. We really like that validation. So that was really interesting to me, that there's this difference in human experiences of it.


Because I was going into it thinking, okay, this is gonna be all good or all bad. But as I said before, like that's not a useful way of thinking about technology. It's all about context. And for doctors also, it's about, how experienced is the doctor? What kind of day are they having, are they the kind of person who feels like they want this backup? Are they the kind of person who is so confident in their abilities that they don't want the backup? Are they having problems with their contact lenses that day? I don't know. Like it's about people and there's so much more variation than we usually think about.


KERRY MCINERNEY:

Absolutely. Oh, and I think that's a really wonderful and interesting note to end on because, I do think, again, this push to taxonomize and categorize within technology and within our own societies, within ourselves doesn't really account for just, yeah, the diversity of our experiences and the importance of being able to respond to context as you've said. But yeah, for our wonderful listeners we are going to attach a reading list to the transcript of this episode, which you can find at our website, thegoodrobot.co.uk. So there'll be links to Meredith's work, but also some of the people and some of the things we've mentioned throughout the episode. But most importantly, I just need to say a huge thank you for joining us.


MEREDITH BROUSSARD:

Thank you so much for having me. This has been a great conversation.


ELEANOR DRAGE:

This episode was made possible thanks to our previous funder, Christina Gaw, and our current funder Mercator Stiftung, a private and independent foundation promoting science, education and international understanding. It was written and produced by Dr Eleanor Drage and Dr Kerry McInerney, and edited by Dr Eleanor Drage.


