In this episode, we chat to Sneha Revanur, who is founder and president of Encode Justice, a global, youth-led coalition working to safeguard civil rights and democracy. Sneha has a fellowship at Civics Unplugged, is a Justice Initiative Committee Member at Harvard Law School, is a Civil Rights Policy Fellow at The Greater Good Initiative, a school program leader at Opportunity X, and a National Issue Advocacy Committee Criminal Justice Lead for the High School Democrats of America group. All this, and she’s still completing high school. We discuss intergenerational communication, what feminism means to young activists, why Gen Z are particularly empathetic and concerned with issues of equality, and why young activists are in an especially good position to deal with ethical problems around technology.
Sneha Revanur is the founder and president of Encode Justice, which was born out of the Say No to SB 10 Campaign. Encode Justice is a global, youth-led coalition working to safeguard civil rights and democracy in the age of artificial intelligence through policy development, legislative advocacy, community organizing, technical workshops, and content creation. She has a fellowship at Civics Unplugged and is a Justice Initiative Committee Member at Harvard Law School; a Civil Rights Policy Fellow at The Greater Good Initiative; a school program leader at Opportunity X; and a National Issue Advocacy Committee Criminal Justice Lead at the High School Democrats of America.
Reading List:
Revanur, S. (2021) Artificial Intelligence in Policing Is the Focus of Encode Justice. Teen Vogue.
Schwartz, O. (2019) Untold History of AI: Algorithmic Bias Was Born in the 1980s. IEEE Spectrum.
Hardesty, L. (2018) Study finds gender and skin-type bias in commercial artificial-intelligence systems. MIT News.
Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016) Machine Bias. ProPublica.
West, S., Whittaker, M., and Crawford, K. (2019) Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute.
Kak, A. (ed.) (2020) Regulating Biometrics: Global Approaches and Urgent Questions. AI Now Institute (specifically the 'Bottom-Up Biometric Regulation: A Community's Response to Using Face Surveillance in Schools' section).
Amoore, L. (2020) Why 'ditch the algorithm' is the future of political protest. The Guardian.
Transcript
KERRY MACKERETH: Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.
ELEANOR DRAGE: Today, we're talking to Sneha Revanur, who's founder and president of Encode Justice, a global, youth-led coalition working to safeguard civil rights and democracy. Sneha has a fellowship at Civics Unplugged, she's a Justice Initiative Committee Member at Harvard Law School, a Civil Rights Policy Fellow at The Greater Good Initiative, a school program leader at Opportunity X, and a National Issue Advocacy Committee Criminal Justice Lead at the High School Democrats of America group. All this, and she's still completing high school. We discuss intergenerational communication, what feminism means to young activists, why Gen Z are particularly empathetic and concerned with issues of equality, and why young activists are in a particularly good position to deal with ethical problems around technology. We hope you enjoy the show.
KERRY MACKERETH: Hi, thank you so much for joining us today. It's really very, very exciting. We can't wait to talk to you. So just to kick us off, could you talk a little bit about who you are, what you do? And what brings you to the topic of feminism, gender and technology?
SNEHA REVANUR: Of course, so my name is Sneha Revanur, and I'm the founder and president at Encode Justice. Encode Justice is an international youth-led organisation working to safeguard human rights, you know, democracy and equity in the age of artificial intelligence. And we have sort of built this worldwide movement across 30+ US states and 20+ countries. And so in terms of what brings me to feminism, gender and technology, I think that, you know, one of the core focuses that we have at Encode Justice, you know, alongside racial justice, social justice, economic justice, is gender justice, and LGBTQ+ justice. And I think that that heavily informs our work. And we definitely do that through a more intersectional feminist lens, and that is a framework that we've adopted in our work. And so I believe that just reading stories of algorithmic injustice, reading stories of how, for example, facial recognition technology is more likely to misidentify Black women, or seeing how algorithms used in hiring or other spaces have shown gender bias against female applicants, I think that that has really shaped my understanding and worldview around feminism and gender as it relates to technology. And so I think that for sure, that is a core part of who I am, and who I was when I formed Encode Justice, and also what shapes our work today. And so that's definitely a very pivotal part of our work.
ELEANOR DRAGE: I’m really excited about your take on our million dollar question, what is good technology, and particularly in the context of what you and your organisation Encode Justice are campaigning for. How do you imagine good technology? What should we be thinking about when we say these words?
SNEHA REVANUR: Yeah, so I think that a very interesting concept that I always think about in the context of this debate over what good technology is, is something called technochauvinism. So basically, it's sort of this concept or this idea that, you know, we've reached the stage at which we have this false mentality that technology is a solution for every single problem, or that, you know, technology is somehow a panacea or silver bullet for all of our most pressing challenges. And I think that that is obviously a very misguided mindset. And when we approach good technology, we have to break free from that mentality and recognise the misconceptions inherent to that, because we shouldn't approach the world from this mindset that, you know, artificial intelligence or algorithms are going to solve every single problem when oftentimes they just automate existing discrimination. They automate existing practices and re-institutionalise them. And they oftentimes amplify existing social hierarchies. And so in my view, good technology is human centred. Good technology is specifically centred on sort of achieving liberatory purposes. And good technology does not seek to fundamentally just, you know, re-automate or re-institutionalise existing practices and problems, but actually seeks to break apart from those conventions and flip the script. You know, for example, in the same way we've seen risk assessment algorithms and predictive policing in the realm of criminal justice, we've also seen algorithms be used for automated race redaction, which has allowed for more fair sentencing decisions. I've seen developers and researchers work on pursuing algorithms that, instead of rating the risk of defendants, actually rate the risk of judges to violate the constitution, or to violate defendants' due process rights, at least in the context of, you know, American jurisprudence. And so I think that, in that sense, those are very uplifting, liberatory uses of technology that not only actively counter and challenge existing hierarchies and systems, but also centre the people and the affected communities in their development and their usage. And so I think that that definitely is where I come from when I think of good technology. And I do fundamentally have faith that we can move towards a reality in which that is much more commonplace; in which that is, in fact, you know, the standard for all technology that we use.
ELEANOR DRAGE: What kinds of future harm or discrimination are you worried about when it comes to AI that haven't already been spotted or properly targeted by other organisations?
SNEHA REVANUR: Of course, I think it's super important to recognise that when it comes to future harm and discrimination, we have already laid out so much of the infrastructure and the groundwork for that to happen. You know, for example, one of our main campaigns right now at Encode Justice centres on banning government use of facial recognition technology, and that obviously relates to feminism and gender because Black women, for example, are, you know, among the most vulnerable groups being misidentified. And so, you know, in that context, I think about how there have already been three wrongful arrests due to facial recognition technology in the US and, you know, at that point it's only been arrests, but obviously we can see that escalate to convictions, we can see that escalate to fundamentally violating defendants' due process rights. And I think that when we get to that stage of AI nullifying due process and sort of reshaping our judicial systems as we know them, I think that that could be a much more dangerous reality. And so, to me, in my view, I don't think that there are any, you know, radically new harms or discriminations that we haven't already seen aspects of; I think that, in fact, it is just going to be moving further and further down the pipeline. And I think that that, for me, is deeply troubling.
ELEANOR DRAGE: So why are young activists in a particularly good position to deal with these problems? And what new strategies, knowledges and modes of resistance can you bring to the table?
SNEHA REVANUR: Yeah, so I think that, you know, with our generation we possess probably the highest rates of digital literacy, we have been sort of exposed to technology all of our lives, it's always been at our fingertips. And I think that that gives us a unique understanding of human computer interaction, and also sort of a unique understanding of our relationship with technology, and also how it shapes our worldview, and our interactions with other people. And so I think that we bring that to the table, we bring that unique understanding of technology and that unique level of digital literacy. We also bring to the table this raw grassroots activist energy, you know, we've seen, for example, young people rise up against gun violence or climate change, and that's sort of unprecedented, and I think that we can definitely convert that power and that grassroots energy and that activism, and that sort of fundamental mobilisation, into something that could be a much larger movement in the case of algorithmic justice and algorithmic injustice. And so I think that we sort of have that level of energy that has been, you know, sort of unseen in previous generations. And I think that, definitely, we bring that to the table as well. And we also have a unique ability to sort of see discrimination and see injustice through an intersectional lens. And I think that activism at my age and youth activism has been uniquely intersectional, especially as we've seen youth, for example, come out in support of the Black Lives Matter movement. And so I think that in that sense, we bring those three things to the table: first off, that unique level of digital literacy. Second off, I think that we have this raw grassroots energy that has been sort of unprecedented, and I think that that has converted into major mass action and mobilisation in the past for different causes. And also third, we are uniquely intersectional in our view of these technologies, and sort of have unique exposure to how they impact people, especially because we are predisposed to experiencing their harms and discrimination in, you know, our everyday lives.
KERRY MACKERETH: I'm so fascinated by the moment when you decided to found Encode Justice, the moment when you said, ‘Well, clearly, this is such a necessary organisation, this is hugely important activism, and we’re the right people to do that’. So would you mind sharing just, you know, what was your thought process at that time? Like, why did you found this organisation? And, yeah, why did you and your colleagues feel like this is such a necessary thing at this exact moment in time?
SNEHA REVANUR: Yeah, of course. So about two or three years ago, I came across an investigation into an algorithm called COMPAS, and COMPAS is a risk assessment tool used, oftentimes in place of cash bail in the pre-trial system, to evaluate whether a defendant is at risk of committing further crimes or recidivating in the time period between their arrest and their sentencing. And so I think that that was sort of my first encounter with this realm of algorithmic injustice and sort of my first awakening to the existence of AI bias. And so what I found out through that investigation was that the algorithm was actually twice as likely to rate Black defendants as high risk even when they weren't actually going to go on to commit future crimes. And so I think that that disparity, the fact that the disparity was, you know, by a factor of two, that really woke me up to this reality of, you know, we oftentimes perceive technology as perfectly scientific, perfectly objective, perfectly neutral, but in reality, it's actually amplifying and encoding and sort of perpetuating existing systems of oppression. And that is seen in criminal justice, that is seen in healthcare, that is seen in education, hiring, housing, and so much more. And I think that, last summer, I found out that there was a ballot measure in my home state of California in the US that would have expanded the use of the same sort of algorithms in our home state. And so I think that at that point, I was outraged to see that there was almost no youth involvement in fighting the measure and there was no organised pushback to the measure. And I think that from there, I decided to jump onto the scene and we formed Encode Justice, and our first initiative was focused on fighting that ballot measure, California Proposition 25. And, you know, after dedicated organising and dedicated changemaking we were able to eventually defeat the measure by a 13% margin. And that was obviously a pretty energising victory. And from there, I recognised that, you know, this problem is one that transcends the US, it transcends California, you know, it's a global humanitarian challenge for the 21st century that we're going to have to reckon with. And so from there, I began to read more about other cases of algorithmic injustice, for example, you know, facial recognition, and how that applies to different realms, and different manifestations of that. And so I think that from there, we began to expand to more states, more countries, we launched more campaigns. And I eventually saw that, you know, to meet the moment here, we're seeing that we are the most digitally connected generation yet, and yet, there's so little awareness of these issues of AI ethics. And I think that when we talk about AI ethics, it's oftentimes a conversation reserved for, you know, PhDs and for people with, you know, advanced academic backgrounds, even though the people who are actually being the most impacted by algorithmic decisions are oftentimes left out of those decision making spaces or are left out of spaces where they can actually have power and control and input over how those algorithms govern their lives. And so I think that that was what I first sought to dismantle.
And I also wanted to sort of bring youth in, because I had seen, for example, how we had risen up against climate change or gun violence, and wanted to sort of recreate that mobilisation for algorithmic justice given that it's this unique thing that we're going to have to face in the 21st century.
KERRY MACKERETH: Fantastic. And, you know, could you actually expand a little bit on that, that last point you were making, which is around the various kinds of barriers that you face when trying to get involved in relation to algorithmic justice and AI? And like, what Encode Justice does that you think is really effective for trying to break down those barriers?
SNEHA REVANUR: Yeah, so I think that, by and large, discussions about AI ethics are very inaccessible. And I think that is obviously very troubling and concerning to me, because the people who are most impacted by automated decisions are being left out of the conversation. And so what Encode Justice does to combat that is we have launched our own AI ethics workshop programme that prioritises accessibility, that prioritises sharing these stories and these case studies of injustice, and our lesson plans span, you know, this range of intersections with AI. So we have AI ethics and healthcare lesson plans, AI ethics and policing lesson plans, AI ethics and hiring lesson plans, and so on. And so I think that we have a pretty large curriculum in that sense. And we've been able to teach over 3000 students around the world using that curriculum, directly in high school classrooms, and also at libraries, hackathons, conferences, and so many more venues. And so I think that that has taken us a long way in terms of reaching underrepresented communities, specifically Black, brown and low income communities, to share that knowledge of AI ethics with them; to hopefully train the next generation of socially conscious developers and changemakers in AI ethics, because we have to sort of be cognizant of those societal implications when we develop new technologies and when we discuss emerging technologies. And so I think that definitely inaccessibility is the biggest barrier when it comes to youth getting into the space. And we are actively trying to combat that with our own curricular offerings, with our own educational programmes, with workshops, events, materials, and things like that, that are specifically centred on accessibility, and on simplifying and expanding the audience to which this issue can be presented.
ELEANOR DRAGE: Awesome, thank you. As Timnit Gebru has emphasised, there's still an overarching assumption in data science, and across STEM more broadly, that science is neutral. And I'm interested in whether you think that Gen Z are more open to understanding that science isn't neutral. And when you came to speak at CFI, you gave us a dose of optimism by saying that we should expect Gen Z to provide a new generation of ethically informed developers, which is quite exciting. So can you tell us why you think this is the case? And why you think Gen Z are potentially more empathetic, more concerned with issues of equality, and more willing to transfer those feelings across to the kinds of work that's being done in data science more broadly?
SNEHA REVANUR: Yeah, of course. So as I said, you know, we are the most digitally literate, most technologically connected generation, and I think that that has manifested itself in many different ways. So for example, youth are predisposed and most vulnerable to experiencing social media radicalisation pipelines, you know, these algorithms that push people towards more and more extreme views, that could potentially provoke feelings of hatred, that could potentially, you know, inspire violence and things like that. And so I think that youth, for example, are most vulnerable to experiencing the impacts of those pipelines and falling down those rabbit holes. I also think that, for example, young women of colour are, you know, most vulnerable to seeing people who don't look like them and to experiencing feelings of inadequacy and low self esteem, especially because on social media, on TikTok say, we see that there's oftentimes a very certain archetype of an appearance that is emphasised, we see people who are lighter skinned, who fit a certain body type, who are able bodied. And I think that those standards definitely impose themselves on young people who don't fit those conventions and who are left feeling inadequate or, you know, insufficient as a result. And so I think that because we have unique experiences, because we've sort of spent our entire lifetimes with technology at our fingertips, we're less likely to sort of see it as an entirely objective and neutral thing, because we have directly experienced its impacts. And we have directly experienced how it pervades our everyday lives, and how it can shape how we view ourselves, how we view our friends and our communities, and whether we engage in certain actions, including violence, and how we also perceive our own self esteem and our own confidence. And so I think that in that sense, there's definitely a reason to be optimistic about Gen Z and how Gen Z will enter the workforce as developers and technologists, because we have that unique understanding that stems simply from our own exposure, that stems from our own experiences. And I think that for that reason, Gen Z definitely is less likely to fall into the trap of believing that, you know, technology's perfectly objective and neutral.
ELEANOR DRAGE: Absolutely. And I think that that knowledge is part of, and should be part of, the AI development pipeline. That is AI knowledge. It's such a crucial, important and new form of understanding what AI is and what makes AI. So looking to the future, what kinds of technologies do young people want to see developed?
SNEHA REVANUR: The technology that I want to see developed shouldn't just need to be neutral, but should actively seek to counter existing hierarchies. It shouldn't seek to operate within existing spaces, it should seek to challenge those spaces. So we want to see anti-racist technology, or technology that is actively feminist or that actively tries to correct, you know, historical patterns of gender injustice in its very existence. We want to see technologies that are, you know, able to recognise and are cognizant of varying gender identities, people who are nonbinary, transgender. I think that, you know, if we think about technology as a binary concept, it's not able to process that, it's not able to process the diversity of genders. And I think that definitely stifles expression, and that definitely stifles people's identities. And so I think that when I think about technology, it should not only operate in the existing world, it should not seek to be neutral, or it should not seek to amplify existing patterns and trends; it needs to counter those and actually liberate people from those trends, and from those systems and institutions. It should be anti-racist, it should be feminist, and it should be actively uplifting to young people, and people of colour, and to LGBTQ+ youth. So I think that for sure, that's sort of what guides my understanding of technology and the sort of technology that we want to develop going into the future.
KERRY MACKERETH: Absolutely fantastic. And I love this idea of like developing technology that speaks back to power and sort of actively uplifts people. I think it's really hopeful and it's really inspiring. And of course, you know, you've mentioned earlier in this interview that you understand power fundamentally in intersectional terms. So I was wondering, sort of in this like closing stage of this interview, if you wouldn't mind sharing a bit more about Encode Justice's understanding of intersectionality and maybe even just like giving us some examples of how you see that playing out in your work?
SNEHA REVANUR: Yeah, of course. So I think that, as you mentioned before, you know, age is only one axis of oppression, and there's so much interplay between factors like age and other factors like race, you know, class, gender, gender identity. And I think that we are uniquely aware of that in our own work. For example, when we talk about risk assessment algorithms, I see the fact that age is the single most powerful predictive variable for those algorithms and is the single most powerful variable in determining what a defendant's risk score will be. And what that means is that it disproportionately penalises young men of colour, people who have experienced the super predator myth, people who have experienced the school to prison pipeline, people who have experienced decades and even centuries of racist practices. And that now feeds itself, and that goes hand in hand with age. And so I think that, in that sense, race and age are interconnected, and they create a unique experience that harms and discriminates against young men of colour. And, for example, in the context of facial recognition technology, we've seen this long standing and, you know, pretty harmful myth that associates Black women with masculinity, and that robs Black women of their femininity. And that same concept has been encoded into our technology, has been encoded into our facial recognition technology, which, you know, disproportionately confuses Black women with men, and isn't able to sort of process gender identities beyond the gender binary. And so when we talk about intersectionality, we definitely consider all of those factors as they weave together, we consider factors like race, gender, age, location, things like that. And I think that we have to recognise that all those things uniquely come together, that, you know, for example, for facial recognition, yes, women are more likely to be misidentified than men, but Black women specifically have staggeringly lower rates of accurate identification than any other group. And that's what, you know, any liberatory technology in the future would have to dismantle.
ELEANOR DRAGE: So I'm really interested in your strategies for intergenerational communication. It's a challenge that we all have. And I'm thinking also about the conversations that we're having with our parents trying to explain the work that we do and why we care. So what are the challenges that we face when trying to communicate with people of different ages? And how can we overcome them?
SNEHA REVANUR: Yeah, so I think that, as you mentioned before, Gen Z is sort of the least likely to fall into this trap of believing in algorithmic neutrality. But I think that sort of older generations are more likely to fall into that mindset, and I think that's one of the greatest challenges that we face right now. And I also think that when we approach intergenerational conversations, we have to make sure to contextualise them with our own experiences and stories. And so, for example, whenever I have conversations with older folks about, you know, algorithms and algorithmic justice, I make sure to cite the specific examples that young people are experiencing: you know, social media radicalisation pipelines, risk assessment tools, facial recognition technology, you know, how those algorithms on TikTok and other social media platforms actually reinforce existing beauty standards and influence our notions and perceptions of our own selves and our own levels of confidence. I think that by contextualising technology and contextualising our experiences, in the sense of sharing our own case studies and examples, I think that we're able to sort of better communicate to other people who are not impacted by the same examples how it sort of feels to perceive and be on the receiving end of those algorithmic harms and discrimination. I think that that is the biggest way in which we can break down those barriers and facilitate more effective communication. I also think that in the sense of advocacy and lobbying, we've definitely encountered that policymakers and the people with the most power to shape technology regulation and technology policy oftentimes have the most surface level or the most insufficient understanding of how those technologies actually work and operate, which obviously inhibits effective regulation, inhibits good governance, because the people in power aren't aware of the technologies and, as a result, can't effectively regulate or govern them. And I think that, obviously, the band aid solution there is to just get young people into office more, and to, you know, give us more space for political power. But I think that in the context of today's day and age, we have to sort of not only expand, which is what we're talking about at Encode Justice, expand our educational outreach initiatives to include older generations and make sure that we're able to make this content more accessible and digestible to them, but also facilitate that intergenerational conversation by sharing our own experiences of algorithmic harm, to sort of contextualise that and demonstrate the need for effective technology regulation. And I think that all of that goes hand in hand. And so when I think about this, I think about, you know, contextualising our experiences, I think about expanding our educational initiatives to sort of be more accessible to older generations, and I also think about how we can get more young people into office and get young people into spaces of political power and decision making rooms where they can ultimately actually have a say and actually have control and input over the algorithms that govern their lives. And so I think that those are the three things that come to mind in terms of how we can better combat that issue that we're currently seeing.
KERRY MACKERETH: Fantastic, this really excites me so much. It makes me so hopeful to think of all the amazing intergenerational conversations and activism that's happening in this space at the moment. Final question, what's in the future for Encode Justice? What's coming up next for you guys? What are you planning?
SNEHA REVANUR: Yeah, so we're planning to continue to expand our current initiative to ban facial recognition. So far, advocacy efforts have been fairly US centric, but we are hoping to expand internationally now that we have, you know, this international network of chapters. So for example, currently we're working on a foreign policy research project, analysing how technologies like facial recognition are used by authoritarian regimes around the world, and how that influences human rights abuses worldwide. For example, we've taken account of how facial recognition technology is used and how that applies to the Israel/Palestine conflict, or, for example, how it's being used in the current military coup in Myanmar, or how it's been used by India to crack down on farmer protesters, or how it's been used in Russia, in Moscow specifically, to crack down on people exercising their, you know, rights to protest, and we've seen how, for example, in Uganda, it's enabled election interference and how it's suppressed, you know, the right to vote and the right to protest. And I think that we are currently exploring those different manifestations in our current foreign policy research project. So that is pretty exciting, because it allows us to expand our horizons and sort of look at different case studies on an international scale. So yeah, I think that when we talk about what's coming up for Encode Justice, definitely an expansion of our current campaign: we do hope to continue lobbying for federal legislation to ban facial recognition in the US and to govern algorithms more broadly. We also do hope to continue to expand internationally and to elevate diverse perspectives, especially perspectives originating in the Global South, on AI development and technology development. And so I think that when I think about what's next for Encode Justice, those are the main things I think about, but definitely we do hope to continue expanding dramatically and reaching more and more people with our programmes, especially our workshop programme.
ELEANOR DRAGE: Well, I so look forward to seeing what the future has in store for you. If anyone could do this, you can. And as always, it's a complete delight, and really inspirational to hear from you. So thank you so much, and hopefully we can speak again very soon.
SNEHA REVANUR: Awesome. Thank you so much for having me. This is an amazing experience. I love that you guys have this podcast. It's wonderful to hear these important perspectives on feminism and gender and race and technology.