
The EU AI Act Part 1, with Caterina and Daniel from Access Now

In this episode, we talk to Daniel Leufer and Caterina Rodelli from Access Now, a global advocacy organization that focuses on the impact of digital technologies on human rights. As leaders in this field, they've been working hard to ensure that the European Union's AI Act doesn't undermine human rights or indeed fundamental democratic values. They share with us how the EU AI Act was put together, the Act's particular shortcomings, and where the opportunities are for us as citizens or as digital rights activists to get involved and make sure that it's upheld by companies across the world.


Note: this episode was recorded back in February 2024. 


Daniel is a Senior Policy Analyst at Access Now’s Brussels office and Emerging Technologies Policy Lead. His work focuses on the impact of emerging technologies on digital rights, with a particular focus on artificial intelligence (AI), facial recognition, and biometrics. While he was a Mozilla Fellow, he developed aimyths.org, a website that gathers resources to tackle myths and misconceptions about AI. He has a PhD in Philosophy from KU Leuven in Belgium and is a member of the OECD Expert Group on AI Futures.


Caterina is an EU Policy Analyst at Access Now. She works on issues related to biometric surveillance, artificial intelligence, and privacy, and her main focus is the intersection between technology, borders, and the rights of people on the move. Previously, Caterina advised on strategic litigation cases challenging EU migration policies in the Central Mediterranean and in Libya. She has worked and volunteered for several NGOs that support migrants' rights, such as the Platform for International Cooperation on Undocumented Migrants (PICUM) in Belgium and the Mobile Info Team for Refugees in Greece. Caterina holds an Erasmus Mundus Joint Master Degree in Cooperation Studies in the Mediterranean region from the Universitat Autònoma de Barcelona, Ca' Foscari University of Venice, and Université Paul Valéry Montpellier 3.


READING LIST:


Access Now (2023) "Human rights protections…with exceptions: what’s (not) in the EU’s AI Act deal". Available at: https://www.accessnow.org/whats-not-in-the-eu-ai-act-deal/


Access Now on Artificial Intelligence - check out their spotlight on the EU AI Act here: https://www.accessnow.org/artificial-intelligence/


Keyes, O. (2023) "Automating Autism" In Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines (Oxford: Oxford University Press).


McInerney, K. and Keyes, O. (2024) "The Infopolitics of Feeling: How Race and Disability Are Configured in Emotion Recognition Technology". New Media & Society. Available at: https://journals.sagepub.com/doi/10.1177/14614448241235914?icid=int.sj-full-text.citing-articles.12


TRANSCRIPT:


KERRY:

Hi, I'm Dr. Kerry McInerney. Dr. Eleanor Drage and I are the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And how can feminism help us work towards it? If you want to learn more about today's topic, head over to our website, www.thegoodrobot.co.uk, where we've got a full transcript of the episode and a specially curated reading list by every guest. We love hearing from listeners, so feel free to tweet or email us. And we'd also really appreciate you leaving us a review on the podcast app. But until then, sit back, relax, and enjoy the episode.


ELEANOR:

In this episode, we talk to Daniel Leufer and Caterina Rodelli from Access Now, a global human rights organization that works in particular on digital issues. They are amazing people, and they've been working so hard to ensure that the European AI Act, this massive new bit of EU regulation on AI, doesn't crush human rights or indeed fundamental democratic values. They give us some great gossip on how the EU AI Act was put together, what its downfalls are, and where the opportunities are for us as citizens or as digital rights activists to get involved, to make sure that it's upheld by companies across the world. Some of you may have been following the big Mistral scandal. One of France's homegrown foundation model talents, a company that makes state-of-the-art AI models, a bit like OpenAI but from France, put pressure on Brussels to make this Act a bit more lenient on companies developing foundation models. And I don't know whether you've been following, but Microsoft has just invested in them. So it seems as though, in this massive grift, Microsoft were using Mistral as a pawn in their plan to get a more lenient EU AI Act. We talk about this and much more in this episode. I hope you enjoy the show.


AD:

The Future of America is in your hands. This is not a movie trailer, and it's not a political ad, but it is a call to action. I'm Mila Atmos, and I'm passionate about unlocking the power of everyday citizens. On our podcast, Future Hindsight, we take big ideas about civic life and democracy and turn them into action items for you and me. Every Thursday, we talk to bold activists and civic innovators to help you understand your power and your power to change the status quo. Find us at futurehindsight.com or wherever you listen to podcasts.


 KERRY:

Brilliant. Thank you so much for joining us here today. Just to kick us off, could you tell us a little bit about who you are, what you do, and what brought you to the EU AI Act? So Daniel, shall we start with you?


DANIEL:

Sure. Yeah, my name is Daniel Leufer. I'm a senior policy analyst at Access Now's Brussels office and Access Now is a global human rights organization that works to protect and extend the digital rights of users at risk around the world.


So we provide digital security assistance and really work globally, but our Brussels office mainly follows EU regulation. And we've been working on the EU AI Act since before the first draft. Our former EU Policy and Advocacy Director, Fanny Hidvegi, was on the High-Level Expert Group on AI that actually preceded it, back when everyone was still talking about AI ethics, and we have been following it since the white paper that preceded the draft, right the way through to the final deal.


KERRY:

Amazing, and what about you Caterina?


CATERINA:

Hi, everyone. My name is Caterina Rodelli. I am an EU policy analyst at Access Now, at the EU office as well. I started working on the intersection between the AI Act and other pieces of migration policy, which are all full of AI surveillance technology applications.


ELEANOR:

That's fantastic, because it's really important to know how the EU AI Act, this really influential bit of AI legislation, interacts with different things that are going on around Europe and migration flows. And we'll get to that in a bit, but first we'd love for you to tell us: in the context of legislation around AI, perhaps, what is good technology? Is it even possible? And how can great legislation, feminist legislation, help us get there?


DANIEL:

Yeah. It's a very tricky question. A while ago, Caterina and I were both at a very security-focused, very industry-focused conference, and we heard someone on stage repeat that age-old lie that technology is neutral. And I think most of your listeners will be aware of many reasons why it isn't.


One thing that's quite obvious, I think, is that regulation is not just about preventing misuses of technology, because negative consequences don't only happen when technology goes wrong, when it fails, or when it's used explicitly to do bad things. And one thing that I think will come up as we discuss the ins and outs of the AI Act a bit is that a minimum step, which is absolutely not a silver bullet but really just the first step, is transparency. That's something that regulation can do, and that was a big part of discussions during the AI Act, because if we don't know what technology has been used, if we don't know anything about it, and we're scrambling to even find out whether this thing is being used and what the basic pieces of information about it are, then the risk of people's rights being undermined is massively increased.


So I don't know if that gets us to good technology, but if we don't have that then we're certainly going to end up with a lot of bad stuff.


CATERINA:

It's a very existential question that goes way beyond the AI Act. And as you asked it, it just made me think of how technology throughout the centuries has played a role in enabling oppression and colonization.


And many other things that still bring us today to the point of strengthening a specific group of people over the rest of the world. So I think what good technology could be is a technology that is just not reinforcing all of the above and all these forms of oppression, which I think was quite key in our work.


On the Act, as Access Now and as a coalition, that meant speaking to the actual problem, which is existing discrimination, existing harms and forms of violence, and how, in our society, technology is playing a role in making these even stronger.


Yeah, good technology is just something that can help us break all of this.


ELEANOR:

Can I just say that it's so important, what you've just said about good legislation not just being about preventing misuse. It's about promoting good use, and it's legislation that doesn't promote myths around AI, such as the one that you mentioned, that it is neutral.


KERRY:

Absolutely. I think when I first started out in this field, I was willing to entertain this idea that, oh, technologies can be neutral, it's what you do with them that really matters.


And the longer I've been in this field, the more I'm like, no. Maybe it's just because I see the worst of it, as an AI ethicist or as someone who's dealing a lot with these very poorly designed technologies that simply are normatively and technically and ethically bad.


But also, like you both said, it doesn't have to just be about these moments where tech fails; these technologies can entrench very deeply these existing systems of harm and power. So I want to come to the EU AI Act, and we desperately wanted you both on, because I think you've really been at the forefront, as you described at the beginning of this episode, of mapping and shaping what this Act is going to look like.


And actually, whenever I wanted to know what was going on, I used to go to Daniel's Twitter and be like, what's Daniel tweeting about the EU AI Act today? But for our listeners who might not be familiar with this big landmark piece of legislation, could you just give us a brief rundown: what is it?


Why does it matter? And what are the kind of core principles or content of the act itself?


CATERINA:

It's hard. I can start from my perspective and then you can complement it. But yeah, it started as this forced combination of internal market regulation and fundamental rights regulation, which I think is quite a key aspect of this law.


It's a law that had this double premise of promoting the use of trustworthy AI, but also at the same time protecting fundamental rights. So this is the official premise: it was about how to ensure that, in the European Union, the use of good AI technology was promoted in a way that was not violating people's fundamental rights.


But this premise is already poisoned at the very beginning, because it creates this narrative that you can strike a balance between innovation and fundamental rights, and that at some point you have to compromise one of the two. And for the European Union and the current society we are living in, it's not at all obvious that you would compromise market regulation rather than fundamental rights; what we have seen over the course of the last years is that policymakers are more prone to limiting fundamental rights and to ensuring that market regulation is done in a way that does not stifle innovation.


And yeah, the premise of this legislation was that these two things are somehow incompatible: that if you want to protect fundamental rights, you will necessarily have to stifle innovation, which is something we have challenged from the very beginning. And Daniel can certainly speak to that.


DANIEL:

And what Caterina pointed out is really important: the Commission, I think, was always trying to satisfy very conflicting objectives. It's important as well to note that, while I know a lot of people find it hard to understand the difference between the different institutions, it's even more complex, because the Commission is not a unified entity.


There are different DGs within the Commission that are responsible for different things. Some of them are looking at protecting people's rights, some of them are looking at security, et cetera. So there are debates happening even before the law gets there, and the first text that you see is already the result of compromises, and then it goes through this whole other process.


On that point that Caterina made about this problematic opposition between innovation and rights, we've always been very clear that you're not balancing innovation against people's rights. That is not something that we do. There are of course discussions about balancing different rights, and there can be situations where we need to balance freedom of expression against other rights, et cetera.


There are procedures for that. But just undermining people's rights for the sake of innovation is not something that gets done. And I just want to point out one thing as well at the beginning, which is that the word 'innovation', like the term 'artificial intelligence', is very problematic, because I think it's used in two different ways that are not mutually compatible.


It has a very neutral meaning, which is just something new. And then it has a more loaded meaning, which is something new that's good. And often industry will say, we need more innovation. But do we need innovation in chemical weapons? Do we need innovations in ways to undermine people's rights, et cetera?


Or do we need socially beneficial innovation? I find that if you listen to anti-regulation lobbying lines, et cetera, it switches between those two meanings depending on what's convenient. So I often like to say to them: you need to pin down what you mean here. Are you using the word innovation in a value-laden way?


Because then we're going to be saying that certain types of things are not innovative. Or it's neutral, and then yes, we do want to stifle certain types of innovation. We want to stifle innovation in stalkerware. We want to stifle innovation in all of these things that literally, in some cases, undermine the fabric of our societies. And another point on that, I think, is that a narrative is often presented around AI regulation, which is that this is this incredible new technology that's totally unprecedented, that policymakers don't really understand, and you're trying to put restrictions on it. But actually, in many cases, it's not very new; these things have been around for a long time.


And in some cases, we don't even really need to talk about AI that much to understand what the impact of a certain technology is. An example of that, I think, that we'll come back to is the use of what people often term facial recognition, but which in the context of the AI Act has the less digestible name of remote biometric identification in publicly accessible spaces.


So these are usually machine learning based systems that can identify people from a watch list, used in public spaces, so basically able to identify anyone. But applications of that, like that kind of application of AI and certain others, actually undermine existing safeguards. And so if we don't regulate, then those developments in AI are actually destroying existing protections people have under existing laws. So the idea that we should wait and see and then regulate later is a bit wrong, because actually the way the technology evolves necessitates rethinking how regulation is done and what protections are there. That's really important to keep on top of, and in the context of the AI Act it's important to have that frame, because you push back against the idea that 'ooh, this is an extreme form of regulation'.


No, in many cases, these are extreme forms of technology. This is sometimes extreme tech that actually requires protections to be rethought.


ELEANOR:

Yeah. So you're battling against a lot of assumptions when you come to regulate. I'd love to know some stories, some anecdotes about what it was like to track the passing of the EU AI Act.


Could you tell us some little snippets? Because I don't think anyone thinks it was easy. The European Commission is renowned for being a very complex beast. So yeah, tell us some stories. Caterina, do you want to go first?


CATERINA:

Yeah. Okay. So I just started referring to this whole process as a very toxic relationship.


Very long, dragging on for years, just not accepting the fact that some of the people in the relationship are incompatible. And I think it relates again to the fact that it tried to combine two things on the wrong premises, balancing innovation, as Daniel was saying, with fundamental rights. So yeah, you might know that it lasted for a very long time. It started way before April 2021, which is when the Commission proposed the law, and then it took a very long time to decide who in the Parliament would take responsibility over this file.


I don't know how much people are aware of how this works, but the EU Commission receives all the inputs and writes the law, and then it sends it to what are called the two co-legislators, the European Parliament and the Council, which have to come up with their own versions of this text.


And here is when it became dramatic, or funny, depending on the perspective, the perception. Within the Parliament and within the Council, they had very different opinions on what their positions should look like. So there were internal battles, but then they had to come together, and they are on opposite extremes.


So you have the Parliament, which represents European citizens and is elected. And after a lot of meetings with assistants and rapporteurs and shadows, the Parliament came up with a text which was quite good. It was not perfect, but it was way more people-centred: it increased the bans, it added way more prohibited systems, and it strengthened the language when it comes to the uses of AI at borders and in immigration. But it was still far from perfect, because, for example, on the immigration side there were no bans, so the Parliament clearly failed at recognizing that you have to protect every person.


So the AI Act at the very beginning had only four bans. One of the most famous is what Daniel was referring to, the ban on remote biometric identification, so the capacity for someone, let's say the police, to identify you in a street if you were just passing by. But it was very limited on a number of other very dangerous systems; for example, predictive policing was not on the list, so the capacity, again, for law enforcement authorities to use systems that claim to be capable of predicting crimes.


So, whether you are likely to commit a crime. But then the Parliament also added a ban on emotion recognition, so systems that claim they can infer, for instance, whether you are lying or not, and on biometric categorization, so making assumptions on the basis of your body about protected characteristics such as your age, sexual orientation, gender or ethnicity. But we missed some bans on the use of profiling systems in migration procedures and also forecasting tools. So these are some of the bans that made it into the Parliament text, which was very good, a big win for us. But then it came to the Council, and the Council represents governments and governments' interests.


So when you want to strengthen things, to make sure that fundamental rights are not sacrificed, that was the big battle, because there was a completely different way of perceiving priorities: the Council had a completely different take on what the role and the use of AI by law enforcement and migration authorities should be.


Just to give one example, it did not agree at all with this long list of bans and transparency provisions. So I think this was one of the big, we can say, dramas, but it's actually very serious, because you have completely different takes on what society would look like, should look like.


ELEANOR:

Yeah, that's fascinating, isn't it?


It brings all these massive questions into play, which is what we're trying to do on the podcast. It shows that legislation actually is about these bigger issues: what do people think is morally right? What should society look like? All of these things that we should all be participating in.


Okay, Daniel, two minutes, quick gossipy story for us about the passing of the Act.


DANIEL:

Yeah, so I can give you all some gossipy stuff. I'd also zoom out a bit and just say something about the process a little bit. One thing that I mentioned before is that there are obviously already disagreements within the Commission before the legislation comes out, but anyone who followed the AI Act to any degree will know that there were a lot of leaks involved.


And even before we actually got a draft of the AI Act, there was a white paper, and a draft of the white paper was leaked. So someone internally wanted the public to see it, where there was discussion of a ban on facial recognition, because they knew that it wasn't going to be in the final public text.


So that's interesting. And the same thing happened with the draft of the AI Act itself. We got a draft that had something that someone knew was going to be removed from the final text, and it gives us a hint, for example, that there is even support within the Commission for this thing, so we could argue for it.


And then this happens throughout the process. Someone doesn't like how an internal discussion goes, so they hand something to a journalist, often one journalist in particular, who then manages to leak the text. So there's a lot of that type of thing happening. But one important anecdote, I think, about how we worked is that when we got the text initially, we saw that it covered so many things, as Caterina mentioned: migration, security, rights of people with disabilities, all of this stuff. We realized there was a big danger that all civil society organizations would work in silos and have quite fragmented recommendations. And so we, and by that I mean Access Now but also European Digital Rights, so Sarah Chander and Ella Jakubowska, immediately put a huge amount of work into building a coalition.


Because we knew that we had to get people who were experts on all the different issues that the AI Act impacted on together, to formulate a coherent position and then advocate for that as one voice, to the extent that it was possible, and we really did that. And I think one example of where that was effective, up to the level of effectiveness we could achieve, was on emotion recognition, which Caterina already mentioned. Because when we started advocating for a ban on emotion recognition, building on some work done by Article 19 and others, we kept hearing back from policymakers, 'no, but emotion recognition is really important for people with autism, to help them recognize emotions'.


And thankfully, in our coalition, thanks to this great work done by Sarah and Ella, there was already contact with the European Disability Forum. We talked to EDF to ask whether they had members who represent people with autism, and then we spoke to them, and, building on Os Keyes' work and the work of some others, the answer was: no, we don't need emotion recognition. It's actually hyper-ableist and built on this totally flawed idea that our expression of emotion is the wrong one and needs to be adapted to some other version. And we were able to work with them to actually adapt our own position, strengthen it, and then bring that to policymakers.


There was even a hearing within one of the political groups that some of those people participated in, and they completely came around, and so we stopped getting that resistance, that false narrative that was presented by industry. When we got the people who were actually affected within the coalition to raise their voice, it was very successful, at least in the Parliament position.


I think what's in the final text falls short of that demand, but that's a positive anecdote, I think, about why coalition work like that is so important.


KERRY:

I think that's an incredible story about the importance of solidarity, which is something that Eleanor and I think about a lot: how do we create these coalitions across difference, engaging with difference, rather than being reduced to a single voice or broken into these silos?


And I'm also so grateful to hear that the piece around emotion recognition technology was at least successfully brought forward at the Parliament level. I actually have a piece coming out with Os quite soon, hopefully, specifically critiquing emotion recognition technology and these claims made around autism and disability, and how ERT is meant to somehow be seen as a fixer in this situation, as opposed to, again, something that falls back on these hyper-ableist tropes.


So we've talked a little bit about some of the bans that were proposed and the ways that some of those bans made it through, and some of them didn't. But I'd love to hear you reflect a little bit further now that we have the final text of the Act. What do you think are the limitations of the Act as it stands, and what would you like to have seen changed?


DANIEL:

So Caterina mentioned this already: it's internal market regulation with some fundamental rights sprinkled on top, and it's worth delving a bit into our dissatisfaction with the entire framing, because that really comes back in the limitations of the final text.


And I always give this anecdote that when we started working on AI, the General Data Protection Regulation, the GDPR, had just been adopted and come into force. It's a joke that if you see an academic research project from 2016, it talks about big data. And if you looked at, I don't know, autocomplete associations, you put in 'data protection' and you got 'regulation'.


But then in about 2017, it's 'artificial intelligence' and 'ethics', and that's often self-regulation. So you saw this shift to 'no, AI is too complicated, don't regulate it, we have our own ethics guidelines', et cetera. And the Commission, I think, really got lobbied incredibly hard not to do the GDPR again.


And the GDPR is fundamentally a rights-based regulation. We have an article coming out in the UCLA Law Review on this soon, actually, because it provides people with rights in all situations. If you process my personal data, I have rights over that personal data; it doesn't matter whether it's very impactful processing or trivial processing.


But the Commission opted for what they call a risk-based approach to regulating AI, and that was really in response to all of this negative industry lobbying following the GDPR. In contrast to what I said about giving you rights that are applicable in all circumstances, the AI Act only provides rights or places requirements on developers and deployers of AI systems when they fall into a certain risk category.


And that means, inherently, that anything that's outside of that risk categorization has no requirements. But we know how contextual harms are, and that they can arise from combinations of different AI systems that maybe don't seem like they're risky, or from other data sets coming into play.


And the AI Act has this quite rigid and heavily lobbied list of what is a high-risk system, and everything that's outside of it just gets off scot-free, basically. That was always an issue for us from the beginning: this is so flawed as an approach, and it doesn't centre affected people (the term 'affected people' was basically not mentioned in the original Act), so there were always going to be massive shortcomings, and they are really reflected in the final text, because that risk-based approach was actually further destroyed, I think, during the negotiations.


And one of the key things that the AI Act should do, an actual value it should have, is that if a system falls into the high-risk list of use cases, and this is things like certain law enforcement uses, uses in public services, education, the workplace, then it goes into a public register of AI systems. So you would know that the system is actually being used, which is a good first step: we actually know there's an AI system in use. But law enforcement and migration authorities managed to carve out an exemption from that transparency requirement.


So that's one big problem. But the other big structural flaw that crept into the AI Act was around Article 6, Paragraph 2, which was a very unassuming article that no one talked about or paid attention to. It basically just said something very simple. The way the AI Act works is: is your system an AI system?


Okay, there's a whole debate about what the definition is. If yes, then you're in scope, but you have no requirements yet. But do you do one of the high-risk things listed in Annex 3, where you have this list of uses? If so, you're high risk. That doesn't mean it's a bad system; it just means that if things go badly, then there are risks.


So you have to follow transparency requirements, responsible development practices, et cetera. That was all fine, but industry and, I think, states, et cetera, lobbied to introduce a loophole in there, which, when I first saw it tabled at the Parliament, I couldn't believe. It was like the worst amendment ever.


It said: you will be high risk if you're listed in Annex 3 and you yourself decide that you really pose a risk. I thought it was such a joke that it could never possibly pass. And we really worked to try to stop this throughout, because it popped up in the Council text as well, and it actually ended up making it through to the Act.


And here's an anecdote as well. The Parliament has a legal service, a group of lawyers it can consult on the legality of certain measures, amendments, et cetera. We found out at some point that the legal service was consulted about this loophole, and it came back with basically the same assessment that we had: this is completely unworkable, it destroys legal certainty, it creates incredibly dangerous loopholes.


That was leaked, thankfully, because it wouldn't usually be public, and they ignored the legal service's opinion and still kept it in. So at the heart of the AI Act's flawed risk-based approach, you now have another loophole with four criteria under which developers of systems can decide that they're not actually high risk and skip almost all of the requirements. There is a little safeguard, which is that they have to note in that public database that they exempted themselves. But this creates huge work, because now we have to go tracking down all of the people who exempted themselves. So that's a deep structural flaw.


KERRY:

What a nightmare. Sorry, I didn't know about this; as you can tell, my area is not regulation, and I don't do as much work on the EU AI Act specifically as Eleanor does, so I was not up to date on this. And that's just like me deciding I'm just going to opt out of all rules and regulations, which I do in the office anyway, I just walk around and eat snacks. But it's absolutely wild that that got through. So was that just because of industry lobbying?


DANIEL:

Yeah, industry lobbying and then institutional inertia as well. At some point there was a version of it in the Council's text, so their opinion on the AI Act, and then it got into the Parliament text as well. And then I heard lawmakers claim things like, we have no mandate to remove it.


But you had an opinion from your legal service that said this is unworkable, take it out, and then they didn't do it. So yeah, I think massive industry lobbying, and I think also this: I saw it really as the success of years of op-eds and everything against the GDPR, just this constant battering of the GDPR that had destroyed the Commission's confidence, and the institutions' confidence, to really put their foot down and protect people's rights.


Even this low bar that the AI Act sets, they were willing to lower it again and introduce an opt-out for industry.


ELEANOR:

Just very quickly, inertia, in what sense? Were they just worn down by the lobbying? Eventually, did they not care anymore? What happened?


DANIEL:

I think Caterina can give examples of this on other issues as well, but there are so many compromises and so much horse-trading that goes on to get to the point of the Parliament having a text.


And to be honest, there was an incredibly skillful bundling of issues in that Parliament text, where some maybe progressive lawmakers wanted a thing, but that was bundled with a thing that they didn't really like, and to get their thing, they had to accept the other thing. And so you get this mishmash of a text. It's a depressing bar, but they often say that the best EU compromises are when everyone is equally unhappy.


So everyone has given up something to some extent as the thing goes on, but increasingly it's people's rights that are being given up throughout it. It's often presented as, oh, that's just how things go, but it shouldn't be, because, as Caterina said, a huge amount of protections in the migration context were just given up.


And there's a lack of ambition and imagination in that process of compromise. It can seem very mundane, but it isn't: when you look at the sacrifices that are being made, they're hugely impactful.


KERRY:

Just to finish us off, because we're running to the end of our time and this has just been such a fascinating conversation. I wish we could keep you here all day, but you're both extremely busy people; as all of you listening to this podcast can tell, you're both doing a huge amount of work. So in a minute or two or less, if you could have one magical regulatory wish fulfilled, the great AI regulation fairy comes down, what would be the thing you'd really want to see moving forward? Caterina, shall we start with you?


CATERINA:

It's hard. I think it would just be to go completely outside this box. We didn't get a chance to mention it, but this regulation just legalized impunity for police and migration authorities when they use AI systems. There is no transparency. This thing that Daniel was saying, about how you have to say when you think you're not high risk?


This does not apply when the AI system is used by police; there will be no trace whatsoever of anything. So my regulatory dream is just that we completely change the narrative and we start speaking about policies for enhancing protection against violent acts at the borders, against acts of racism, and then there you include the tech-facilitated element of all of it. But as Daniel was saying, there is a huge lack of creative power and imagination, and policymakers are stuck in a very colonial structure of policymaking. My dream would be that we break this whole policymaking circle and really focus it on protections. Because the Act was conceived as a risk-based regulation, there is already an assumption that you decide not only what is a risk, but who is at risk and who is worthy of protections. So the whole framework is just a recipe not only for disaster, but for violence. I would just change the whole framework of it and make it really about protections.


DANIEL:

I can only plus one that.


ELEANOR:

Excellent. Thank you so much for joining us today. It's been incredibly interesting and we've learned a great deal.


ELEANOR:

This episode was made possible thanks to the generosity of Christina Gaw and the Mercator Foundation. It was produced by Eleanor Drage and Kerry McInerney and edited by Eleanor Drage.
