Kerry Mackereth

Cynthia Bennett on AI, Disability and Accessibility

In this episode we chat to Cynthia Bennett, one of the leading voices in AI and Accessibility and Disability Studies. She’s currently a researcher at Apple and a postdoctoral scholar at Carnegie Mellon University’s Human-Computer Interaction Institute. We discuss combatting the model of disability as deficit, how feminism and disability approaches can help democratise whose knowledge about AI is taken into consideration when we build technology, and why the people who make technology need to be representative of the people who use it. We also discuss the things that go wrong with AI that helps disabled users navigate their environment, particularly what can go wrong when using images labelled by humans.


Cynthia Bennett (she/her) is a postdoctoral researcher at Carnegie Mellon University's Human-Computer Interaction Institute and a researcher at Apple, supervised by Jeffrey Bigham. Her research sits at the intersection of Human-Computer Interaction, accessibility, and Disability Studies. Her work spans from the critique and development of HCI theory and methods to designing emergent accessible interactions with technology. Irrespective of the project, her aim is to inform what she does as much as possible with the lived experiences and creativity of people with disabilities.


Reading List


By the Guest:


Bennett, C. L.; Gleason, C.; Scheuerman, M. K.; Bigham, J. P.; Guo, A.; To, A. (2021) '“It’s Complicated”: Negotiating Accessibility and (Mis)Representation in Image Descriptions of Race, Gender, and Disability' in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 19 pages. https://doi.org/10.1145/3411764.3445498


Bennett, C. L.; Peil, B.; Rosner, D. K. (2019) 'Biographical Prototypes: Reimagining Recognition and Disability in Design' in Proceedings of the 2019 ACM Designing Interactive Systems Conference (DIS '19), June 23–28, 2019, San Diego, CA, USA.


Bennett, C. L.; Rosner, D. K.; Taylor, A. S. (2020) 'The Care Work of Access' in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), April 25–30, 2020, Honolulu, HI, USA.


Bennett, C. L.; Brady, E.; Branham, S. M. (2018) 'Interdependence as a Frame for Assistive Technology Research and Design' in Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18), October 22–24, 2018, Galway, Ireland.


Recommended by the Guest:


'30 Facts about Harriet ‘Moses’ Tubman' https://blackhistorystudies.com/resources/resources/facts-about-harriet-tubman/ (Guest's note: Here is a resource on Harriet Tubman, sharing that she was disabled, to show that disabled people have existed since the beginning of time, though they were often not called disabled in archives)

Mills, M. (2011) 'Hearing Aids and the History of Electronics Miniaturization' IEEE Annals of the History of Computing, 33(2). https://nyuscholars.nyu.edu/en/publications/hearing-aids-and-the-history-of-electronics-miniaturization (Guest's note: Here is a page on Mara Mills's research on the exploitation of disabled people as a proof-of-concept case of the miniaturization of electronics through hearing aid design in the early 20th century. While many technologies are possible because of this experimentation, hearing aids remain expensive and insufficient for people who are hard of hearing)


Mingus, M. 'Leaving Evidence' https://leavingevidence.wordpress.com/ (Guest's note: Mia Mingus' blog Leaving Evidence is instrumental in the research I have done on interdependence)


Transcript:


KERRY MACKERETH: Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE: Today, we’re talking to Cynthia Bennett, one of the leading voices in AI and Accessibility and Disability Studies. She’s currently a researcher at Apple and a postdoctoral scholar at Carnegie Mellon University’s Human-Computer Interaction Institute. We discuss combatting the model of disability as deficit, how feminism and disability approaches can help democratise whose knowledge about AI is taken into consideration when we build technology, and why the people who make technology need to be representative of the people who use it. We also discuss the things that go wrong with AI that helps disabled users navigate their environment, particularly what can go wrong when using images labelled by humans. We hope you enjoy the show.

KERRY MACKERETH:

Thank you so much for being here with us. It's really such an honour to get to talk to you about your work. So could you introduce yourself, tell us a bit about who you are, what you do, and what brings you to the topic of feminism, gender, disability and technology?


CYNTHIA BENNETT:

Yeah, thanks for having me. So I'm Cynthia Bennett, and I'm a postdoctoral researcher at Carnegie Mellon University's Human-Computer Interaction Institute. I research the intersection of accessibility, power, disability and technology. So I'm interested in the ways that technology communicates messages about what it means to be a person with disabilities, and what accessibility means. And I think about that in a lot of different ways, which I'll talk about later. What brings me to feminist and disability perspectives? I don't know the answer to that question, but some things that I think help are: I am a disabled woman, and I haven't felt very represented. And one thing I like about the research methods of critical approaches, such as feminism and disability approaches, is that they account for lived experience as a legitimate way of knowing and a legitimate research contribution. And they also kind of account for difference as something that is important. And that really contrasted with a lot of formal training I had, maybe in statistics, or, you know, other research training that emphasised, you know, isolating variables, minimising difference - and I noticed that when that happened, people like me were usually excluded. So I really appreciate that approaches, you know, like feminism and disability approaches, account for difference and honour lived experience as, you know, a legitimate and crucial contribution to research.

ELEANOR DRAGE:

So maybe that can help us answer our second set of questions then, which is, our podcast is called The Good Robot and that's our provocation. So can you help us answer: what is good technology? Can we have good technology? And how would we work towards it?


CYNTHIA BENNETT:

Yeah, this is such a hard question, because I think the words 'good' and 'technology' are so ubiquitous, but I thought about it a little bit. Um, what is good technology? So from my perspective, as, you know, a disabled woman and somebody who researches with people with disabilities and inside communities of people with disabilities, I think of good technology as, you know, that which is kind of democratic. And what I mean by that is that the people who are meant to use the technology are the people who are developing it. And right now, that doesn't always happen. Often the people who have the most power to develop and scale technology and products don't really represent the world that we live in. But I think there's also an additional problem, where often technology is developed not only with an intent that people use it, but with an intent that there's a profit gained from that. And that kind of adds a layer of - a type of priority that can sometimes mean that the technology becomes distanced from the people that need it and use it. And so I'd say first, you know, good technology is democratic: the people that are meant to use it are the people who have the power to make it. And I think that this manifests in relation to disability and accessibility particularly - I can give some examples - in the ways that the technology that people with disabilities really use and appreciate is often, you know, not always the newest or the shiniest technology that's coming out. Often it's, you know, very low-tech tools that people have adopted and made to work for them, so there can sometimes be a disconnect. And then I'd say the other thing that points to good technology for me is that it's kind of accountable and recognises its limitations. And so maybe it's not necessarily the technology itself doing it, but the people that are sellers or stewards of that technology recognising - okay, this is maybe what it does well, and this is maybe what it doesn't do so well. So let's make sure that it's used for the things that it does well. And so I think good technology is limited. I don't think there's any sort of universalizing product or anything like that. So I try to think about how we can recognise the limitations of a tool and appreciate what it can be good for. So I don't know, I think that was pretty broad and vague. But that's a hard question.


ELEANOR DRAGE:

Yeah, it's so interesting. And I think that obviously companies have a vested interest in over-claiming, in making claims that they can't necessarily substantiate, and these play into demands for technology to do things on behalf of a human so we can outsource tasks - so people want to buy into those claims. And I also think that what you said about democratic processes intersecting with technology is so interesting as well. So there's so much to say on that. But I really wanted to ask you next about how disability is currently understood and imagined in the field of human-computer interaction, and perhaps how it should be understood and imagined. And if you could explain a little bit about human-computer interaction as well, for our listeners who may not know that much about it.


CYNTHIA BENNETT:

Yeah, so human-computer interaction probably started to grow as an official discipline in the 1980s and early 1990s. And its kind of disciplinary parents are cognitive science and computer science - so kind of thinking about how do humans and computers come together? So, you know, just studying that relationship. And I feel very weird defining this, because there are people who define the field who are not me, who I've studied and read, and maybe have worth-listening-to definitions, but the way I understand it, the primary roots of human-computer interaction, as I mentioned, are kind of in the cognitive sciences - thinking about, you know, thinking, behaviours, our brains and bodies, and how those interact with computers. And what counts as computing technology has just ever widened, with, you know, smartphones and wearables and smart home devices and devices moving through public space and whatnot. And also the disciplinary influences in human-computer interaction have widened. So as an example, I mentioned that I really use a lot of feminist and disability theory in my work. And so those are ever being kind of introduced as complementary and important perspectives that human-computer interaction should consider, because in the case of feminism and disability, that's understanding humanity, and that's going to be key to understanding human-computer interaction relationships. So, disability is predominantly understood as a deficit, and that's not just happening in human-computer interaction, that's a pretty pervasive understanding. So disability is like a difference in your mind or body; often it's considered, you know, something that's diagnosed by a medical professional - you know, but not everyone has the access to get a medical diagnosis. So there are these kind of brain and body differences that differ from what is, you know, called normal. And there's a whole history around what is normal, having to do with kind of fabricated statistics in the early 1800s, as, you know, the field of public health became important to kind of surveil the citizenry and kind of maintain a healthy public that would be in the best interest of the state. But that's going on a tangent. So largely disability is understood as a deficit or a difference. And in the medical field, you know, cure is perceived as the best-case scenario - if we can erase or get rid of disability - and one way that manifests in human-computer interaction is through technology development. So are there ways that we can develop technology that helps someone with disabilities to gain access? And while that's very important, sometimes the underlying rhetoric can reinforce this idea of, you know, how can they become less disabled? How can the technology make them more normal? More recently, human-computer interaction has started to recognise that actually honouring differences, including disability, can help us to just honour humanity more generally. And so there are kind of growing conversations and design work in human-computer interaction that are meant to say, okay, maybe the question is not how does technology change your body or your mind to be a normal person, but how does it, you know, complement your strengths, or help you be the person that you want to be, in whatever, you know, disabled way that is.
But I would say that those perspectives are still growing, and the kind of predominant perspectives of disability in human-computer interaction are either non-existent - so the human in human-computer interaction is kind of non-disabled; often, you know, when you learn about HCI, you don't learn about people with disabilities - or, if disability is recognised at all, there's a tendency that the focus is on, how can technology make you more normal? How can it make you behave more normally, look less disabled, that kind of thing? And how should disability be recognised? I think, as I mentioned before, you know, maybe there are roles for design and technology to play in self-expression, or providing information access, without enforcing that someone change their behaviour, or change the way they think, to fit in with a normal standard. And also, I think there's a huge role for technology and design to play in helping non-disabled people to be better allies. So are there tools that can, you know, help put the onus on non-disabled people to do some work to be more accessible and more welcoming for colleagues and friends?


KERRY MACKERETH:

Absolutely. And I'd love to hear a little bit more about the history of disabled people as creators, technologists and designers and how that history disrupts what Lilly Irani calls the design saviour complex - I know this is something that you do a lot of research on. So we'd really love to hear your thoughts on this.


CYNTHIA BENNETT:

Yeah, so I really love that work that Lilly Irani has done in naming the 'design saviour complex', meaning that a designer - a professional, which could also include researchers or developers, or anyone kind of in this technological development profession - has some sort of authority and special talent to develop interventions, you know, usually in the form of technology, that are going to make somebody's life better; and that somebody is different from them, and probably, you know, experiences some type of oppression that, of course, that designer doesn't experience. And so there are all sorts of problems when you have this kind of design saviour complex, which is embedded in a lot of the tools and strategies that we teach in human-computer interaction, or what we might call user experience design (UX) or human-centred design - teaching the designer that they have an authority to kind of evaluate a situation and decide what technology should be developed in response. And actually, in parallel, there's something called the 'disability saviour'. So disabled advocates have named the disability saviour as the non-disabled person who comes in and helps, and so in representation in the media, we often see these stories as, like, oh, look at this wonderful person who took a disabled person to prom - they are a disability saviour, they're helping this disabled person. So I'm really glad that, you know, Lilly Irani named this design saviour, because it really parallels a broader phenomenon of what disabled people experience, where people - probably well intended - just want to help, just want to save us, but it comes with this underlying premise that we need to be saved, that our lives are not worth living just as we exist. And so, what is the history of disabled people designing? I feel like this is a cop-out answer, but disabled people have been designing since the beginning of time, which is when disabled people started existing. A lot of times these histories can be very hard to trace, because the language of disability, at least in English, in the US and the UK, really only started to arise with kind of the Industrial Revolution, and there's a lot of different reasons for that that I don't need to go into right now. But one way I try to be generous in thinking about how to recover the contributions of disabled people in history is to kind of comb through archives with attention toward: how are people building their worlds to fit them, when maybe, you know, they have a difference that is systematically unaccounted for by the predominant architecture or the predominant ways of living and thinking? And so, I would argue disabled people have been designing since the beginning of time. This work is under-recognised in favour of that design saviour complex, because if you're thinking, you know, from a company's perspective, or, you know, even an individual's perspective, thinking about your design portfolio, or the records of your work, you have to, you know, make the work that you do look good. And unfortunately, when it comes to people who have historically been under-recognised and continue to be under-recognised, often they become the collateral damage in favour of telling these stories of, oh, I have this skill set, I'm able to save, you know, a disabled person with my technology innovation.
So actually, the real stories of disabled people designing stand in direct contrast, because they show otherwise: that people may come to design and build their own worlds through all sorts of mechanisms. Some disabled people get the formal training, just like professional designers - and it's really annoying, because then you read methods and guides, and you're always portrayed as the user and never as the professional designer - but some disabled people just design in their everyday lives, and kind of reading that as a legitimate form of design is a way to recover those stories and challenge the design saviour narrative. But as I have this conversation, I do like to caveat that a common response can be, okay, well, you disabled people, you are designing for yourself, let's leave you alone, you've got it covered. And so I think there's a double-edged sword in - and I've done a little bit of research on this - thinking about recognising disabled contributions with nuance: honouring the ingenuity and creativity that has been erased, but also recognising that that comes from structural ableism. This work is necessary because, as a collective, we continue to decide - through our design, through our architecture, through our policy and other things - that disabled people aren't worth designing for. And so that labour of adapting their worlds to work for them falls kind of on them. So I think, you know, honouring these stories of design by disabled people and bringing that work close is important and necessary to dismantle the kind of fallacy of the design saviour, because that work is happening, disabled people are doing it; but at the same time, recognising that there are really important contributions people can make - one can not be a design saviour and can still do work to be accessible, and help build a more accessible world. So I try to think about that with a little bit of nuance: while dismantling the design saviour, recognising that there's a place for all of us in building a more accessible world, whether we have disabilities or not.

ELEANOR DRAGE:

It’s so interesting how addressing structural ableism has to, on the one hand, encourage people to design in the otherwise, to think, what does it mean to design differently, but also do that really important archival work, the recovery work - and feminism is really invested in that work. For the last project I worked on, there was a kind of fantastic part of it that was a kind of excavating of stories, of narratives. And as part of the research that I did looking into women's science fiction, I was also going back and trying to find science fiction written by women who used different pen names. And it's really difficult because, exactly as you say, the kind of language that's being used to describe an author makes it very difficult to find those stories. I wanted to ask you something slightly different now, about something you talk about: companies co-opting technologies that are originally designed for accessibility purposes and using them for other things. So can you talk a bit more about your concerns in that area?


CYNTHIA BENNETT:

Yeah, thanks for that question. So I would point readers to work by Mara Mills. She has traced this history, kind of making an argument that often disabled people will be positioned as kind of test subjects for revolutionary technology, and then that can lead to some negative consequences down the road. So as an example, she talks about kind of the miniaturisation of technology through the case of designing hearing aids - obviously a really crucial access technology for people who are hard of hearing and who have an interest in hearing. And yet, you know, you look at that research from, like, the early 20th century, and now you have a situation where hearing aids, you know, might be really difficult for people to get - maybe their insurance won't cover them, or they're very expensive, or they're hard to repair, because now these technologies are owned by companies that have pretty strict Terms of Service, such that if you, you know, try to repair a device yourself, it maybe breaks a warranty, whereas if you go through the formal channels to repair the technology, that's either very expensive or even impossible. So that's a case. But I haven't specifically researched or traced how companies have done this exploitation through specific products per se; my work has focused more on how disabled people have been co-opted by companies to tell specific stories. And so some of my work looks at how disabled people are kind of put on the line as the portraits of, you know, companies' successes. In one of my papers, kind of challenging the ways we think about empathy in HCI, we talked about the case of a very popular design consultancy called IDEO helping Los Angeles, California to design a more accessible voting machine, and while there were some really important changes to the design of the voting machine that made it more accessible, what my team and I problematized was the way that disabled people were used to tell a story that uplifted the design consultancy as the design saviour, rather than honour the specific and important contributions of those disabled people in that design process. So I kind of find that, you know, companies may design something or promote a product that is accessible, but based on the media representation and the messaging, it's clear that the company is doing it to get recognition from kind of a wide, non-disabled audience, kind of at the expense of ... it's called inspiration porn. So they're trying to inspire non-disabled people that, you know, this company is part of building a more just and humane world, when, if you kind of dig into the archives, and when you talk to people who are actually involved in these projects, you find out there was, you know, all sorts of exploitation - like, you know, maybe the company hasn't been systematically hiring disabled people, maybe the company kind of erased the names of contributors and is pretty vague about credit, you know, maybe the company is promoting a product that is now more expensive, as I mentioned with the hearing technologies. So I tend to advise designers and researchers not only to understand their impact based on the things that they put out into the world, but to understand that the stories being told about the things they put out in the world have a huge impact on figuring things like disability, or like who a disabled person can be in a design process or in a design studio.
So it's a little bit different, maybe, from the question you're asking, but I think my experience is more in understanding the stories that get told in design and the impact that that has on people.


KERRY MACKERETH:

No, that's really, really fascinating. And thank you for sharing that with us. And I was really interested in what you were saying as well around the forms of structural injustice and oppression that lead to these kinds of narratives. And so I also want to ask for your thoughts on how does ableism in HCI and the tech industry intersect and interact with other kinds of oppression, such as sexism and racism?

CYNTHIA BENNETT:

Yeah, thanks for asking. And there are numerous, numerous examples. So there are some very practical examples, in that one intersection is just by erasure. And so we tend to have a focus, in accessibility, on designing for the most privileged people. There's not a lot of systematic demographics collection done in accessibility research, and just HCI more generally, and so that has been identified as a form of erasure - understanding oppression as only occurring one system at a time, and as somehow distinct from other systems. But I've been working on a project that explores this in a little bit more of a specific way. I am interested in justice-oriented approaches to AI and bias. And the reason I became interested in this topic is because I noticed that conversations cautioning about the potential pitfalls of artificial intelligence and machine learning were not really considering people with disabilities as a potential group that might be systematically oppressed by this technology, even though we are structurally oppressed in society at large. And so I noticed that was missing from conversations on AI ethics. And on the other hand, I was noticing that a lot of accessibility researchers are kind of fast-tracking technology development that takes up AI as a form of access provision. So as an example, computer vision aided by trained machine learning models can do a lot, particularly for blind people like myself. If you train a model to recognise objects in your environment, that could be very useful. If you're trying to identify a product on the shelf - you know, maybe there's a label on the product, and you can't see it - then using your smartphone camera and, you know, a model that can analyse pixels and make a predictive guess about what that product is can be very useful. But where that goes wrong in many ... in one way, is when we start to identify humans. So I've looked at the case of image descriptions. So image descriptions are textual descriptions of images, as images may not be viewable by blind people. And so these text descriptions kind of fill in, like, this is a photo of whatever. And most images on the internet do not have descriptions. It's super important that people, you know, write image descriptions, because you're enabling access for people, you know, with blindness and visual impairments, or even cognitive disabilities, who benefit from those text descriptions. People don't do it, right? So AI - maybe automating the production of image descriptions - could scale the production of image descriptions and bring a whole lot of access to blind people. So where this case specifically intersects with other forms of oppression related to race and gender is when you think about, okay, well, what should an image description say about people? Should that description include information about someone's appearance? And if it should, how have machine learning models been trained to recognise people? And what we know from, you know, the literature is that often antiquated and even offensive classification systems are used to identify race and gender, and if we just deploy those classification systems to be used in composing image descriptions that are then propagated to blind people, we're perpetuating those forms of oppression with the excuse of providing an accessible solution.
So in the project I worked on, with some colleagues I interviewed blind people - so people who benefit from image descriptions, but who experience marginalisation based on their race or gender, or having disabilities in addition to being blind. And we kind of heard overwhelmingly that these participants did not think that automatically generated image descriptions should be describing race and gender, just because of the harms already shown to have been done by machine learning models that do classify race and gender - and disability has not been sufficiently studied in this space, but I have a feeling it would probably be classified in offensive ways as well. And so I think that's a great example of what I'm seeing commonly in accessibility research: applying a technical solution to solve an access barrier - like providing image descriptions for people who are blind or have low vision - without considering, like, what information is that going to provide to someone? And what could be the consequences of that? And so, you know, in this case, we heard from participants that, you know, maybe there are respectful ways to describe someone's appearance, maybe if a human is writing an image description, but right now it didn't seem appropriate to them that appearance descriptions should be automated, based on the kind of systematic harm they experience due to being racialized and misgendered. So that's one example.
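To make the pipeline discussed above concrete, here is a minimal, hypothetical Python sketch - not the guest's code, nor any real product's - of an alt-text composer that leaves race and gender unstated rather than guessing them, in the spirit of what participants asked for. Every function name, label, category and confidence score below is invented for illustration.

# Hypothetical sketch: "detections" stand in for the output of an image
# recognition model, as (label, category, confidence) triples.
APPEARANCE_CATEGORIES = {"race", "gender", "age"}  # never auto-described

def compose_description(detections, min_confidence=0.8):
    """Build alt text, dropping low-confidence guesses and leaving
    appearance attributes unstated rather than misstated."""
    kept = []
    for label, category, confidence in detections:
        if confidence < min_confidence:
            continue  # don't present a weak guess as fact
        if category in APPEARANCE_CATEGORIES:
            continue  # suppress automated race/gender/age labels
        kept.append(label)
    if not kept:
        return "Image: no description available."
    return "Image may contain: " + ", ".join(kept) + "."

# Example: invented model output for a photo of a person with a white cane.
detections = [
    ("person", "object", 0.97),
    ("white cane", "object", 0.91),
    ("woman", "gender", 0.88),  # suppressed, per participants' concerns
    ("outdoors", "scene", 0.84),
]
print(compose_description(detections))
# Prints: Image may contain: person, white cane, outdoors.

The design choice the sketch illustrates is the one raised in the interview: omission is treated as safer than automated misclassification, while object and scene information still flows to the screen reader.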


ELEANOR DRAGE:

Yeah, it's fascinating. And we’re obviously keen that people take an intersectional approach, because that's the only way that we can come to these very complex problems with a good understanding of everything that could go wrong while trying to come up with a good solution. And the politics of annotating pictures is obviously so interesting, and very much speaks to the heart of what we do. I’m really interested by these ideas of collaborative access and interdependence, and I wanted to ask you to explain what they mean in assistive design, and also how they relate to the strands of AI ethics that are dedicated to, for example, preserving human autonomy - whatever that means, you know; what does autonomy mean when we are looking for independence?


CYNTHIA BENNETT:

Yeah, thanks for this question. In other work I've done with other researchers, I’ve translated important practices from disability activism into research, and so in a couple of papers I argue that accessibility is often portrayed as a fixed achievement - something is either accessible or not. But we know, you know, from academia, from literature on, like, social computing, or even just sociology, that things are usually not fixed; there are usually a lot of factors. And specifically from disability justice activism, we learn about how accessibility is often collaboratively created - it's often interdependent. People kind of move in and out of having different needs, and people move in and out of providing for those needs. It's not this one-way relationship where a user with disabilities receives the technology, the technology provides them some sort of access, and that's it. And so, taking up kind of this idea of interdependence - I appreciate your question, because what can happen, and something I've noticed and am also a bit guilty of, is that sometimes we may learn new concepts and just apply them to whatever we were doing in the past, right? So let's take the case of, you know, an automated or AI kind of assistant that maybe a blind person is using - again, as I mentioned, lots of potential benefits there. Maybe you're navigating and you're looking for a business or a street sign, and, you know, you can work with your smartphone interdependently: you're moving your phone, the phone is providing information, and you can learn a little bit about the environment around you. And that can be, you know, described as a type of interdependency. But what can happen, what I've seen, is it's like, oh, okay, well, you know, we honour that accessibility is collaborative and interdependent, and so then maybe we may not question it. I think part of what we learn from disability justice activism is, you know, that interdependencies are not just this utopia and are not to go unquestioned, although they may kind of describe a relational way of thinking about accessibility that is a relief from, you know, the one-way relationships a lot of disabled people are boxed into, like we're saved by our accessible technologies. So while it does some work to honour the role that, you know, we play and other people play in those interactions with our technologies to co-create accessibility, interdependence is not interdependence if there are kind of misuses of power, at least as disability justice argues. So when I think about, you know, preserving autonomy, from the little bit that I've read about it, it's understanding that people need to be able to make choices. And so when I think about, you know, these proposed interdependencies that people could have with AI systems which may provide them assistance, I think the work on preserving human autonomy challenges us to ask: is it really interdependent and collaborative if a user is not really making a choice, or if they have to make choices to, you know, have their data collected and, you know, exploited, or potentially be misidentified, or anything like that? Those maybe are not real interdependencies, but are actually putting the user at quite a disadvantage.
So I appreciate the question, in that autonomy-preserving technology is maybe not about doing things by yourself, but about making sure that people can make an informed choice, so that they go into these kinds of interdependencies with people or with technologies understanding what the cost of that interaction is going to be. So I don't know if that, like, made any sense - this was really my first time working through putting those concepts together. But I get frustrated when people maybe talk about this, you know, idea of, like, oh, interdependencies are great, because now I just have another way to describe how my user would interact with, you know, the technology. And I'm like, well, I don't know - if you don't fundamentally change the way that that person has choices to interact with that technology, it might not be as reciprocal as your words say that it is.

ELEANOR DRAGE:

That was a beautiful explanation. And, Cynthia, thank you so much for everything - these have been fantastic responses to some questions that we struggle to answer. And I think everyone will have a really nice idea of how we need to be orienting our thinking going forwards, as, as you say, it demands these fundamental shifts that are extremely challenging. So thank you so much for joining us today. And we hope that we'll have you back very soon.


