Responsible AI: Insights from an Ethics Expert
The What's AI Podcast Episode 20 with Auxane Boch: Ethics Expert
Hello everyone, Louis here with another exciting episode of The What's AI Podcast. I had the privilege of engaging in an amazing conversation with Auxane Boch, an ethics expert and PhD candidate, and the insights we uncovered are something I'm eager to share with you all.
This is a journey to demystify AI ethics, responsibility, and governance. Auxane provided a clear and insightful distinction between Responsible AI and AI governance. She painted a vivid picture of Responsible AI as a goal, a concept that aligns AI with cultural, regulatory, and social values. AI governance, on the other hand, is the roadmap, the practical steps, and strategies employed to reach that ideal state of Responsible AI.
One of the standout moments of our conversation was exploring the cultural dynamics of Responsible AI. Auxane highlighted that it’s not a universal concept but is intricately tied to the societal values and norms of specific communities. This revelation underscores the complexity and multifaceted nature of instilling ethics in AI.
We also touched on the delicate balance between innovation and regulation, a topic that’s central to the global AI discourse. Auxane’s insights into the contrasting approaches between the U.S. and Europe offer a nuanced perspective on this ongoing debate.
In a world where AI is rapidly evolving, and regulations are trying to keep pace, Auxane’s expertise offers a guiding light. She delves into self-regulation, the role of codes of conduct, and the impending EU regulations, providing a well-rounded view of the AI ethics landscape.
I’m excited for you to discover these insights and more. Join us for the full conversation on YouTube or tune in on your favorite streaming platform to The What’s AI Podcast. It’s a blend of depth, insight, and clarity that you won’t want to miss!
Full Transcript:
Louis: [00:00:00] This is an interview with Auxane Boch, Research Associate and Doctorate Candidate at the Institute for Ethics in AI. Auxane is also an educator and consultant in Responsible AI and Ethics for Video Games. She's also a member and the Munich City Lead for Women in AI. Her main research topics are ethics and responsible AI in video games, as well as social robotics. In this interview, you will learn a lot about ethics in AI, governance in AI, and Responsible AI. I hope this discussion demystifies these subfields of AI a bit and shares the importance of building responsible technologies. Auxane also shares a lot of amazing insights for building more responsible AI products. If you enjoyed this episode, please take a second to leave a like and subscribe if you are on YouTube, and leave a five-star review if you are on Spotify or Apple Podcasts. I hope you enjoy the episode.
Louis: I just wanted to [00:01:00] start with: what is Responsible AI versus AI governance? What are both of them, and what's the difference between the two?
Auxane: Okay, that's a very good question, actually, because that's something we're being asked a lot, even by people whose job it is. So I have to say, we ask ourselves this question a lot as well.
Auxane: So Responsible AI is more of a concept, if I can say. Responsible AI is very much the goal you want to reach. It's like reaching freedom. You want to get to an AI that is culturally acceptable, that fits within the regulation, and that also fits within the social values of the community that's using it.
Auxane: It should be good for the environment, or at least not too bad for it, and it should help reach societal goals, environmental goals, SDGs in general, so the UN SDGs. Responsible AI [00:02:00] is very much a concept. AI governance is the application of the concept. How do we get there?
Auxane: What are the steps we need to take? What do you want to see technically within your AI system? What do you want to see within your company's processes as well? So one is the idea, and the other is the way to reach that idea.
Louis: Oh, that makes more sense. So we ultimately aim for Responsible AI, but we don't work towards being responsible in the sense of defining steps to be responsible.
Louis: We just define steps to, for example, gather data ethically and respect human rights and things like that. But it's not like we end up being responsible in the end. Or shouldn't we aim for that, defining the steps to being responsible in some way?
Auxane: So that very much depends on your societal values. Responsibility [00:03:00] first: my personal opinion is that an AI cannot be responsible, because an AI is not a person, nor a legal person. So you're right when you say we have to be responsible. It's responsible practices and responsible behaviors that we have, and putting steps into place to reach SDGs, or at least KPIs, right?
Auxane: So you can have smaller goals that belong to making responsible decisions for AI. For example, a fundamental rights assessment; fundamental rights is EU jargon for human rights, same realm. Fundamental rights assessments are being discussed, for example, for the EU AI Act as part of the high-risk AI assessments. Doing those specific things moves you towards Responsible AI. I cannot tell you that there is a [00:04:00] perfect recipe, or at least we haven't found it. And I don't personally believe there is one worldwide perfect recipe, because reaching Responsible AI will very much depend on cultures.
Auxane: But we are also very able to identify what is not Responsible AI. We do that a lot in psychology research, where we figure out, okay, something doesn't work properly in someone's behavior or cognitive processes, and that's how we learn how that part of the brain works: by knowing how it doesn't work.
Auxane: So I think there is that aspect in Responsible AI as well. When you see facial recognition used for policing purposes, well, in Europe, that's not what we would consider responsible, but on the other hand, there are countries where, culturally, this is very much accepted. So again, we come back to [00:05:00] Responsible AI being very cultural.
Louis: And so, just like AI, it's very iterative: we deploy new technology and then we try to fix what doesn't work, basically.
Auxane: Exactly. Exactly. And I guess you also have two very different worldviews in the West about that. So when I say the West, I mean North America and Europe.
Auxane: You have what I would personally call the Wild West, which is the U.S. and Canada, but rather the U.S., actually; Canada is very much about regulating as well, or at least having defined guidelines. The approach there is: we put it out and we see, and then we adapt, but first we try it out, right? And we see now that they're adapting, trying to get some regulations in.
Auxane: In Europe, we are, I would say, more conservative about those technologies, and I personally align with being a [00:06:00] bit more conservative, but I understand this is very cultural as well. The idea is: let's regulate now, then we'll adapt the regulation if we need to, but at least let's put a frame in place, right? Let's try not to go too far. But regulations take forever anyway. We have 27 countries to align on them, so that already takes a long time.
Louis: But do you think this can slow down the progress of the field?
Auxane: That's the big discussion right now, isn't it? Innovation versus regulation. I know there are multiple opinions about that.
Auxane: There is a whole discourse saying that innovation is more important than regulation, and that we can also regulate internally, in a way, so companies can be asked to self-regulate and things like that. On the other hand, you have the more conservative argument that you need governments and governmental organizations to take care of this, because you cannot really trust individuals and companies, [00:07:00] since companies are still made of humans.
Auxane: I don't know exactly where I stand. I know I believe very strongly in standards and certifications. That's something I can definitely align with: soft law, the work from ISO, the work from IEEE, those kinds of certification and standardization efforts. I don't believe this is perfect either, but that's where I would tend to focus the most.
Auxane: And then general regulation: I think there is a need for at least a frame from the government, at least a frame, and then you specify it with certifications and such.
Louis: How can a company self-regulate? Can it be biased? Would they, for example, hire someone like you to double-check what they do? But is this risky, in the sense that the company is managing it, so they can basically decide whether it's responsible or not? How does it [00:08:00] work?
Auxane: I'm not a business specialist, so I'm going to tell you what I know of this; there is probably much more to the story. Companies can declare codes of conduct and make them public. They can also, for example, do an ESG assessment, which stands for environmental, social, and governance, and I'm so bad with acronyms.
Auxane: Anyway, it's similar to the SDGs: assessments to figure out how close you are to those markers, including along your supply chain. So companies could do those assessments and display them to the public and say, look, this is what our assessment says. And this was not legally mandated, at least not as far as I know.
Auxane: And companies were still doing it because they wanted to show that they had good intentions, right? They wanted to show that what they were doing was good, and not just for [00:09:00] reputational purposes, even though there is always that aspect in the background. But it's not just that. My personal opinion is that humans don't wake up planning to do something bad.
Auxane: There is always a kind of good intention behind what people do, and it's the same for AI. You can self-regulate by having those codes of conduct, by taking the step to go get some extra certification, by applying standards that you're not required to apply but that you choose to apply so that your technology is better.
Auxane: And also, why not develop your own standards, then make them public, make them accessible, make them state of the art. We've discussed this with a lot of companies over the years. I was on a project to develop an accountability framework for AI systems, and that was very much at the center of those topics of self-governance and AI governance in general for companies.
Auxane: We had a lot of workshops, [00:10:00] discussing with people from big and smaller companies. And I can tell you, they didn't have to be there; they didn't have to be so interested in it. But they were: they very much wanted to know more, to see how to do it, and to tell us where the problems are so we could fix them.
Auxane: So I think the intention is very good, and that's what self-regulation is: regulation internal to your company that, ideally, you make public, because then the public actually gets to know all the good you're doing.
Louis: So I assume that making it public is quite important, just to ensure that you are taking the right steps. It's like in bodybuilding, or any sports competition, when you take a blood test on your own to prove that you are not on steroids. I guess it's relatively the same: taking tests, or in this case certifications, to show that [00:11:00] you are doing the right things.
Auxane: Exactly. It's very much to show that you are accountable, in a way, that you're making responsible decisions. And yes, you were asking who develops those codes of conduct. Usually it's the companies themselves, and they can ask third parties to work on that with them. I have rarely seen ethicist as a position in a company. I know some companies have those, but there are always compliance people.
Auxane: There are, you know, those legal-slash-ethics teams, compliance teams. Those are the ones that would usually work on this, and of course they can ask for third parties. I would advertise that we can do that, of course. But to be fair, it's the company's decision to do it.
Auxane: And if anyone wants to sell a product in the EU in the next few years, they will [00:12:00] have a lot more problems to think about than their code of conduct, with the new EU regulation. So...
Louis: And I know that you offer such a service, helping to build AI more responsibly. So my first question: the company needs to be interested in doing that, and they will reach out to you. Is there any case where a company is forced to work with you, or forced to double-check whether they work responsibly?
Auxane: No, in no case is anyone forced to work with me or someone like me, or to bring us into the team. That's very far, I think, from what will happen even in the future. What you're forced to have is a lawyer, not an ethicist.
Auxane: So if companies come to me or to my colleagues, [00:13:00] it's always with the best intentions, and usually out of interest as well. They hear about it a lot, they see that it's something they should be doing, or want to do, but they're not sure how. Well, it's our job, so we can actually explain it properly and define the step-by-step. And I guess you're also going to ask me: how do you do that?
Auxane: It's case by case. We have general guidelines. We're very much aware of human rights, fundamental rights, ethical frameworks within Europe, ethical frameworks within different countries. So we have different specialties. Also, we're researchers, right? So if there is something we're not sure we know, we can figure it out and go look for the information.
Auxane: We're aware of the legal framework, but we're not jurists, so a compliance team is always good support. And we know those normative frameworks, as we call them, the non-legal ones, [00:14:00] the value-based ones. Then we define: okay, where do you want to position your product? What do you want it to do?
Auxane: Why do you want it to do that? What's the aim? What's the goal? Once we have this frame together with the company, we can say: okay, now we know why you're doing it, what you're doing, for whom you're doing it, and who it's going to impact. Then we can decide whether we want to do a market study: do we want to go to those populations and figure out what they want specifically, how they would welcome it, and which of their specific values we need to abide by? Or do we just want to stick with the more general frameworks, which are sometimes more than enough?
Auxane: Furthermore, I'm also a psychologist, so I have more than one methodology, and my colleagues do as well; together we can apply all the methodologies necessary. I also have a more individual-level understanding, so my [00:15:00] specialty in our team would very much be the human: figuring out, okay, what does your human want from the product in the end?
Auxane: And then we figure out the path: okay, let's make a list of all the things you need to abide by, let's make a list of the values, and let's try to figure out the trade-offs as well, because we understand that the ideal can never be fully reached, technically speaking. So what can we reach?
Auxane: What can we not reach? Why and how? And where do we put the limit? So yeah, it's a lot of work, but it's always very fun. It's like a puzzle to crack each time, to make it the best possible for everyone.
Louis: Is there any reticence when you do that? Do companies often not want to apply what you highlight? Or is there anything that comes up that companies don't agree with?
Auxane: Usually, because they come to us, it's more of a discussion. [00:16:00] It's about defining it together, rather than me saying, you have to do that. Actually, no, you don't have to; legally, nothing binds you to do it.
Auxane: So it's much more of a discussion, and usually it's quite smooth. Actually, I would say all the discussions I've had were positive, trying to find the best solution together. Yeah.
Louis: And do you usually talk with the top managers, the founders, or also with the developers and researchers who create the algorithms?
Auxane: Yeah, usually I don't talk to one person; I talk to three, four, five people. For us, I think it's necessary, because there is never one person who has the answers to all of my questions. And that's something I only figured out once I started working with companies closely, [00:17:00] because I'm very much an academic.
Auxane: Before I started working with companies in my research, I was a bit removed from that. I thought I would have it easy, that people could give me a book with everything about their tool and I could figure it out. That's not at all how it works, very much not. So yeah, we talk to a lot of different people.
Auxane: And you always have to adapt your language based on whom you're speaking with. I guess that's the thing I've learned the most: to adapt the language.
Louis: I assume that for Responsible AI, especially with startups, it's largely a case of "we don't know what we don't know."
Louis: They don't know that they are not being responsible, so they won't really reach out to become more responsible. So how can companies be more conscious of that? How do they know they need to improve? [00:18:00] Should they be more educated about it? What needs to be done to help startups aim to be more responsible?
Auxane: If we talk about startups specifically, we also need to talk about the money startups don't have. And that's something we're facing a lot right now with the EU AI Act coming up: this big question of startups and small and medium companies, and how they are going to have the money to adapt to this.
Auxane: The EU is trying to have a contingency plan for them. But in any case, there is the problem of funds: it costs a lot of money to be responsible, and lately it apparently costs a lot of money to be compliant as well. So, going back to the question of education, I think in Europe we're in a very specific ecosystem, because first, the question of ethics, societal acceptance, et cetera, is very much present.
Auxane: And [00:19:00] on top of it, now with the EU AI Act, I'm always surprised when I meet someone who works in AI and hasn't heard about it, because it's very much everywhere, and now you see everyone reacting to it. You have China with their new regulations doing their sandboxing, you have a few U.S. states coming up with regulations, and I think even the White House is coming up with regulations now, which is quite rare, I have to say; in technology, I've rarely seen them do that. In the East and in the West, you have more and more discussions around this, and it stems from the EU saying, let's regulate like we did for GDPR.
Auxane: And I don't know a startup that's not aware of GDPR, the data protection law, just in case. So I feel like being uneducated on at least the idea of Responsible AI will be behind us [00:20:00] in a year or two, because everyone hears about it here. So in Europe, we know. A lot of companies don't know what it is exactly, that's fair, but they know they should think about it.
Auxane: Usually what happens is that they don't realize that their field is very much high-risk according to the regulation, and also in general, or that their application specifically is high-risk. They don't realize that, yes, it takes a bit more than simply checking that it's not going to kill anyone, you know.
Auxane: Yes, you have to look into the details of everything across the entire life cycle. Now, if you leave Europe, it's a whole other discussion, where, yes, I would say there is a lot of unawareness, and in general, I think education is more than necessary. I'm a big advocate of requiring, in the curricula of engineers, computer scientists, data scientists, anyone [00:21:00] working on the technical side,
Auxane: some ethics classes, even just to, you know, give them the entry tools, the 101 of ethics in technology. Then at least they're aware, and that will stay with them. And when they get to the point of having their own startup, they at least have a basis to build on, because I'm not expecting everyone to want to, first of all, or to be able to pay for services dedicated to something that's not mandatory.
Auxane: But I do hope that everyone takes those steps of learning more if they cannot pay someone to do it for them. Also, I don't believe that no one else can do my job; I believe everyone could do my job. I just don't believe people have the time to do my job. So if a startup wants to do it themselves, for their own product, they can. It's just a lot of work, but they can.
Louis: And what would such a curriculum [00:22:00] for AI ethics look like? What can be taught to the developers and creators of those AI models if we want them to be built responsibly and act responsibly? Assuming that developers are ethical human beings who act responsibly in society, shouldn't the things they build be responsible by nature too?
Auxane: Again, you're talking about societal values here, so first, this is very much cultural. Of course, I don't think people will say, well, let's hurt others. But what hurting others means is very much culturally defined, unless you're talking about physical hurt. Also, Responsible AI is not just that. I like to model it in three layers.
Auxane: The first layer is the individual who uses your tool. The second layer is the direct community of this person. When I say community, that can be the family they live with, their friends, [00:23:00] or an entire community like a neighborhood or a city. And then you have society, which is a country, or the EU, which is a region of the world, or the world in general, depending on your use case.
Auxane: So already, it's not just about not hurting that one user; it's about being sure that your tool is adequate for the user, the community of the user, and then the general level. That's also why environmental considerations are very important: they belong to the third layer, the societal aspect. Or, if the tool takes a lot of energy, you think about your user who will have a big bill to pay, and then the community that will maybe face a shortage, depending on the circumstances.
Auxane: You know, every question, every point comes with a lot of different considerations. So, of course, there are some values that we have in the West that we imagine people hold by essence, [00:24:00] and that, therefore, they will rationally think: oh, I'm going to develop a tool and I want it to not do that.
Auxane: My personal experience tells me a lot of people don't think about that. It's not even that they think against it; it just doesn't come to their mind that this tool they're developing, and giving a lot of their heart and a lot of their time to, could do something wrong, beyond something technically wrong.
Auxane: They don't think about the social impact it has, or the human impact. And you know what, fair enough, it's not their job. They were not taught to think that this should also be part of their evaluation, so they don't think about it. That's where I think a curriculum approach is a good idea.
Auxane: And not just within educational systems; in general, I think it would be great to have free, open curricula on [00:25:00] Responsible AI 101. I'm sure there are some out there. Actually, I know there are, because we have one at the institute I work for at TUM, with our global AI ethics network.
Auxane: There is an online course that is free, but it's not business-oriented; it's very much academic, very much about the discussions. I would love to see free, accessible classes online in multiple languages, not just English. I love English, it's an easy language. But I was in Brazil to visit some of our colleagues and attend a conference with them there.
Auxane: And we had this whole discussion about needing the material in Portuguese for people in Brazil to get behind it, because that's a problem. I mean, you're a French native, I'm a French native, we both know that French natives are terrible at English. So there's also the fact that in a lot of countries, people don't always develop their English-as-a-second-language skills that far, and that's very fair.
Auxane: So [00:26:00] we should definitely make everything accessible, whatever accessible means. That's also a big issue I see.
Louis: And I would also assume that such a curriculum would be different for the owner of the company versus the managers versus the developers, since everyone is involved differently with the algorithm or the product they're building.
Auxane: Yeah, I think that's a very good point you're raising. We cannot speak the same way to everyone. In the end, we do need to give them the same information, or at least partly the same information, the same basis, and then everyone can go into the specifics of their own work. But I cannot give the same class to an engineering classroom and to a management classroom.
Auxane: And I do teach both; I can tell you they are very different, and that's fine. That's the beauty of AI as well: its [00:27:00] multidisciplinarity. I also think we need much more multidisciplinary product-building teams. It's not because you can't code that you shouldn't be involved. You know, stakeholder engagement is very important.
Auxane: That's also something we can do, but that would be part of explaining to people what the responsible process would be. Anyway, so yeah: building classes, building workshops. I think the practicality of Responsible AI usually flies by people. They don't realize that it's actually a very practical topic; it's spoken about at a very high level, but the truth is that it comes down to the tasks you're going to put in place to reach this responsibility.
Louis: Here in Canada, we have, I don't even remember the full name, the CSST, but it's like health and safety for workers. Basically, the [00:28:00] larger companies have a team that ensures everything is safe in the factory, things like that.
Louis: So of course, people think about being safe, but they are taught by that team; they follow small courses inside the company, and it's mainly the safety team that tries to make the whole company safer. Do you think this could be done with ethics, or should it really come from everyone in the process?
Auxane: Both. I do believe that big companies, and companies in general, will end up having an ethics officer in house. And not just compliance, that's what I want to say as well: it won't just be an AI compliance officer, it will be AI compliance and more, obviously. This person, this AI ethics team, will be [00:29:00] asking all of those questions and will have to be involved in the life cycle of the AI.
Auxane: Now, I do believe it's also about empowering the team, each person in the team, and teaching everyone about it. I think a lot of people don't realize the power they have. You know, Spider-Man being a great movie: "With great power comes great responsibility." And that's very true.
Auxane: An engineer who builds a model and doesn't consider the risks of the algorithm, doesn't even analyze for them, and just puts it out thinking, well, my model is cool, without warning people: that's someone who didn't take their responsibility. In my opinion, the problem is that they didn't know they had to, or that they should. I can't really be mad at them, right?
Auxane: You cannot; that's fair. So [00:30:00] I think it's also about good empowerment of those more technical positions: telling them that you are able, and you are allowed, to raise ethical questions about what you're producing. It's not just for the high-level thinking people; it's not just about the academic aspect, or the governance and policy aspects.
Auxane: It's also: do you feel like you're not so okay with what's happening here? Do you have a gut feeling, as an engineer building something, that something is wrong? Well, you know what, let's give those people tools to evaluate those gut feelings, and people to go and discuss them with. I think that's very important.
Louis: I want to circle back to something related that you mentioned earlier in the conversation: that some startups, and some products as well, are more high-risk. So what does being high-risk mean? What is risky? Is it [00:31:00] a model that deals with sensitive topics? And do the founders know, or should they know, that they are high-risk? Do they know by default that they are considered high-risk?
Auxane: It depends. If you're asking me about the regulatory definition of high-risk in the EU AI Act, I would tell you there is a list; one of the annexes is dedicated to this, so you can evaluate yourself whether your tool is high-risk.
Auxane: Or, if you're talking about anywhere else in the world, I think high-risk is very much a cultural concept when it's not linked to life and death, right? If we're talking about life and death, yeah, we know it's high-risk. Autonomous cars can kill people; that's high-risk. Anything in healthcare that can lead to people [00:32:00] dying, that's high-risk.
Auxane: Now, there are also some points that are much more cultural and societally moral. In Europe, we banned some types of AI: in the regulation, we have a short list, but still a list, of things, or uses, that we never want to see on our land. For example, social credit scoring AIs.
Auxane: That has very much been number one among the banned AIs. There are other countries in this world where this is very far from being societally unacceptable. And that's cultural differences right there for you. So high-risk is also very much cultural. Now, one of the conversations right now is: is [00:33:00] an AI in education high-risk by essence?
Auxane: Well, that's a big question. My personal opinion is that a lot of those should be taken case by case. But I can understand wanting to say that anything that touches a child is high-risk, because children are a vulnerable population. So, you know, it's those balances, those trade-offs. That's always the problem with ethics, right? It's gray. It's always gray.
Louis: Yeah, it's just like psychology. Well, you are in that field, but it's different for every company, so it's super complicated. You cannot just build one path or one way to be responsible. As you said, you need to adapt to the company, and the company needs to reflect on its own products and on what it does.
Louis: And then think about what could go wrong, and when something does go wrong, [00:34:00] fix it. It's a very long, complicated, and iterative process that you cannot just automate for everyone.
Auxane: That's the thing: you cannot automate it. You're very right; that's a very good term. We do have a list, though, a 101 list, and it will be published in an open white paper in the coming days.
Auxane: So maybe by the time this podcast comes out, it will actually be out there. It comes from the accountability framework project we've worked on for a couple of years. Along the life cycle, we lay out the steps that you should at least consider, and it's holistic: it's sector-blind, AI-blind,
Auxane: model-blind, everything-blind. It's the starter pack, kind of, for Responsible AI: for being accountable as a company, and for being sure your model makes sense [00:35:00] ethically speaking, or responsibly speaking, and that your processes make sense responsibly speaking. It's quite imprecise on some things.
Auxane: For example, it will say: check the laws of your sector, check the laws of your region on this specific thing. You might think it's stupid to write that down, but actually it's not, because that's part of your process. That's where you should actually start: do your compliance work first. Don't start your model until you have your KPIs.
Auxane: Are you sure where your data is sourced from? Have you de-biased everything? Are you sure the biases that remain are okay? You know, there are a lot of different steps. So anyway, there is a starter pack, but the details, the application of this starter pack, will very much require research-specific and work-specific effort. So we do have a starter pack that will be made available to everyone, and then it's [00:36:00] case by case.
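[Editor's note: Auxane's checklist, compliance and KPIs first, then data sourcing and bias checks before any model work, is very concrete, so here is a minimal sketch of what one such step could look like in practice. This is our illustration, not part of the framework she describes: the toy data, the column names, and the "four-fifths" disparity heuristic are all assumptions made for the example.]

```python
# A minimal pre-training bias check of the kind Auxane describes
# ("are the biases that remain okay?"), sketched with pandas.
import pandas as pd

def audit_group_balance(df: pd.DataFrame, group_col: str, outcome_col: str,
                        min_ratio: float = 0.8) -> pd.DataFrame:
    """Report each group's share of the data and its positive-outcome rate,
    flagging groups whose rate falls below `min_ratio` times the best
    group's rate (the common "four-fifths rule" heuristic)."""
    stats = df.groupby(group_col).agg(
        share=(outcome_col, "size"),          # rows per group
        positive_rate=(outcome_col, "mean"),  # mean of a 0/1 outcome
    )
    stats["share"] /= len(df)                 # counts -> proportions
    best_rate = stats["positive_rate"].max()
    stats["flagged"] = stats["positive_rate"] < min_ratio * best_rate
    return stats

# Hypothetical usage on a toy hiring dataset:
toy = pd.DataFrame({
    "gender": ["f", "f", "f", "f", "m", "m", "m", "m"],
    "hired":  [0,   1,   0,   0,   1,   1,   1,   0],
})
print(audit_group_balance(toy, group_col="gender", outcome_col="hired"))
```

[A real audit would go much further, intersectional groups, proxy variables, data provenance, but even a check this small makes the "are the remaining biases okay?" question concrete and answerable before training starts.]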
Louis: From working with companies, is there something that you've seen often? Not a problem exactly, but something companies do that is not responsible without them knowing about it? Is there something you see happening often?
Auxane: Usually, companies are very compliant with the law, and I think the biggest issue I've faced is the lack of understanding of everything else. It's not just ethics and Responsible AI as a topic; it goes down to: did you even ask your population if they wanted that?
Auxane: Or are you just putting something out there that no one needs? And they're like, well, no, but my idea is very good. Cool, but I'm sure there were a ton of very good ideas out there that never worked, because it wasn't the right time, it wasn't the right target. [00:37:00] And that's the psychologist in me kicking in, rather than the ethicist, even though there is an ethical aspect to not producing something that doesn't need to be produced. But it's much more the psychologist saying: did you ask people?
Auxane: Do you know what people want? I think that's the bias companies very often have: they think they know better. The companies that succeed, from my experience, are the companies that adapt to what people want. It's not always about creating the need; sometimes it's also about answering the needs.
Louis: Would you say this may be because the new technologies are so accessible? For example, when you are working in hardware, or something very applied that requires high costs when starting, you definitely start by checking product-market fit, and you check all those boxes to be sure that there is a problem and that your fix [00:38:00] works for this problem.
Louis: But is it because it's so easy now to just use OpenAI's API and build some kind of cool application that you don't even check whether it helps with an existing problem?
Auxane: Yeah, I guess there is that. Everything is definitely facilitated now when you don't have a hardware product; I won't go into hardware, because hardware directly costs money. But if you can do it on your own or with your buddy and just make it happen, it doesn't answer a question, it's just there. And if you're not expecting to make too much money out of it, honestly, do it. If it's fun for you and it's something you want to do, that's great.
Auxane: But if you want to make a real product, I guess it's business 101 to get your population's opinion first. We, or rather friends of mine, did research on [00:39:00] the acceptance of autonomous driving in a specific region of Germany. Because, you know, in Germany we already have the law for autonomous driving.
Auxane: We're the one country that has a law ready, but the technology is still not there. Well, we have the technology; it's just not implemented at full autonomy. They were doing this market research to ask: why are people not ready for this? And it was so interesting.
Auxane: And I was very surprised to see that there was almost no other data available out there that was that precise, when the research didn't take so long, you know. I'm thinking people should do that much more, but that's just my personal experience. A second big issue I see, and that's the psychologist taking over again,
Auxane: is all those mental health support bots, when we don't have standards or regulations or anything for this. Usually, [00:40:00] you might have one psychologist in there, but where are your psychiatrists? Where is your post-implementation testing? Did you ensure the safety of patients? Are you even allowed to have patients?
Auxane: You know, all of those things. So I think there is also this big blurriness, because we're lacking a frame for it, where people want to do good, they think they're going to do good, and actually... I'm not sure they will, I have to say. I'm very unsure how good those are in the long term if unsupervised.
Louis: Assuming most listeners are students, researchers, or developers, what is one easy thing they should be aware of, or implement in their work process, to improve AI governance or just be more responsible?
Auxane: I think the starting point of Responsible AI is to sit down with [00:41:00] yourself, and maybe with your team, and ask: why are we doing this?
Auxane: What's the point? What are we trying to reach? Why? If you can answer this why very precisely, with all the good intentions, seeing that you want to do good in this world, because again, I don't believe people want to do bad, I believe they're always trying to fix a problem, then you're already at a very good starting point.
Auxane: And then from there, Google things. Google your regulation, right? Look online for all the rational things that come to mind that you should look for; ask people. But first and foremost, ask yourself why, what for, and how.
Louis: Yeah, that surely makes sense. I just wonder, though; obviously, [00:42:00] it takes time, and you need to take the time to do it.
Louis: And most people, especially in this field, are in super heavy competition, trying to release as fast as possible: you can break things and just fix them afterwards. It might be a bit negative, but I assume we may need to force people to ensure they are being responsible, just because it's so competitive and they're working 70 hours per week to release their product as fast as possible. They won't be taking the time to give their work a second thought, I think.
Auxane: That's a big problem. And I mean, if you go through this Responsible AI process, a lot of it is sitting down and asking the right questions, and that's hard. I'm sure it's very hard to code [00:43:00] and to do all this debugging.
Auxane: I've heard from a lot of engineer and coder friends all about the hassle that coding can be. But the hard part on the other side of this is very much to sit down, and to sit down with everyone, not just the leading team. Get the opinion of your intern, because your intern might have a gut feeling.
Auxane: You don't know. Gut feelings come from experience and knowledge. So maybe it sounds irrational as the intern says it, but maybe there's something there that needs to be dug into. So always take everyone's opinion, because ethics is not about diplomas. It's very much about experience, values, and morals. You know, it's about trying to be as moral as possible, and everyone has a morality somewhere.
Louis: And yeah, I would assume it's just like training your algorithm: you want [00:44:00] the most diverse dataset possible to represent...
Auxane: ...it depends...
Louis: ...the population, and to be as general as possible. So it's pretty much the same: you want opinions from all kinds of people, just to ensure you respect everyone.
Auxane: Exactly. That's very true. And it's also just like training your algorithm in that, when you're working on something very specific, having very specific datasets also makes sense. So if you're stuck on one specific thing, go to experts; there are experts for everything out there.
Auxane: Go to experts. I do expert workshops a lot now; a lot of my work is actually talking to people and getting their opinions, then looking into the actual science, collecting data in other ways, and trying to mix all of that together. So you get: the experts' opinion is this, the general public, according to survey research and other types of research, says this, [00:45:00] the literature says this.
Auxane: Where do we meet? What's the middle ground? How do we make it work? And I think that's the hard part, but it's also the very fun part. I mean, as a product owner, if you crack that, you've cracked the code, you know; it must be so nice, such a big relief, to say, okay, we figured it out. So take everyone's opinions: take valued opinions, but also take opinions that you wouldn't usually value.
Auxane: That's called stakeholder engagement, and it's a very good process for Responsible AI. Also engage the people who will use your technology, and not just in the UX testing part; involve them in the ideation part, because they will bring things to your table that you're not expecting.
Auxane: And that will make everything much better. Also, to finish on that, I guess Responsible AI [00:46:00] is very much about humility, and that's the hard part: you're entering that room without an ego, and you have to take everyone's opinion at the same value as yours. That's very hard for anyone to do. So yeah, it's a lot about humility. It's a human experience, I would say.
Louis: Basically, is this field complicated because the technology is so new and so different, or has it always been complicated for every industry we've ever had?
Auxane: I think it's neither one nor the other. I think it's the first time that we have something that's so global.
Auxane: It's not creating problems, but it's amplifying them. As much as it's amplifying the good, it's amplifying the bad, and not just in one place. And scaling them as well, scaling them very much. I think that's [00:47:00] what makes it such a big point now that you need people with expertise in this.
Auxane: But if you look at healthcare, healthcare has always had ethical questioning; the core of doctors' training is ethics. In psychology, I realized once I changed fields and went into multidisciplinary research that we are built and trained to be ethical, to have ethical thinking, because we're working with humans.
Auxane: Our field is humans, and I guess it's the same in all the human sciences. So in the end, healthcare is a sector that in itself has a lot of ethical framing, not just regulations but also duties. I think we actually have a lot to learn from healthcare on this. Very, very much, [00:48:00] because healthcare problems scale as well.
Auxane: Those are problems that can scale very quickly, as we've seen with COVID. So yeah, I think there were fields like this before, but I would say with AI, the issues kind of never stop, and they become bigger and bigger. That's why ethics is so important in it.
Louis: When you try to help other companies, how do you find solutions, or ways to make their work more responsible?
Louis: Do you draw on other sectors, like healthcare, as you mentioned? Or is this something so new and different that you just have to study the company, think about it, and do research on that very specific use case? How do you usually find solutions, or ways to be more responsible than they currently are?
Auxane: Both. So [00:49:00] first, you learn from the sector they're in. If a company comes to me from healthcare, well, that's an easy one; that's one of my innate specialties. I'm also very much aware of the ethics of the medical sciences: medical ethics is a part of ethics in itself, and I know it well.
Auxane: Then, AI ethics in general also has its own frameworks, so you start with that. The first thing you start with is: okay, what's my playing field, where am I situated? What does the company want, but also, what's the tool? And then it's a lot of research:
Auxane: what do we know about the impact of this type of tool, or of tools that do that kind of thing? Once we know the recognized consequences, or the academically validated consequences, if we can say that, positive and negative, and the [00:50:00] different factors that change those consequences, well, then I can come back and say: okay, now I have a view of the playing field.
Auxane: I have a view of your tool and of what similar tools and situations have done in the past, including non-AI ones. We also have to see that AI usually automates things, but those behaviors were there before. So, okay, what do we know about this activity? Once I have all of that, I know very precisely what we're talking about from a scientific perspective, and now we can get to the business aspect.
Auxane: What I call the business aspect is very much: let's discuss, and then I'll tell you what ideally should be happening and what should not. We're also developing some specific tools; I'm not sure how much I'm allowed to say about them, but there is a tool coming out soon [00:51:00] that companies will be able to use without us, to at least have a starting point as well.
Auxane: And when I say we in this context, it's my colleagues in consulting and I, so not the university. That will already give companies a first idea.
Louis: I'm just thinking back to the fact that people are now developing a lot of software-based products that are, not easy, but much easier to get into and release.
Louis: And I was wondering, since, as you even mentioned in this episode, a lot of the people creating those products are students or very young people, are there a lot of ethics-related problems that are not even related to AI, simply because they are young people who are not familiar with the industry they are tackling?
Louis: So they are [00:52:00] not being responsible with respect to the industry, and it's not even because they are in new technologies and using AI at all.
Auxane: Very much. Very, very much. I would say most of the issues with AI actually don't come from the AI; it's the use case. How do you use it? Why do you use it?
Auxane: One example: personality tests on AI, actual clinical personality tests that can be run by AI systems. They're literally just questionnaires, which can definitely be administered that way, and the calculations behind them can be done very quickly and very accurately by the AIs. Those are clinical tools and research tools.
Auxane: They should not be misused. But if you take that AI and you put it in a hiring process, you're giving far too much health data to a company. Well, with the [00:53:00] company knowing they're doing something wrong, I'm pretty sure, but without the participants in this hiring process knowing. This is misuse.
Auxane: So misuse happens a lot. People have ideas thinking, oh, it will make everything easier. It probably would, but we live in a world where people's privacy has to be respected as well. So let's not jump into this; let's first check that the use case is acceptable. Then, yes, another problem we see a lot: the diversity of a team is necessary.
Auxane: Responsible AI cannot be reached in isolation; it has to involve a bunch of people. And that means if you have a team where everyone looks the same, speaks the same, and comes from the same background, whatever background we're talking about, your bias is strong, whatever bias it is. That's a big issue in developing AI in [00:54:00] general as well.
Auxane: So: diverse teams, use cases. And that's not even touching on the legal aspect, which a lot of people are not aware of, at least not enough. I understand that for a startup it's very hard to have a compliance team or a compliance person. So there are all of those aspects. Lack of experience plays a role, of course, but I believe that a young person who is dedicated and wants to do it right can definitely do it right.
Auxane: So it's a lot of small things that, when you add them up, turn into AI that can do crazy things in crazy contexts, which shouldn't happen.
Louis: So the main thing for anyone wanting to create a new project or a new product would be to start by thinking about why you are doing it and how you will do it.
Louis: And to brainstorm [00:55:00] about that before, and while, building the technology.
Auxane: Yes. And also, once you have your use case very well defined, don't wing it. That's a very important point: don't try to wing it. Don't say, oh, we'll see the details later. No. Plan your details. Plan everything. Planning processes should be much longer than they currently are, from our understanding.
Auxane: Don't wing it. Take the time, because maybe you're losing time at the beginning, but that's a lot of time you won't have to spend fighting in a court of law, and a lot of money you're not losing to reputational impact if something turns out not to be according to the law. There are a lot of different ways you can lose money as a company because of a product.
Auxane: But there are not so many ways to make your product better than it is, not just technically, but also in terms of acceptability.
Louis: [00:56:00] Thank you very much for joining me. I just have one last question, and it's the simplest one for you to answer. Okay, no, I just lied, I have another question that just came up. But right now: where should people go to learn more about AI ethics?
Louis: Should they go on your platform? You said there were some Responsible AI resources out there. Do you have any names, so maybe I can add them to the description? Where's the best place to learn about this right now?
Auxane: So the best place, I don't know if there's one best place, but there's definitely my favorite place.
Auxane: That would be the website of the institute I work for, which has a lot of resources. I'll send you the link. I'm trying to find the course I was [00:57:00] talking about before. Yeah, there you go: the AI ethics course. I think it's fantastic that this exists. There are people from all over the world talking about AI ethics in panels and presentations, all very accessible.
Auxane: Everything is in English, I believe; some things might have been translated, and I think we might also have some subtitles. But yes, that's a fantastic place to start. Beyond that, any research institution will have a website where they usually offer accessible content rather than just academic content.
Auxane: Academic papers are cool, but honestly, they're quite boring. You can find much better things if you're just interested in reading about it and don't want to go into too much detail. I'll also send you the link to our research briefs page. That is [00:58:00] where we publish accessible things that are not academic, but very much applied.
Auxane: Actually, on YouTube you can find a lot of very cool conferences and very cool talks. Yeah, I very much like YouTube for that. Just Google Responsible AI, and I'm sure a lot will come up. I know I have a few videos out there on YouTube where I was giving talks and things like that, so I'm sure you can find many more of those talks out there.
Louis: Yeah, my last question: if there's a student or anyone developing a product right now, when should they reach out to you, and how?
Auxane: They can reach out to me on my personal website. So first, reach out to me on my personal website. And as for when, the ideal answer is the moment you know you're going to develop something, even before you start your full ideation process. When you just have your first idea and you can answer that why and what question, well, then you [00:59:00] call me, because now we can start talking and implementing step by step.
Louis: I think it's a very important field that is obviously not discussed enough, even if there is some content on YouTube. I don't know why; maybe people are not really interested in this part because it doesn't make money, or maybe it's just not accessible enough or spread widely enough. You said yourself that the content is already available and there are YouTube videos, but I'm sure most of them have like 10 or 100 views; they are not seen by a lot of people, even if they are there.
Louis: So I don't know what we can do for that.
Auxane: And they're also not well referenced at all. It's not big YouTubers talking about it, not people with a lot of visibility; it's usually very much academic. I think the content is not really camera-ready yet. The things are not pretty, and they're not accessible either.
Louis: Yeah, how would you [01:00:00] make it camera-ready and exciting? I feel like, even for me these days, since November 2022, lots of YouTube videos are all about "how I made $10,000 with ChatGPT" or "how I used ChatGPT to do X." I don't do those videos at all, and I just saw my reach get lower; people just want to see those ChatGPT videos. So how do you think we could make ethics more exciting or more interesting? I'd be interested in trying to help and make a video about it, but how can we make something where people go, oh nice, I want to know about that?
Auxane: Well, I think it's a use-case thing. So rather than talking about the concepts of ethics, keep that as a background thing, definitely,
Auxane: but I think we would need to start [01:01:00] with: let's take the things that don't work and talk about them, talk about why they don't work, and what questions to ask. Because every time I meet my students at the beginning of the semester, I always have a bunch of people who are there just because they're there, you know, and I'm like, okay, what's interesting to you?
Auxane: I do social robotics ethics, and they're like, yeah, I don't know, I just thought this would be a seminar to take, whatever. Okay, so let's sit down and start with: what's your favorite robot in pop culture? Ah, that starts to spark a bit of light, a few questions. Then we talk about the pop culture aspect, then about what's in the books as well, and how they can relate that to what they know. From this, once they're interested in the technology and its applications, then we can get into ethics, because then we have something to think about as well.
Auxane: [01:02:00] So I think that's where we need to start. It starts with the things people know, making what ethics means relatable for them, because ethics is easy, in a way. It's hard, of course, but it's easy; it's very much everyday-life stuff. People don't realize they do ethics all day long: should I answer that text now, or should I answer it later?
Auxane: Are you going to be consequentialist about it? Are you going to be deontologist about it? Are you going to be utilitarian? What's going to be your decision-making process? So it's very much about making people relate to it much more. I think that's what's missing.
Louis: Yeah. Well, thank you very much again. It was fun to learn more about this.
Louis: Yeah, I really loved talking with you. So it was definitely worthwhile.