Lisa Woodley: So, big Star Trek nerd. Like, the day we get to Lieutenant Commander Data is the day I'm okay, like, getting emotional support from a robot.
Gina Trapani: (Laughing) Right, right.
(CATALYST INTRO MUSIC)
Gina: Hello and welcome to Catalyst, the Launch by NTT Data Podcast. I'm Gina Trapani and I lead product at Launch. And before I move on one more sentence, I just want to take a moment to tell you what I mean when I say the word Launch. So, Launch is a talented group of people inside of NTT Data. NTT Data is one of the world's largest IT services firms. And Launch's mandate is to design and build beautiful, easy to use products and platforms with our clients. Launch's clients are some of the biggest companies in the world. Nonprofits, sometimes startups. Companies that you've heard of, whose products you use, companies like NASDAQ and Goldman Sachs and Puck and the Metropolitan Transportation Authority, the National Audubon Society. We work across industries and we strategize, ship and scale world-class digital experiences. And that's why we're here, because we just... we love talking about this stuff! We love making great software. Because the truth is, there's so much crappy software in the world. Let's make it better. Let's make it better. We all live in software every day, so we want to make it better. So that's all. Thank you for listening to my mini TED talk. As always, I'm joined here on the show today with my business partner, Chris LoSacco. Hey, Chris.
Chris LoSacco: Hey, Gina. I feel like we should just stop the podcast right here. That was...
Gina: Was that it? (Laughs)
Chris: That was a great pitch. That was really good.
Gina: Here's the thing. I'm not pitching. I'm not selling. I'm just explaining because there's a few words in the title of the podcast now.
Chris: We need to explain. It's true.
Gina: It's 2024. Let me just make sure that we know what we mean when we say Launch. Because, you know, it's a verb. It's a noun. You know, you gotta... (Laughs)
Chris: You can take it a lot of different ways.
Gina: You really can. You really can.
Chris: And premium custom software development is not necessarily the first association you have with Launch.
Gina: That's it. That's right.
Chris: So that's why we need to be explicit about it.
Gina: That's right. So, yes, thank you for that.
Chris: It was great. We have a very special guest today, Gina.
Gina: Yes! I'm very excited about this.
Chris: It's not just you and me in this virtual studio.
Gina: Man, the coolest thing about Launch is that we have such, like, smart and talented and experienced and opinionated colleagues.
Chris: For real. Yes.
Gina: And we have one of them on with us today. She's one of my absolute favorites. This has been on the calendar... It took us a while to get this on the calendar. I'm so excited to introduce Lisa Woodley to the show. Hey, Lisa!
Lisa: Hey. Hey, Gina. I'm definitely opinionated.
Chris and Gina: (Laughing)
Lisa: I don't know about the other stuff, but...
Gina: I love it.
Chris: We'll take it.
Gina: We'll take it. We'll take it. We love opinions. We love opinions grounded in experience and seeing things, and I know you have seen some things. Lisa, you're a vice president at Launch and general manager leading the Northeast region. But you've had many other lives. You also, I learned recently, and we've been working together for quite some time, teach a class at Rutgers, which I want to hear about.
Lisa: Yep.
Gina: And, we were in the Launch Slack, and we were chatting about, you know, all the things you chat about. All the different, you know, problems and aspects of building for our clients. And, you know, the whole world is talking about AI right now. In every single way, right? And our clients are looking at us and going, what about AI? And you brought up, you were talking about, you know, designing with AI and for AI in an ethical way. And the thread took off, and we were like, we got to talk about this on the show. So, yeah. So tell us a little bit about yourself, about your background and why you're thinking about this particular topic.
Lisa: Yeah. So, I mean, you know, the short answer is, my nontraditional background started in college. I have a degree in philosophy. I don't have a degree in design.
Gina: Amazing.
Chris: Very cool.
Gina: Love it. Amazing.
Lisa: And I actually was on my way to getting an English degree, took my first philosophy class, fell in love with it. And so, ethics has been a part of my being since that point, because it was the ethical questions that made me love those classes. But I'm, you know, a sort of self-taught designer. I started to build up a portfolio and ended up, after a brief stint as a singer in a rock band, which I had hoped would be my career...
Chris: Ooh.
Gina: This is so good.
Lisa: Turned out it wasn't.
Chris: If we have time, we're gonna come back to that. That's... There are some stories.
Gina: Oh my God, that is amazing.
Chris: Yeah.
Gina: I'm not surprised by that at all. I see you with, like, a Joan Jett vibe going on, Lisa. I love that. Okay.
Lisa: Yeah. So, unfortunately that didn't pan out. But I was also the designer of the band, meaning the flyers and that, right? So you start to build up a portfolio that somehow I managed to turn into an actual career. Uh, no idea how. So I came up in design as a designer, went to work for an advertising agency. Pharma advertising. E-business. So, that'll give you an idea of my age. (Laughs) I'll stop there, because it wasn't digital back then. And to be honest, the path that led me here... Design has always been at the heart of what I do, but the path that ultimately led me to accepting the role as general manager over here at Launch is because Launch truly is design-led. So to me, it wasn't sort of a... It wasn't a strange, you know, connection to go from leading designers to leading an entire region where it's end to end, from strategy all the way through to development. Because at the end of the day, we're creating experiences, and that's really where my passion has always been. So, you know, I moved around, you know... Singer (Laughs). Artist, creative director at an agency, and then ultimately ended up here.
Gina: I love this. Some of the most talented and interesting people did not necessarily get, you know, the degree in exactly the field where they wound up. Especially in our business and our industry. Like, having that windy road is, that's a wonderful thing. I love that you started as an English major and went to philosophy. So, ethics is a lens through which you look at things, because that's just... Your formative years. That's what drew you to philosophy. And so, I'm sure you've had a lot of interesting conversations and viewpoints about technology and ethics through the years. AI is a big one, though. Like, AI is like...
Lisa: Acute.
Gina: Is... There's something particular about it. Because you've got AI taking over the world and taking jobs and, you know, all those things. But like, you know, the internet... Like, I'm... I've been around for a long time, too, Lisa. I think as long as you have been. So I remember the time before the web and before the internet, right? And you could have said a lot of things, right, about the internet and the web, right? And yes, every tool you can use the wrong way. And it's like, well, you know, it's kind of the same thing. But it's a little bit more complicated with AI, right? Like it's... It's a little bit more complicated.
Lisa: Yeah.
Gina: Yeah. So tell me about it.
Lisa: Yeah. And look, there's two aspects that people forget about AI when they're thinking about the ethics of AI. One is how AI is leveraged to take your data and... Let's use the negative situation: take your data and manipulate you.
Gina: Sure.
Chris: Mhm. Mhm.
Lisa: Right? There's that aspect of AI. But then, if you also think about... So, the other thing I left out, and you mentioned it, I teach an intro to user experience design in a master's program at Rutgers University. The other piece of it that came up there had to do with, as conversational AI starts to become more prevalent, is it okay in all situations if somebody doesn't know that they're talking to an AI? Right? That's an ethical component to it as well.
Gina: Mm. Mhm.
Lisa: It's not as nefarious, in my opinion, as some of the other conversations we have around AI, around manipulation of data and exploitation. But it's still something that concerns me. And then, of course, there's the bias aspect behind it as well.
Gina: Yeah. Yeah.
Lisa: So you have to look at those different lenses when you're thinking about it. But ultimately, AI to me isn't the true quote-unquote "threat." And I don't think it's a threat. I think, you know, it's like any tool in the right or wrong hands.
Gina: Yeah.
Lisa: To me, it really has to do with the data the AI is using, and how that data gets used. It's just that AI has accelerated it, right? Because forever, marketing has been using research and data to influence people.
Gina: Yes.
Lisa: Or in bad, negative cases, manipulate them. And that was before AI. The problem is, with AI it's on steroids. It's, like, exponentially increased in terms of the speed and the reach that that manipulation or influence can have.
Chris: That's right.
Gina: Yeah. That's right.
Chris: You just said a lot of things that we could unpack. Because, you know...
Gina: I know. There are like 17 roads I want to go down.
Chris: Exactly. But, so, just to build on this idea that it's about the data, because I think this is... It's such a good point, right? So, I think back to, like, when did Google start messing with self-driving cars? It was a while ago, but not that long ago, like 2010, maybe? Like, 15 years ago.
Gina: All relative to how old you are. (Laughs) Yes.
Chris: Yeah, I mean, right. Well, sure.
Gina: Not that long ago, forever for me.
Lisa: Yeah. Yeah.
Chris: And I remember, you know, the big question at that point was like, how are they gonna deal with problems where the safety of the driver and the safety of, like, pedestrians are at odds, right? How do you... The trolley... Exactly.
Lisa: The trolley problem, right? Philosophers have been talking about this forever, right?
Chris: Right. For centuries.
Gina: Yeah, it is a philosophy question. That's right.
Chris: Do you reroute the train, right, to save a family member, but you know that ten people are gonna die? But the question was always like, how are the programmers going to deal with this, right? Like, how are they going to write the code so that they know which choice to make in any given situation? Are there going to be preferences or knobs that you can flip on? There was a lot of debate about that. But what is interesting now is that the questions are different. Because what's happening is, these large language models are consuming these very large data sets as training data, right? You're doing machine learning on these large data sets, and then you're just letting the system make decisions. And you don't know. You, the programmer, don't necessarily know what is leading to those decisions. Which is like... Kind of cool in some sense? But also really scary. Like, that is like, what do we have on our hands? And to hear, you know, some of the developers who are working on these chat-like interfaces, and they're like, you know, I'm not exactly sure why it's hallucinating right now, or why it gave a given answer. You know, we can't say, because that's just not how this works? That's really intense. And so, to bring it back to, like... Then you say, well, you know, the ethics of what you're producing is only going to be as good as the data that you feed into it. And when you're working with such large data sets, that seems really hard to control. So I guess my question is like, are there approaches that we should be thinking about, or are there, like, guardrails that we should be putting into place? Because it's a whole different playing field now, right? It's not just like, you know, you can do this and you shouldn't do that. It's like, we have to rethink how we are formulating some of these systems.
Lisa: Yep. And so, that leads me to the quote that I leave with my students at the end of the ethics lecture, which is, you designers are the guardians of the human, and it is your job... (Laughs)
Gina: Whoa. Yeah. (Laughs)
Lisa: ...to be that voice when this stuff is happening. I lay it on 'em.
Gina: That is heavy-duty stuff.
Chris: Yeah, it is heavy.
Gina: But it's real.
Lisa: Because what you, yeah, what you just described, Chris... And it's... Look, developers are thinking about the code that they have to develop. And they're all human, they want to do the right thing. But they don't necessarily have the language and the understanding and the tools and methods to really be thinking about, when we're designing this AI, what is the impact to the human?
Chris: Yes.
Gina: That's right.
Lisa: And so, what often happens is... We see it all the time in business. Because there's not an interface, right? There's not a visual thing. They don't think they need a designer on the project.
Chris: Right.
Gina: Yeah.
Lisa: Because they're like, you're not designing anything. But it's like, no, you're designing a relationship now. And that becomes even more important. And forget about it being visual, the toolkit for designers has totally changed.
Gina: Yes.
Chris: Yeah. That's right.
Lisa: And, you know, so we're coming up with, you know, the new version of wireframes and prototypes: things like conversational architecture. And that's not a technologist creating that, that's a designer. And so... And the designers want to be at the table for this, so my ask is never really to the designers, because they're like "yes." It is to our clients and to technologists everywhere: please get a designer involved early.
Chris: Yeah.
Gina: Right.
Lisa: Because they're going to be the ones thinking about, well, how would somebody interact with this and what is the impact that that's going to have on them?
Chris: Totally, totally.
Lisa: That's my piece of advice. That guardrail is to get a good designer whose job it is to be the voice of the human as you're creating this technology.
Chris: Right. And question some of these decisions that are being made as you're creating something. Yeah.
Lisa: Yeah.
Gina: And, you know, be in a position where... You know, assume that you're not going to think of everything up front, but be in a position where, when you realize that something's going wrong, you have the ability to fix it.
Chris: Yes.
Gina: Quickly.
Lisa: Yes.
Gina: I mean, the thing you said, Chris, about this idea... And this is something that just burns me, you know, as someone who was a, you know, a backend engineer and has written a lot of code in my life. I mean, one of the tools you have as an engineer is this ability to introspect. And say, this bug is happening. Now I'm going to trace back, okay? What were the calls that were made? What were the functions? What was output? What was the database query? What was the... And then I can trace it all the way back and say, ah, I see why these particular instructions were executed and why the output was what it was. You can't do this with AI.
Chris: Nope.
Gina: Because the data set is so big, and because the... Actually, I'm not totally sure why we can't. It feels like we should put some more effort into understanding how to debug...
Lisa: Here's the thing I was about to say.
Gina: Yeah.
Lisa: Like, I don't have that answer right now, but I think that is something that we can do. But again, I would say, you've got to have the person looking at it to say something's going wrong.
Gina: To say something's going wrong. Okay.
Lisa: Because in this case, it's not as obvious that something's going wrong.
Gina: That's right. Yeah. I mean, you said something earlier, Lisa, about the question of like, you know, in a conversational AI product, for example. So, ChatGPT-like. When are you, you know, transparent about the fact that you're not speaking to a human, right? I mean, I think, you know, a lot of us have used ChatGPT. I view ChatGPT the way I look at a Google search. Like, I'm very aware that this is a machine, right? You could think about a little customer service bot on an e-commerce site, you know.
Chris: Mhm. Sure.
Gina: Maybe answering questions about a product. That's kind of a gray area for me, but I would assume that that is, you know, maybe a human is involved, but, you know, alright, probably machine. Right. There's this app, and I don't remember the name of it, but essentially it created like a, a friend. An AI-powered friend. You know, it was a mobile app, and you'd set it up and say, you know, I want this person to be kind of this gender and look like this and have this background, and you give it information about yourself. And this app would essentially, like, text you every day. Like, nice, you know, messages. And it would ask you for more information about yourself. And so, I used it. I used it for a few weeks. And I think what was so fascinating to me, even fully understanding that this was an AI, it got very personal. (Laughs) Like, very quickly.
Chris: Yeah.
Gina: And I had to uninstall it. I was like, this feels weird. I don't like this. I, you know, the constant sort of prompts to give more information, right? Because the more information that the, you know, the system has, the more personal the messages can be. This is, I mean, this is, you know, the basis of all marketing data. But it's different when I'm being shown a banner for, you know, cool sneakers that look a little bit like the ones I just bought, right? Or when I have, you know, a fake special friend, you know, AI-powered friend, like, asking me how my day went.
Chris: Texting you, yeah.
Gina: And like, you know, how's that... You know, whatever. How am I feeling today? Like, it's a very different thing. So even if you're up front about... And ChatGPT will often say, you know, I'm not here for medical advice. There are certain, like, roadblocks that they've clearly built into the product. Even then, because it's that interface that looks just like a text with my friend, you start to, sort of, suspend your... (Laughs) You know, your disbelief a little bit, and be like, oh, this must be the answer. Or maybe it's just human nature, because it's conversational, and that's how we talk to one another.
Lisa: Yeah. And you know, in the situation you talked about, it is kind of interesting in that... You called out a very specific scenario.
Gina: Yeah.
Lisa: But we regularly have conversations with clients where they want to apply something like that...
Gina: Yes.
Lisa: ...To an initiative that they're doing. And there is nothing wrong with that. But again, you need those guardrails around it.
Gina: Yeah, that's right.
Lisa: Because then, it's not necessarily an opt-in thing, right? So then it becomes the question of, at what point are you telling them you're not talking to a real person? And I will give you, straight from my students' mouths... The answer that they give me to this question that I ask them has changed over the last few semesters, and it's blowing me away, the direction it's going. So, I don't know if you saw, it must have been like 5 or 6 years ago, Google I/O came out with that new... And they did a big reveal, and they showed it calling a restaurant.
Chris: Oh, I remember this.
Lisa: So the AI called a restaurant and made a reservation. And it was amazing. You couldn't tell that it was a bot that had called the restaurant.
Chris: Right. It was, like, talking back and forth, right? Like...
Lisa: It was talking back and forth. The woman at the restaurant got confused. And you know how with most bots, when you talk to them, like, if you stray off of the path, like, they don't know what to do. It figured out what her confusion was, and was like, oh no, that's not what I meant. Crazy. So, when I was starting to add the module for conversational AI into the class, I played that. And I, you know, asked the question, you know, so at first I said, has anybody seen this? Do you notice anything about either of the people talking? And most of them don't realize it's a bot. And then I'm like, well, the person on the left who called the restaurant is not an actual human. And then they're like, whoa. And then I ask them the question, is it okay that the lady who answered the phone at the restaurant didn't know that wasn't a person? And they were like, yeah, that's okay.
Gina: Wow.
Chris: Wow.
Lisa: And I said, okay, now let me give you a scenario. You're a life insurance company. My husband just passed away. And I need to call you because, you know, I'm in a very emotional state and there are final things to settle. Is it okay if I don't know I'm talking to a bot? And at the beginning, they were all like, absolutely not, you have to tell them. And look, I'm still in the camp of, in those... And believe me, there's clients who are trying to figure out a way to crack this code of, how do we use it? We don't have any client asking us to use it in that kind of scenario.
Gina: Sure.
Chris: Sure, yeah.
Lisa: But it's not hard to imagine that there might be, at some point, somebody trying to do that. The thing that blew me away was, two semesters ago, a couple of the students said, it's okay, in fact, I'd rather talk to the bot in that situation. And I said, why?
Gina: Whoa.
Lisa: And now, get this. Because I believe that somebody could program more empathy into that bot than you get from the average person who's answering the phone.
Gina: Oh. Oh. Oh.
Lisa: And it like...
Chris: Oh my God.
Lisa: Right in the heart.
Chris: Yeah.
Gina: That... Okay, okay. I've now been plunged into a deep depression.
Lisa: I don't know if it's a generational thing. Yeah...
Gina: That is not... Yeah.
Lisa: So there's cynicism there too, right? Of, when I call I'm not getting the right help. I didn't even know how to respond to that.
Chris: Wow.
Gina: Yeah. That was not where I thought we were going here. But okay. Okay. (Laughs)
Lisa: No. And look, it wasn't a general thing, but it did, it got me thinking, though. Got me thinking of two things. One is, we need to better empower contact center reps who answer the phone. Like, you know, you need to make sure you've got the right people answering the phone.
Gina: Right.
Lisa: But on the other hand, there's a lot of faith in technology. And that answer scared me. Because it's a younger generation that does not necessarily have as much skepticism towards this kind of technology as I do.
Chris: Yeah.
Gina: That's... That's so interesting. I mean, right. We want to create call centers where people have the headspace and the time and the resources to be in the right emotional place to speak to someone in a terrible...
Lisa: Exactly.
Gina: And I mean, you know, capitalism, right? But it's so interesting. Your students are basically like, let's just outsource the emotional labor of being empathetic to someone who just suffered a great loss, because no one in real life is really going to have the time, the resources to do that. (Laughs)
Lisa: I know. And, you know, and look, and I'm a... So, big Star Trek nerd. Like, the day we get to Lieutenant Commander Data is the day I'm okay, like, getting emotional support from a robot.
Gina: (Laughing) Right. Right.
Lisa: But, you know, and I just... I suggested to them that they had a little more faith in technology than they probably should.
Gina: Right. Right, right. So have you seen over time, then, that faith grows bigger, right? Like, your younger students are more likely to say... So, is the feeling like, don't need to know if it's a bot? Just, you know, assuming that I'm having a transaction with a, you know, maybe a human, maybe a bot, doesn't really matter. Just please, you know, get done what you need to get done with me, don't need to disclose? Does the opinion start to go that way?
Lisa: Yeah. I mean, I think that's part of it, is like... It kind of... I think it's changing a little bit. But they're not taking the potential for manipulation as seriously as I think they probably should... And I don't want to generalize, by any means... Because of a comment a couple of students made.
Gina: Sure. Of course.
Lisa: But the one thing I am seeing that I will... There is a little bit less of a concern about data privacy...
Gina: Yeah.
Lisa: ...Than I'm comfortable with. And part of it, if you ask students, when I talk about this, they're like, well, everything's already out there.
Chris: Yeah.
Gina: Mhm. Yeah. They've sort of given up.
Chris: I think a lot of people feel that way. Yeah.
Lisa: Yeah.
Gina: Yeah. It's like... Right. It's already out there.
Lisa: And if you get to, if you get young enough, they're like, you know, my parents have been posting pictures of me on social media since I was a kid. (Laughs)
Chris: That's real. Yeah.
Gina: Right. Right, right. Yeah, yeah. Yeah. Right. And I mean, look, this is, like... And the more data that you provide, or the more data that someone has about you. I had LinkedIn, you know, write my bio. And it was an AI thing, and it, like, scanned my work history. And what it came up with... It was a little generic, but I was like, oh, like, this is pretty good.
Chris: Pretty good.
Lisa: Yeah. Yeah.
Gina: And I know it was good because I gave it more data. You're actually incentivized to give a little bit more data. And of course, companies want more data about you because they want to create that relationship with you. You know, that connect. They want to, you know, provide that service to you. And it's easier to do when they have more information about you.
Chris: Right. But see, LinkedIn is low-stakes, right? Like, what if you raised the stakes? What if you said, I'm going to replace visits to my primary care physician with a conversational artificial intelligence that is giving me guidance, preventative guidance, on, like, healthcare things. And, I mean, eventually maybe prescribing drugs, question mark? Like, this is where the ethical questions start to come in.
Lisa: Right.
Chris: Because it's like, well, how much... How much do we allow these things to guide our lives in ways that just feel very high-stakes? Like, you know, if you make a wrong call or a wrong recommendation... I mean, in some situations, like, it could be life and death, you know?
Gina: Right. I mean, devil's advocate, there's a possibility that an AI doctor would actually make fewer mistakes than human doctors who are, you know...
Chris: Exactly.
Gina: ...Busy and harried and, you know, look at their, you know?
Chris: Right.
Gina: Like it's... Like it goes from all sides.
Chris: Right.
Gina: Yeah. Not that I'm ready to, you know, get prescriptions from an AI. But...
Chris: Yeah. That's, I mean, I think that's what's interesting, to go back to something you said at the beginning, Lisa. Like, that's what's interesting about, like, maybe offering options and just disclaiming them, and saying, like...
Lisa: Yep.
Chris: You know, if we make it really clear that you are interacting with an AI and that there are boundaries around what you can reasonably expect. You know, maybe as the field advances, and the capabilities advance, we also... We up the clarity, you know? We make it, like, much more explicit what's happening.
Lisa: Yeah. And I also go back to, to what both of you were saying earlier, which is, my challenge to the technology industry is, how do we make that black box transparent?
Gina: Yeah. Yes.
Lisa: So that we can get a better idea of how the decisions are made. And then, what's the governance process in place to identify when something is going wrong or not? When decisions are being made that maybe aren't the right decisions. Can we do sort of a regression test... (Laughs)
Gina: Yeah.
Chris: Right.
Lisa: Like, go back and try to figure out, you know, do some root cause analysis? Because right now, Chris, to your point earlier, like we can't do that. Because it's a black box.
Chris: Right.
Gina: Yeah. There is like a garbage in, garbage out thing happening too, right?
Lisa: Yeah. Yeah.
Gina: So, like, a lot of the AI is based on these large language models that come from these, like, you know, enormous data sets that are pulled from... I couldn't... Like, I couldn't tell you where ChatGPT... Like, from the internet, you know?
Lisa: Which, everybody knows the internet's a great source of truth. (Laughs)
Gina: Right!
Chris: Ugh.
Gina: Right. Great source of truth. Right, exactly, exactly. But there's a little bit of, like, who is doing the governance and the choosing of that data? Who looks at that data on the way in and says, like, this is, you know... And you really are making decisions about, like, the knowledge of humanity, right? But not just on the way out. Also on the way in. And I think that, you know, we think about the user experience, right? And designing the front end, and that experience of the customer. But there's also the other side, which is like, you know... Loading it up with the data that it needs, that it's running off of.
Chris: Yeah. Also...
Gina: Yeah.
Chris: How do you define garbage in? You know?
Gina: Right. Who defines garbage? (Laughs)
Chris: Right. Because garbage is subjective, right?
Gina: Yes.
Chris: And there are things that don't easily fall clearly into garbage or not garbage, right? Like... I mean, to go back to the bias question for a second. Like, what if, you know, historically, a certain gender was associated with a profession. And then you ask an AI, like, you know, who's going to be a doctor or whatever, and then you get an answer that's like, not a great answer, but it's because it was trained on a certain amount of biased data, you know? And so, how do you correct for that as you're loading it in, or after the fact, to say, well, we've got to make sure that we understand this is learning on a "flawed," quote-unquote, data set?
Lisa: Yeah. And look, that is the key. And there are armies of people working on this problem.
Gina: Yeah.
Lisa: And in fact, I just got Joy Buolamwini's book "Unmasking AI."
Chris: Cool.
Lisa: And if anybody ever watches the Netflix special "Coded Bias," it's about her and the Algorithmic Justice League. That is an awesome... I recommend it to all my students. She's talking specifically about this problem of, how do you prevent the data that comes in from being biased? But the first and most important thing is to recognize, you cannot prevent biased data from getting in.
Chris: Right.
Lisa: You can't. So stop trying. There's going to be some level of bias coming in. Then you have to figure out, how do we mitigate that in the decisions that the AI is making?
Chris: Yeah.
Lisa: And that's really where the governance has to come in. Because you're right. We're... Like, who is the arbiter of truth in terms of what is the correct data to come in? We've all learned... Especially if you look at historically underrepresented people having challenges getting mortgages, because they look at historical data, right?
Gina: Yes.
Chris: Exactly.
Lisa: And, you know, should a woman be a doctor? Well, let's look at historical data. Right? So we've kind of learned that... Some of that stuff we've gotten better at, like, let's not pull historical data. But there's so many other things that come into it. And, Gina, to your point, there's this vast quantity of data that we just have to accept is going to be biased.
Gina: It's flawed.
Lisa: And we have to figure out, how do we manage it?
Chris: Yeah.
Gina: Right. Right. And how do we respond when we see the output being skewed in the wrong way?
Chris: Right. There is an interesting collision between the technology, right? And the design of these kinds of systems. And philosophy. Or, like, societal pressures. Because what is acceptable also changes over time. And differs based on the culture that you're a part of, or the population that you're in. And maybe there's a future where these things are, like, tunable based on the context that they're deployed in. Because you may want something to be, you know, very safe in one situation and a little more adventurous in another. And you want those, like, controls to be aware of where the AI is deployed. Even as I'm saying this, though, I'm like, not totally sure. Like, it's a very murky pool.
Lisa: There's not going to be an answer. It's going to be a way of thinking about it.
Chris: That's a great point. Yeah.
Gina: Yeah. One of the wildest ChatGPT features that I have played with is, like, that you can ask it to speak in a certain tone or a certain context, or in the way that a particular public figure would say it. Like, you know, if there's someone who's done enough writing. So, you know, using the phrases that... The way that, you know, we now have AI suggesting what email response we should write. Like when Gmail suggests certain phrases that I say all the time, and then seeing the suggestion, I'm like, oh, I guess I... I guess I do say that.
Chris: (Laughing) A mirror holding it back up to you.
Gina: Yeah. It's... Oh, yeah, yeah. Maybe I should try to find a word other than awesome. (Laughs)
Chris: Right.
Gina: So there's, like, a little bit of that. But I also like what you're saying, Chris, about context and, you know... What was... I mean, language changes so quickly, right? Things that were acceptable that you could say, you know... Acceptable. Or fine, you know, wouldn't raise an eyebrow in a room even two years ago, you know, might, might now. Or the opposite. Lisa, I'm curious. So, I mean, we talked about how you talk to your students about this, and designers. But you're also, you are out talking to clients and companies, and we're in a moment in the industry where AI is here. And I think there's a sense of, like, FOMO from executives and leaders, who are feeling like, I gotta get on this train. It's going to get ahead of me. I'm going to lose to my competitors. You know, these... You've got this whole rash of startups who basically can't raise VC unless they have, like, the word AI on their front page.
Chris: So true.
Gina: And it's just... It's a little bit of a feeding frenzy at the moment. I'm curious, like, what do you hear when you talk to clients? When they say AI to you, when they ask you about AI or the things that they're worried about with AI, I'm curious, like, what are you hearing? And how do you advise them away from doing stuff that maybe isn't, you know, very kosher, from an ethical standpoint? (Laughs)
Lisa: Yeah. So, you gave me a lot to unpack there...
Gina: Yeah. Sorry.
Lisa: So, every single... Like, the AI/ML conversations have really ramped up in the last couple of years. It really varies. Clients are looking for one of two things from it. I call it topline and bottom line. Like, they're either looking for it to save money, right? So, either, can we replace people? Or even just using things like generative AI with your coders, so you can code faster.
Chris: Right.
Gina: Right. Copilot.
Lisa: How do we do things faster, more efficiently? Then there's the top of funnel, which is... Not just the conversational AI, which I would argue is not top of funnel, it is cost savings, but sometimes clients are like, people want it. (Laughs) But there is the top line of the data: how do we get the data, and how do we use the data so that we can, you know, either target people or pair the right people. So, I mean, those are the conversations that we're having. There is that fear of missing out. But then they're also recognizing... They're seeing it work in other places. And so they're really latching on to where they've seen it work in other places. And we really have to try to help them contextualize it. Because it might be working in one instance, and it's not a one-to-one application for what they're trying to do. But ultimately, right? So they're like... You know, inevitably we'll see somebody with an extremely complex business being like, oh, I want to be like Amazon. And it's like, okay. We'll try to get you there. It's a little more complicated than that.
Chris: Right.
Lisa: Or like, you know, you want your website to look as simple as a Google search page. It's like, okay.
Chris: I've heard that one too.
Gina: (Laughs)
Lisa: Let's unpack that, right? So, whenever they come to us with this conversation... AI/ML, there probably is a component of that in the answer. But we don't want to start there. We have to start with, what are you trying to do? What is the problem? And it goes back to the basics of design thinking, right? Let's identify the problem. Let's identify what you're trying to do. We know AI/ML is in our tool belt. And so we can bring that to bear. And we can bring that lens, even when we're looking at, what are the challenges that you're trying to solve. But I really don't want to go in and say, AI/ML, I want to do something with it, what can we do? We'll do that, and we'll have that conversation.
Gina: Right.
Lisa: But inevitably, it's always going to circle back to, I don't know. What problems are you trying to solve?
Gina: What problems are you trying to solve.
Lisa: And then let's figure out how we can apply it.
Gina: Right. In your business. Not, like, the thing you saw over there that looked really good, or...
Lisa: Yeah, yeah.
Gina: Yeah. Well, I mean, one of the most interesting things about ChatGPT... And I'm forgetting what the statistic was. But its adoption curve was so steep, I think probably the steepest in the history of any sort of, like, you know, web-based application, right? Because people just got it. Because it was a text-based... I mean, people understand how to use chat. And that made a, you know, a ton of sense to me. And really, the interface itself is so simple. I mean, it's just a text box.
Chris: It's a text box.
Gina: And then a response. In chronological order. And what I loved about that, I was like, oh, we can get back to basics. Like... I love when somebody says, you know, I want our home page to be as simple as Google. Because I think most of the time we just try to jam a bunch of stuff in there, and, you know, just do something completely new that no one's ever seen before. And it's like, you know what? Just give something to the user that they immediately understand and know how to use. Right? And ChatGPT was, like, such a good example of that. Like, this is an interface that people get. And that's why it took off. Chris, I saw you sort of, like, shrugging. I want to hear.
Chris: Yeah. I mean...
Gina: This is good. (Laughing)
Chris: I would counter it a little bit. Because I... Again, I think Lisa said it beautifully, which is, you have to start by identifying the problems and then applying some design thinking before you say, oh, ChatGPT is so great, let's just put a chat interface in our platform. And it's like, well, maybe.
Gina: Right. Right.
Chris: But how are people using your platform today? And where are the pain points? And how could we speed things up? And maybe the way that artificial intelligence or machine learning manifests in your platform is, like, completely different. Maybe it's behind the scenes. This is the thing. People don't realize... Like, autocomplete? Autocomplete is machine learning. Like, really good suggestions when you're typing? That's machine learning. And, you know, the interface, quote-unquote, is just an enhancement on the compose box of whatever you're writing. So, I think there needs to be a little more thought from companies, rather than just saying "I want my home page to be like Google." It's like, what am I going after? What do I have today? And then how do I bring good design thinking, good product thinking, to bear, to say, here's how we can realize that in this platform. And sometimes it's going to be similar to something that's out there already, either one of your competitors or one of the big... But sometimes it's going to be something that is very different. And really, the companies who do well are willing to take a leap when they see something that's, like, unique to them, that is not out there yet, and they're like, but this is the right way to do it. Let's go after it.
Lisa: Yeah.
Gina: I can't argue with that.
Chris: Okay. Good. Thank you.
Gina: (Laughing) I agree.
Lisa: Yeah. And look, and it is... To bring it back to design, that's just the basics of good design. And so when I hear, like, "oh, I want it to be like Google or ChatGPT," our job is to translate what the client's asking for, too. Because they may not literally mean I want it like Google. They mean, I want it to be as simple as it possibly can be. I want it to be intuitive to use. And that applies to any business. So that's sort of our job as consultants, to translate that speak into what they really want, which is simplicity.
Gina: Right. What are you actually saying? You want simplicity. Yes.
Lisa: Yeah.
Chris: That's right.
Gina: Look, I think in a lot of enterprise software, there's a lot of complexity and a lot of interfaces with a lot of checkboxes and fields and dropdowns and date pickers. And I think that there's a little bit of a longing for, like, just the simple... I mean, certainly from me, and I think for some of our leaders, and I think sometimes that can be what's underneath that. But it's true. You really have to understand, you know, what's here? What are you asking for?
Chris: Yeah. We should wrap up. But before we do, Lisa, can we just talk for 10 seconds about your band? Can you tell us what it was?
Lisa: (Laughing)
Chris: What instrument did you play? Were you the singer? Like, give us a little bit of background before we wrap up.
Lisa: So it wasn't, it wasn't brief. It was about ten years. It was three bands.
Chris: Dang.
Gina: Dang.
Chris: That's a music career.
Gina: Yeah. Yeah. No, that's a music career.
Lisa: No. Well...
Gina: For sure.
Lisa: Career implies one got paid on the regular for it. (Laughs)
Chris: I mean, nobody said music is easy. Let's just put that out there.
Lisa: Yeah. No, it was... Three bands, all three with my husband. He wasn't my husband at the time, for the first band, but then he became my husband by the time we got to the second band. (Laughs)
Chris: Aw.
Gina: Aw. This story keeps getting better and better. And I just want to go back. You may not have made money, but you made something. You made music.
Chris: Amen. Yes.
Gina: To me, that is a career.
Chris: Yes.
Gina: That counts.
Lisa: Yeah. Yeah. Yes. And it started... I don't know, I was in my early 20s. In the '90s. Like, you know...
Gina: That's wonderful. That is wonderful. I wish I had gone to one of your shows. I love this. I'm going to hit you up for mp3s after this.
Chris: I was going to say, we'll link your SoundCloud in the show notes.
Gina: I was gonna say, maybe, maybe... Yeah, maybe we need a new show song. I don't know.
Lisa: (Laughs)
Gina: This is great.
Lisa: By the way, if I may throw this in there, we have no shortage of musicians in Launch.
Gina: Oh, I, I love it. Yeah.
Lisa: Like, we could easily form a band in Launch.
Gina: Easily.
Lisa: I think that, yeah. No shortage. Clinton Bonner, our head of marketing, he was a singer in a band.
Chris: We're going to have an offshoot podcast.
Gina: I think there is a connection between music and technology and design, and... There definitely is.
Lisa: I'm going to lay a book on you. Old book. Gödel, Escher, Bach.
Chris: Oh, I love that book.
Lisa: I don't know if you've ever read it. Yeah, because that's... It talks about that combination of, like, math and art and music and...
Gina: Yes, I love it. I love it. This is great, Lisa. We absolutely loved having you on. We're going to have you back again. I feel like we only just skimmed the surface here.
Lisa: I love it.
Gina: But we really appreciate you being on. Love working with you. This was a lot of fun. Thank you both.
Lisa: Oh, this was awesome. Thank you so much for having me. Yeah, I could talk about this stuff all day long.
Gina: Yes. If you are sitting around thinking to yourself, what is my company doing with AI? And are we cost-cutting? Are we innovating? Or, this thing over here looks kind of interesting. If you want to have that conversation with us, we would love, love, love to hear from you. Send us a note: catalyst@nttdata.com. We read every single one that comes in. We absolutely love to hear from you. Yeah. I think that's it for today. Thanks everybody.
Chris: Thank you so much, Lisa.
Gina: Let's get back to work.
Chris: Alright.
Lisa: Thank you.
Gina: Thanks, Chris. Thanks, Lisa. Bye bye.
(CATALYST OUTRO MUSIC)