Podcast
Ep. 462
December 3, 2024

From anxiety to advantage: How to tackle employees’ AI fears

Christen Bell breaks down why people are afraid of AI and how those fears can be addressed.

As AI technology takes center stage in today’s digital transformation discussions, we wanted to dig into the often-overlooked human side of AI implementation. Managing employee concerns around AI is crucial for successful adoption, so Christen Bell, an industrial psychologist specializing in AI’s impact on the workplace, joined Clinton to unpack the complex emotions, fears, and management approaches needed for responsible AI adoption.

Understanding the roots of AI anxiety

AI’s rapid integration has sparked widespread anxiety among employees, who are often uncertain about its impact on their roles and job security. Bell explains that many organizations overlook these concerns, leading to resistance and reduced trust.

Bell notes that AI fear often stems from lack of information or misinformation. Some employees feel uninformed and fearful of the unknown, while others hold misconceptions influenced by media portrayals or social narratives. Addressing these varied concerns requires a well-informed, empathetic approach from leaders.

Key AI attitudes in the workplace

Through her research, Bell has identified six distinct attitudes toward AI, defined by a person’s level of understanding of the technology (uninformed, misinformed, or informed) and their sentiment toward it (pro- or anti-AI). Three of them are “anti-AI” attitudes:

  1. Uninformed and Anti-AI: Lacking exposure to AI, these employees are unsure of its purpose and capabilities and feel threatened, which heightens their fear of the unknown.
  2. Misinformed and Anti-AI: This group believes they understand AI but often holds exaggerated beliefs about its power, influenced by media portrayals of AI autonomy or consciousness.
  3. Informed and Anti-AI: Though knowledgeable, this group remains skeptical or fearful, voicing concerns around ethics, privacy, and potential job displacement.

Building trust through education and transparency

AI literacy and transparency are key strategies to alleviate fears. Educating employees on how AI models function, what data they use, and how outcomes are generated helps demystify the technology. Starting with data literacy and advancing to basic AI concepts, such as machine learning and neural networks, enables employees to understand AI’s limitations and benefits.

Transparency is equally crucial. All AI-generated content should be clearly labeled, preventing confusion and building trust. This labeling helps employees understand AI’s applications and limitations, countering the “uncanny valley” effect, where AI-generated content can feel unsettling if not disclosed.

Creating a trustworthy AI adoption strategy

For successful AI adoption, companies need proactive change management. Bell recommends assessing employee sentiment, creating AI training programs, and establishing open communication channels. A knowledgeable, credible leader should champion the AI rollout, addressing concerns with empathy.

Safe feedback mechanisms, such as anonymous channels, are essential, as employees may hesitate to raise concerns in a public forum. These channels foster transparency and inclusion, helping employees feel informed and supported.

A human-centric approach to AI

Bell’s insights underscore the need for thoughtful, human-centric AI adoption. By focusing on empathy, education, and transparency, companies can foster a culture of trust, ensuring that AI enhances rather than disrupts the workplace, ultimately aligning AI innovation with a positive employee experience.

As always, don’t forget to subscribe to Catalyst wherever you get your podcasts. We release a new episode every Tuesday, jam-packed with expert advice and actionable insights for creating digital experiences that move millions.

Episode hosts & guests

Christen Miller Bell

GenAI Community of Practice Leader
NTT DATA

Episode transcript

Clinton Bonner: I was like, ha-wha... What? Like, we have an expert on this thing?

Christen Bell: (Laughs)

Clinton: So, I was really thrilled to get to know you.

(CATALYST INTRO MUSIC)

Clinton: Welcome to Catalyst, the Launch by NTT Data podcast. Catalyst is an ongoing discussion for digital leaders dissatisfied with the status quo, and yet optimistic about what's possible through smart technology and great people. Joining me in studio today is Christen Bell, an industrial psychologist who has been researching how people react to the presence of intelligent systems in the workplace for the last ten years. She's doing her postgraduate research on the fear of AI and how it shows up. She's the lead of the AI Community of Practice for our consulting arm here at NTT Data, and I am thrilled to have her on with us today with this big, big topic. Christen, how are you doing down there? I know you're in Charlotte. How are you? How is Charlotte? What's it like down there?

Christen: It's beautiful. It is cloudless, blue sky, 70 degrees. It's perfect. But more than that, I have been a long-time listener, first-time joiner. Right? So I'm so excited to be here, Clinton.

Clinton: Perfect. We can maybe take live calls today. We'll see. We'll see how this goes.

Christen: (Laughs)

Clinton: So, we are so excited for you to be joining us so we can unpack some of this significance around this real fear of AI and what's going on right now. So, AI is on every single boardroom in the enterprise. They gotta figure out what they want to do with AI. And we're going to talk a lot about, hey, what about the human side? That there is real fear out there. And you've been studying this for quite a while, which is super cool. So, first things first, how the heck does one get on to this topic? Yes, we know it's a hot topic, but how do you say, that's it. I want to discover and go deep into the fear of AI. How did you get there?

Christen: Sure. Well, first things first. I think I'm a bit of a victim myself. I studied, in undergrad, romance languages. So I went and took, like, four different foreign languages, thinking that I would have this really sexy job as a translator. And suddenly, digital translation services got real hot. And I started to realize that maybe the job that I anticipated having for myself wasn't really going to be there. So, that's really the first... First thing I saw. Second, what's interesting is, about ten years ago, I think, NTT Data had a really large financial industry client. And the CFO of a branch of this bank had heard about RPA technology and felt like his accounts payable department might be a really great place to test and implement the technology. And so their team was really large, maybe 100 people working in accounts payable. And I'll paint this demographic for you. It's largely women, most of whom had been doing this job for over 15 years. Many of them only had, like, some kind of certificate, maybe an associate's degree in this area. Very few of them actually had bachelor's degrees. And many of them also still carried trauma from the Great Recession. A lot of them had been laid off. A lot of them had been unemployed for a very long period of time during the recession. And so they were carrying this trauma with them. And NTT was tasked to roll out the RPA, or robotic process automation, in tandem with the client. And the very first thing we noticed on the ground was this fear. You could hear it. You would hear people talk about it in the breakroom. And so, for the sake of time, I will bypass all of the things we did. I'll go straight to the point, which was that the client organization actually handled this beautifully. Honestly, I was shocked. I watched it go down and I was like, wow, this is really well done. So, in the end, this RPA tool replaced the work that about half of the AP specialists were doing. And on that deployment, the client did a fantastic job of making sure that the employees were communicated with often, that they knew exactly what was happening and why. They really focused on the talk track of the increased accuracy of the system, so that everyone understood how it would help them. And then the client also took a really great deal of care in managing the people throughout the process. So, in the end, you know, some took on new jobs at the same band level, filling requisitions that were already open. The client already had a robust tuition program in place, so HR made sure that people knew how to utilize that tuition stipend. And so, some people went and attended coding boot camps, and then they came back into the group and they were programming the bots. There were other people that went and got certifications in process design. They came back and did the processes for the bots to follow. And so, in the end, it was only, like, less than 8% of people in that 100-person group that actually left the organization completely. And so, to me, I watched that, and it really got my wheels turning. I started to ask myself the following questions: You know, what happens when technology is more advanced than this?

Clinton: Mhm. Yep.

Christen: Because I knew it was coming, right? What happens when an organization isn't as human-centric, and the employees are treated like they don't matter? Or, in a more drastic manner, what happens if the people refuse? What if they revolt? What if they unionize against the changes? What then?

Clinton: A great tale there, for sure. That one has a happy ending. In that sense, okay, about 8% of people were displaced ultimately, but it sounds like they did the very best they could to take care of their people, and most people either upleveled or figured it out in one way or another.

Christen: Yeah.

Clinton: And, look, technology... It's not like it's going to stop, right? That's one of the things, I think, that people hold a fallacy about, or just... It just never has. You look back throughout the history of humans, we just... If there's something that's going to advance... Something, not to be so obtuse about it, but it marches forward. Whether you like it or not. And then, I think a lot of what we're going to talk about today is, how is AI potentially different? And why, and what kind of fears are being stoked now...

Christen: Yeah.

Clinton: ...That maybe, about a decade ago, weren't quite there yet? And is this displacement potentially more aggressive, slash larger, for more, to impact more humans? And I think those are all real things. And I think burying your head in the sand and not talking about them is probably the worst thing you could do at an enterprise level. I'd love to get your opinion on that.

Christen: Yeah, I think that...  A lot of people want to ostrich themselves, right?

Clinton: Yep. Mhm.

Christen: The proverbial bury your head in the sand thing. Progress is going to happen. Whether you want it to or not. Right? And so, if you stick your head in the sand, then what's going to happen is, when you finally decide to pop up, so much has happened that you can't catch up.

Clinton: Succinct. Thank you. Alright. It's good, I'd rather have it that way. So, you've been at this now... You've been studying this topic, the impact of AI and this idea of the fear around AI, for, you know, almost a decade now. You've had your ears and brain and heart kind of tuned a little differently earlier, being like, oh, I see something really, really coming, and if we don't prepare for this, it's going to wallop a lot of people. So...

Christen: Yeah.

Clinton: What are some of the key things? A decade's a long time. I would say you're probably well ahead of the average human when they put their thoughts around this. What are some of the biggest things that you've learned? And the second part, have things changed for you also that maybe you started, year one, year two, that, as tech advanced, as humans advance, that maybe some of those positions have even changed for you? So, a little bit of a two-parter.

Christen: I'd like to address the second question first.

Clinton: Sure.

Christen: You can see the change over time. Largely, ten years ago, people weren't that fearful because it wasn't that close.

Clinton: Right.

Christen: It was just this, like, topic that people talked about, that just wasn't really tangible to them. They didn't really understand what it meant. And there wasn't a practical use of it in their home. There wasn't something that they could point to. And so, largely, ten years ago, there just wasn't a whole lot of fear.

Clinton: Right.

Christen: That's really changed quite a bit now. And the fear is now ever-present, because people are starting to see AI pop up everywhere, right? It's embedded in their phones, it's embedded in Alexa, it's embedded in a whole bunch of tools. Now you go to Google, and instead of just doing a Google search, it's, you know, Google's AI tool. And it's everywhere now, and so, that's starting to spook people. What's interesting is, the data you see today, if you go searching for statistics on fear or mistrust of AI, you're going to see numbers all over the place. And so, at first you would say, well, that's really confusing. It sounds like, you know, people's opinions are all over the place. But there's actually trends in the data. I saw a stat recently saying that 53% of people are more concerned than excited about AI. And then you can juxtapose that: Google just did their Digital Futures Project, and that says that 68% of people who have used AI in the past year are excited about the future. Well, those statistics don't match, right? And so, if you dig in, and you kind of figure out what's going on with the trends, you see that people are both fearful and curious at the same time, and they're not mutually exclusive. They can be both at the same time. So you could be curious about what's going to happen, but also a little worried that it could go wrong. And the other thing I'm seeing is that, especially in corporations, when they're doing internal surveys to find out what people think about AI, they're getting these... what I'm going to call false positives. Essentially, people are speaking more positively in the workplace about AI than they are at home. And at home, when they're talking with their friends and their family in a psychologically safe environment, they're actually, you know, speaking up about the fears that they have that they're not talking about in the workplace.

Clinton: That, to me, is a sign of mistrust, also. Right?

Christen: Mhm.

Clinton: So even if you're taking a survey, even if it's probably anonymous...

Christen: Right.

Clinton: ...People just feel that, oh, I can't talk against the machine, type thing. Like, this is going to happen? Is that what you find in your studies?

Christen: Right, right. In the workplace, people don't want to be seen as, like, the Luddites.

Clinton: Yep.

Christen: You know, who's refusing to adapt. I think folks are kind of taking a little bit of an adapt or die mentality. And they don't want to be the one person that's not jumping on board, and they don't want to be seen as, like, difficult. And so, they are saying what they think the organization wants to hear. So. We can speak to that a little bit more when we talk about how my research pans out, how we segment people and things like that. That's really coming to the forefront in a couple of those segments.

Clinton: Yeah, yeah. Well, let's rotate over to that, too, because I know, within that research... You know, you don't want to paint with a broad brush when you're doing your research, and start bucketing humans. You know, it's like...

Christen: Yep.

Clinton: However, to organize data, and then to really... The ability to understand, are there patterns? Is this noise, or are there patterns? Well, you do have to make some decisions, and you do have to put people into representable categories that you believe they do fall into for certain criteria. And I think that's what you've done quite a bit of in your research, too. So take us through that, please.

Christen: Sure. So I have found that people typically bucket into about six categories. Those six categories are, you're either uninformed, you're misinformed, or you're informed. And then from there, you're segmented into pro-AI and anti-AI. So, since we're here to talk about fear, I'm just going to cover the anti-AI ones. So, let's talk about the uninformed anti-AI. So, these folks are very prevalent. I'd say the trend is looking like 25 to 30% of the population right now. And it's not an age thing here. There does seem to be a correlation with those who aren't in a corporate working environment. So, think, like, retail workers, restaurant workers, stay-at-home parents, retirees, folks like that. People who are not sitting in corporate America. And I think that the causation here is that most folks in corporate America are getting their information from company-led AI training. So, people who don't have access to that naturally fall into this uninformed category. And that leads us to this massive fear of the unknown. I think it was Yvon Chouinard, the founder of Patagonia, who is credited with saying that fear of the unknown is the greatest fear of all. And I think that's really prevalent here. So, in consulting, as a consultant, right? That's what I do. When I'm rolling out a new AI solution, we typically have to address this unknown factor first. And so we do that by slowly trickling information to the masses. So, what this looks like is, getting the layperson to understand the world of intelligent systems is kind of like asking someone whose math knowledge ended at pre-algebra to suddenly start doing differential equations.

Clinton: Yeah.

Christen: It's... It's a big gap, right?

Clinton: Huge gap, yeah.

Christen: And so, you've got to bring them along on the journey and start small. So we tend to start with data literacy first. And that's where the students learn about data cleanliness, data stewardship, understanding data sources and constructs, so that they then can identify, interpret and act on data within a business context in order to influence, you know, business value or an outcome. So that's step one. Then, we tack on intelligent systems knowledge. And so, then we call that AI literacy training. And there we teach what machine learning is, what deep learning is, what a neural network is. What's a generative adversarial network? Things like that. And more importantly, we teach them what data these tools access. So we build on that data literacy training. We teach them, then, how the data is used within these advanced systems. So, what are the models trained on? And this is really crucial for people to be able to use AI technology effectively. So, with this knowledge, they can effectively trust the system that they're using, because they're at least in the know of, you know, what data it has access to. But they can also identify possible hallucinations in the output, as well as understand, you know, how to get the most out of their queries.

Clinton: Okay. So that's one... You describe...

Christen: That's one. (Laughs)

Clinton: You described, basically, six segments and we're not going to cover them all. But, like I said, we are focusing on the fear of AI. So I believe that was the uninformed and afraid, basically, right?

Christen: Yes. Uninformed and afraid. Yes. So then, our next category is misinformed and anti-AI, or afraid.

Clinton: Right.

Christen: And this group is fun. This is a fun one to talk about. Because you get some wild responses in here that get pretty funny. There's another quote that I love which is that we don't fear the unknown, we fear what we think we know about the unknown. And that's really relevant here. So, in this category lies the people who feel like they know enough about AI to make an informed decision. But then when you ask them where they get their information from, you find out that it's not the most credible sources. And right now, there's a lot of disinformation happening around AI. And especially if you're using Reddit or X, formerly known as Twitter, or if you're listening to political pundits, things like that, there's just a lot of information that's plain wrong. And you'll hear things like, exaggerated capabilities, that AI has human-like consciousness, or that AI has attained the same level of human thinking, which is interesting, because even neuroscientists don't know how humans think, so, like, how did AI figure that out? But... People are starting to see a presence of deepfakes or AI-driven disinformation campaigns. And so they attribute all of AI to what they've seen. And so, many have conspiracy theories that they've attributed to AI. And all of this is misinformation.

Clinton: Right.

Christen: In addition to that, you have this subset where people's misinformation is from Hollywood.

Clinton: Mhm.

Christen: People see Terminator or Westworld, or...

Clinton: Yeah.

Christen: And they use that to conceive their notions around AI. So, with this group, your task is a bit larger than the previous one. Because now you have to find out what these crazy notions are, and then you have to dispel them. And then you go through the same steps of data literacy training and AI literacy training. And then, what you get to at the end, if I keep this Hollywood analogy going, right? Is that you are able to pull back the curtain and prove that The Wizard of Oz isn't some big, you know, scary, omniscient, omnipotent being, but rather it's just the man behind the curtain with some fancy buttons to press, right?

Clinton: How hard is that to prove when things are evolving so quickly? Things are rapidly changing, and you do hear stories out there that, oh, you know, a certain system took over and gained access to a satellite. Like, those stories are out there, and you're like, okay, well, what do I believe? Where is this coming from? And is this real? And, how dug in are many of the people in that particular category you're talking about?

Christen: Yeah, this is like a psychologist's constant argument. People always want to talk about the negative thing more than anything positive.

Clinton: Sure.

Christen: You see this in, like, restaurant reviews, things like that, right? People are 20 times more likely to post a negative review than they are a positive review. And so, someone who comes across reviews of a restaurant is going to see, just, a skewed data set because of that. And so, that's one of the talk tracks that we use, which is, yes, of course you're going to hear about the times that it goes wrong. But then, sit and think about all those times you've heard about it going wrong. For every time you've heard something wrong, there's probably 100 times where it went right. And that's what they don't see, is the 100 times where it went right. Instead, they just hear about, you know, Tay, the racist Twitter bot. They think that everything is going to end up like that. So, it's a tough job to go on a, like, misinformation campaign to try and right people's conceptions. But it's possible. For sure.

Clinton: And I think, I would imagine it takes a while, and it takes consistency, which is all a part of, like, the change management that you and the team lead at NTT Data, again, specifically for this absorption in the use of AI. I guess the other piece for me that I think is worthy of discussing is that, with other technologies in the past, there might not have been this sense of... Or maybe there is, but from my perspective, not this sense of, like, existential threat. That, hey, if things go really bad, let's say they're misinformed, right? Let's just say they're misinformed. But they're skeptical of, hey, if things really go bad, this could be, like, World War Three, WarGames type stuff. We had a movie about this in the 1980s with Matthew Broderick, it was a wonderful movie. And the WOPR.

Christen: (Laughs)

Clinton: And then the machine basically runs a scenario, because he hacks it, and he almost starts World War Three, right? It's a pretty fun movie, actually, I think Ally Sheedy, and I forget, some other folks. But... Great movie. What about that aspect of, like, look, the threat could be so pitched. What does that do to the, A, the fear? And then how do you... In an enterprise environment, how do you talk with a skeptic of it who maybe doesn't feel like they want to contribute to it? Do they feel they are contributing to it if they kind of, quote-unquote "play along"?

Christen: Yeah. So, the existential threat thing is very real to a lot of people. There's actually a term for this called x-risk, for existential risk. And... I would say that in this case, a lot of the existential threat that people think they are feeling is not actually ever going to happen. I saw a statistic just yesterday, and I don't know where it came from, and I don't know if it's real, but it was that 85% of the things you expect to happen aren't going to. And I think in this case, there are some very real concerns that people have. You know. And then there are some just absolutely blown out of the water concerns. That people think that this is going to lead to some global war, or it's going to lead to food shortage, or things like that, or that the human race is going to get wiped out. We're going to end up in a WALL-E situation where the only beings living on Earth aren't living...

Clinton: Right.

Christen: ...And they're robots. Things like that. So again, information is power here. And this is one of those things that, when the people perceive an existential threat, that's a little... It's a little robust, making sure that you really talk it through with them. Make sure that they have all the right information. Asking them where they've gotten their information from, making sure that they're focusing on credible sources, things like that, is the best route you can take.

Clinton: Got it. To talk about it a little bit, I think, is at least worthy, putting a little light on there, versus, again, saying, oh no, out of 100 people in the enterprise, you're not going to encounter anybody who feels that this is a threat, who doesn't want to contribute to it in one way or another.

Christen: Yeah.

Clinton: But I do know that we have another category too. Just doing the math here, that's the category of folks who are deemed informed, but still anti-AI. So let's talk about them, please.

Christen: Yeah. So, I mentioned the existential threat part is, like, a two-part thing. And you've got people who are misinformed and they perceive an existential threat. And then you have people that are actually quite well-educated, they're very informed, they've done their research. And so you have this subset. It's not a lot of people. If you're looking at the greater population, there's not a whole lot of people who are very informed and then decide that they're, like, holistically anti-AI. Because they're afraid of it. With this group, there's not a whole lot we can do to remove the fear or the mistrust. They've done their homework, they've made up their minds. And this group actually tends to be vocal. So, it may seem like this group is comprised of more people than really exist, but it's more like, they're just noisier than the others. This is an important group, because they're the squeaky wheel that's getting the oil. And they're important because some of their pushback on this is helping us get better regulation and oversight in this area. This is a topic that they see having a big trickle-down effect. Maybe they've followed futurists like Vernor Vinge and Ray Kurzweil that talk about something called the Singularity...

Clinton: Yep.

Christen: ...Which is a hypothetical date at which AI outstrips human intelligence and machines take over. You know, Ray Kurzweil wrote a book called The Singularity Is Near, and then he recently published one called The Singularity Is Nearer. And so, he's trying to warn people that this is coming. Again, take it with a very large grain of salt. But some people really adhere to his tenets and believe what he's saying there. There's other people in this group who are maybe worried about more than their job security, and they see it as, like, a threat to future generations. They see that, maybe, the lack of ethics and lack of regulation around it right now are something about which they should be very concerned, and they see it as kind of being the wild, wild West, and people could do whatever. And so, they're worried about what the future looks like there. And then... I've heard some people talk about the lack of privacy. They fear that in the future, you know, privacy is going to go away completely. Altogether. That you're constantly going to be monitored wherever you are, wherever you go, even in your home. What you're doing, what you're thinking. You know, those folks think that AI is one of the causes of that lack of privacy. And so, because they attribute that to AI, then suddenly they're fearful of it.

Clinton: Let me dig in there for a second, too.

Christen: Yeah.

Clinton: Because I think what's interesting there is... Look, we've got different parts of the globe, which has always been the case, but, evolving very differently, right? So, I hear, I'll speak for myself, that there are things like social scoring that are prevalent and gaining steam in different parts of the world. China being one of them, right? Where, if you score a certain way, good or bad, you get certain social benefits. Like, can you get on this train or not?

Christen: Right.

Clinton: Like, pretty serious things. And so, I can understand...

Christen: Yeah.

Clinton: ...As somebody who grew up in New York my entire life, and as an American being like, whoa, whoa, whoa, no. Like, whether or not I drink a little alcohol or do certain things, I don't want some algorithm God telling me whether I can or cannot have a certain right. Now, the thing for me, though, would be... But is that AI? Or is that just data science anyway? Like, that part to me is, like, the leap over to say, okay, well, AI is driving that? Versus, if they want to go do that, I'm not sure if AI had to be the engine to drive that, Christen. So I wonder if that's just a leap too far to say, it's not really AI there. I just wonder your take.

Christen: Sure. Some of the facial recognition technology that they're using in China is AI-powered. They've gotten some very, very negative publicity for some of their practices, including using facial recognition technology to target certain minorities in Western China. And so, some of that is AI-based. But some of it is, to your point, that some of it's machine learning, some of it is just very large data science.

Clinton: Yep.

Christen: Right? And, you're right. I think the point that you're making here is that AI kind of gets the bad rap. I was listening to one of my favorite AI ethicists this morning, Doctor Chowdhury. She was the lead AI ethicist at Twitter, and she was saying that, I wish that it hadn't been called Artificial Intelligence. She's like, the name of it is just... She was like, I wish we had called it something else because I don't think it would get such a bad rap. And I agree. Even to go back to that RPA case study, we didn't call it Robotic Process Automation. We called it Rapid Process Automation, because just the change of the name made it less scary to those people. Yeah, so I think it's getting a bad rap.

Clinton: Yeah, I think on that point too, maybe it's just Apple being geniuses again with marketing. Like...

Christen: (Laughs) Yeah.

Clinton: They don't call it artificial intelligence now, right? They call it Apple intelligence. And maybe that is...

Christen: That's right.

Clinton: I mean, yes, it's branding, of course, and they're masters of that. However, this also might be very largely part psychology. And I wonder if you think that's in play.

Christen: Yes, absolutely. I have been coaching my clients for years to reconsider the branding of things. And, I mean, that first time, calling it Rapid Process Automation instead of Robotic, was the first time we really saw how powerful a name could be.

Clinton: Yeah.

Christen: Even calling something, just the broad term of intelligent systems, that's kind of my favorite term. Because something about artificial just gives people the ick.

Clinton: I mean, artificial flavors, it's just, it's a...

Christen: Right!

Clinton: Gets your shoulders and ears up real quick, even if you weren't paying attention, going like, well, why would I want anything artificial jammed into my, you know, perfectly balanced nature of body, mind and soul, right?

Christen: (Laughs) Right.

Clinton: It's an interesting topic that the naming convention of that can really just put people in a position, right out of the gate, to uplevel or increase mistrust before, even, anything factual has happened, right?

Christen: Right.

Clinton: So, that'll be something to watch, too, to see if there is a bit of a pivot. And I did want to ask you, you're studying this for all this time. What about, like, demographic trends?

Christen: Yeah.

Clinton: Are they prevalent? Are they there? Are there breakdowns of age, gender, anything like that that your research has proven out?

Christen: Yeah. It's not what you would expect. I have to admit, I went in with kind of a pseudo-hypothesis that there would be a strong generational trend. And so far there's not that skew, which is interesting. There's other trends emerging. I mentioned earlier, the trends of those who are uninformed, anti-AI tend to not work in corporate America, right? So, that trend has bubbled up. Another factor that has come to the fore is country cultures. Pretty drastically, actually. There's really big trends that India and China are leaders in the trust of AI, so... Trust and fear are kind of hand in hand. And so, people living in India and China have the least amount of fear of AI. And I think that's reflective of a bunch of things happening in their specific regions at the moment. A lot of advancement of technology is happening in both of those countries. There's a lot of push. And a lot of people are employed in technology in both of those countries. And so, I think that's why you see that trust happening there. And then, if you ask, you know, if AI has more benefits than drawbacks, like, 78% of Chinese respondents agreed, compared to only 35% of Americans. The data is showing that a lot of English-speaking countries are the most distrustful of AI. So, Australia, UK, US, tend to be lagging behind everybody else as far as technologically advanced nations. You know, English-speaking countries are the back of the pack there.

Clinton: Yeah. Interesting.

Christen: And then there's also an industry trend as well. We're seeing that manufacturing and human resources have the highest level of distrust and fear around AI compared to other industries. And you could unpack that and probably figure out why. HR has a really big risk of bias. You know, the risk/reward in HR is really high. And so... And then in manufacturing, a lot of the folks that work in manufacturing just, back to the... You're scared of what you don't know. That's showing up there big time.

Clinton: So, how about when we are looking to... Well, when you and team are looking to get this going and get adoption inside an enterprise? I'd imagine one of the bigger things is, that whether they are fearful of it or not, there's going to be some, hey, how do we do things in a transparent and accountable way? And I realize...

Christen: Yeah.

Clinton: ...That probably pitches fear when they think it's not. Or, it actually is not, right? So, that's a use case, too. So, when you're consulting, how do you do your best to get transparency and accountability so that the trust level goes higher and fear gets reduced?

Christen: So, there's two components at play here. People are hearing messaging in their workplace, but people are also getting messaging outside of their job, right? And so, one of those things you can control, and the other thing you can't. So, I can't control what people hear about ChatGPT, or, you know, what they learn about using Grok or any of those things. But I can control the messaging that comes from a corporate level. So there's a couple of things. You have to have really active and visible sponsorship from a sponsor that is credible, who is in a position of power, that knows what's going on. Someone that people listen to. And they need to be in it. I mean, they can't just be the person in the high castle that's like, oh, hey, we're rolling out AI. They need to be alongside the rollout, with the team, knowledgeable about what's going on. And they need to be the voice of the transformation. There needs to be a lot of training. There also needs to be a mechanism for people to voice their concerns, and not in a town hall. Not in a town hall. Nobody wants to be the person that's like, hey, I'm worried about AI bias in front of, like, 300 of their peers.

Clinton: Right. Well, you said earlier like those people tend to be, I don't want to be the Luddite, like you said.

Christen: Right! Exactly.

Clinton: So, I'm not going to speak up, especially in that environment, right? So...

Christen: Yeah. So, you need, like, a safe mechanism for people to say, like, hey, I want to talk to somebody about this. So whether it's a, you know, an anonymous tip line, or something like that, people need to be able to voice their concerns. And they need to feel like it's okay to do that, and they're not going to have any sort of retribution for doing so. And then, outside of the workplace, there is some very, very strong data that the majority of people want anything that is generated by AI to be labeled as such, right? So, if you see a video or you see an image, they want it to be labeled. So, I'm living in North Carolina, we recently were struck by Hurricane Helene. And some really terrible images came out from the storm, that were AI-generated.

Clinton: Geez.

Christen: Things like... Yeah. A girl crying, holding a puppy, things like that. And people believed it. They thought that this was, like, an actual image that came out. And a lot of people, when they realized it was AI-generated, they were shocked. They were like, wait, why would you do that?

Clinton: Right.

Christen: And so, there's a lot of people that want there to be, like, some kind of disclaimer that shows that this is AI-generated. The same with, you know, text that's generated by AI, they want there to be some kind of disclaimer that says that this is generated by AI. And so, doing that does two things. To your point, it's transparent. Right? So it helps us get around the uncanny valley effect. And then the other thing is, it helps those who still create without the help of AI to prove their mettle in a time when they're scared that they're going to be rendered obsolete.

Clinton: Well, the uncanny valley part is interesting, because this idea is usually applied to, you know, humanoid robotics and they're not quite there, right? Or something trying to be human and it's not quite there.

Christen: Mhm. Yep.

Clinton: The newer challenge is, you just mentioned that photo of the girl holding a puppy with floods in the background.

Christen: Yeah.

Clinton: I saw that, I don't know, maybe 2 or 3 weeks ago now. Until just now, I was... I was today years old when I found out that was AI-driven.

Christen: (Laughs) Yeah.

Clinton: I didn't know that. Now, there were people suffering just like that, but that's a... That's an AI-generated image? I didn't know that either. So, my point being, is the uncanny valley... It's collapsed. That seemed to be a very real photograph. And deepfakes and things like that...

Christen: Yeah.

Clinton: And that, again, is just, that pitches the fear of, well then, how the heck can we trust this, right? So I think it's a fascinating place to be. And I also really feel for you and folks that are in this to say, okay, we've got our work cut out for us, right? This is something that, at a C level, if you're not thinking critically about the organizational change management of getting AI in, getting adoption and doing this in a really, really robust and smart way, you're going to encounter pitfalls and things you just weren't thinking about. And I do wonder, like... When you're talking with clients, how many of them are, like, really in tune with that and believe that, versus those who just aren't thinking that way yet, that they actually have to get the change management aspect really right? Otherwise, it's going to be some tough times for enterprises.

Christen: (Laughs) That's a great question. I feel like this is the talk track that I am just pounding to death, but it's so, so important. Right now, this astounding number... It's like 70 to 85% of organizations that have deployed generative AI are not getting the results they anticipated. They jump in, they're moving fast, because AI is moving fast, so they think they've gotta move fast, right? So they just jump in with both feet, deploy some tool. Six months later, they look around and they go, wait a second, this is not doing what we thought it was going to, or like, people aren't using it as much as we thought. And then they look around, and they say, well, maybe we should get some consultants in here to figure out why it's not working.

Clinton: (Laughs) Right.

Christen: And so, they bring in teams like my team to figure out why. And, I mean, this is really what I do. And so we come in and we help them figure out why. And the answer is largely, well, you did not do any preemptive change management. Right? You didn't plan for the change. You didn't do any sentiment analysis on the front end. You did not stand up an AI governance and ethics council or committee on the front end, which is something that's huge for establishing trust and allaying that fear that people tend to have. A lot of times they just jump straight into AI tool training. So, a really great example of this is, like, I just rolled out Copilot training. Well, do people know how Copilot works? No. So, did you do any sort of preemptive training to teach them what Copilot is and what it's trained on? What it can use? When that little toggle in Copilot says, like, work versus web, do people know what that really means? No. They don't. And so, they're not using it in the most effective way, because all of these steps were skipped.

Clinton: The thing that goes along with it also is, the lack of change management, which you're talking about. Really lovely there. And then I would also say, you said it, too. They're charging at it because there's such froth around it, that they feel they have to do something, and they're grabbing a tool. And then saying, you know, magic, presto, give us 300% productivity gain.

Christen: Right.

Clinton: Versus a... Just a more methodical look, to say, okay, let's blend change management. Let's do that right. But can we look at AI for what it could actually solve at our enterprise? Like, what are the use cases that actually matter to, either our employees, our end users, our constituents if you're in public sector?

Christen: Right.

Clinton: And kind of get out of the, what I think is this froth of, grab a tool, throw it in and magic.

Christen: Yeah.

Clinton: Versus critical thinking on, how can this be applied for our people? Again, employees or end users, customers, et cetera, et cetera. There's a lack of both. There's a lack of planning with change management, and there's a lack of criticality of, how could AI be applied for our problems, versus generically putting a tool in place and expecting magic? Is it two sides of the coin for you also?

Christen: Yeah, and I love hearing your lens of that. I can see your, like, customer experience...

Clinton: Yeah.

Christen: ...Come bubbling up there. Yeah. And, from a psychology standpoint, it's a really unique problem to sit and look at, because most organizations are always going to go with the proof of concept that leaders think is going to earn them the greatest ROI. And what's interesting is, you're kind of posing this other situation, which is, let's go with the proof of concept that is going to be something that people want the most. And so, such an interesting thing there, you... If I blow this out, right? The users get something that solves problems for them, right?

Clinton: Yes.

Christen: Your employees now have a tool that is useful, and it takes away pain from their daily lives. And so suddenly, now, you have associated the use of AI with something great, right? And so, in their minds, they're like, oh, this tool is fantastic. Yes, I want to use this. And so then, additional rollouts of other proofs of concept are going to be much more successful, because of that perception that the employees have of the tool. Yeah, I love that approach. It's really hard to convince your C-suite to take that route.

Clinton: Yeah. Interesting.

Christen: Because a lot of times, the tools that people really, really want might not be the ones that are going to prove to have the greatest ROI on paper.

Clinton: Right. And that's exactly it. It's an on-paper thing. Because it can be difficult to project that accurately, especially when you're looking at, like, emerging technologies, like we're talking about here. Any last bits, you know, advice, anything you want to leave the audience with, or how to contact you, whatever it might be?

Christen: Yeah. So, if I could give any advice to our listeners, it's to be smart about this. And what does that mean? It means make sure you educate yourself. If you are listening and you happen to be one of those people that falls into the, you know, not corporate America... Or maybe your organization hasn't done any data or AI literacy training, those exist. You can find YouTube videos on that. But learn. Learn what it is so that you can make an educated decision on how you really feel. And then the second thing is, knowing what's happening gives you a voice that you can use to speak up. And your voice can help us create protections, regulations, create awareness around things and ensure that the things that you might be fearful of don't happen. And so, my second ask is for people to use their voice. You know, speak up. If something... If you don't like something, then share it and see if it can get changed.

Clinton: Love it. Let's drop it there. But also, Christen Bell, C-h-r-i-s-t-e-n B-e-l-l. If folks want to connect with you to ask questions, to find you out on LinkedIn, is that the best way to get you?

Christen: Yeah, you can find me on LinkedIn. I think I'm listed as Christen Miller Bell. I've got my maiden name there.

Clinton: Gotcha.

Christen: Or you can email me. It's Christen.Bell@NTTData.com.

Clinton: Perfect. Well, thank you so much. This has been a really interesting, and of course, a hugely topical discussion, because there is so much discussion and hype around AI right now. I appreciate you sharing so much of this insight and this understanding into the reasoning behind the fear of AI. I recommend anybody out there that is having these discussions at the enterprise level, whether you are not yet getting the benefits of AI, you're seeing adoption problems, or you're really about to go down that path of, how do we do this the right way for our enterprise? Reach out to Christen. She provided her email, and of course, hit her up on LinkedIn. Christen, thank you so much for being on the podcast. And, as always, I want to thank everybody for tuning in and listening to Catalyst. In this studio, we believe that smart technology and great people can create digital experiences that move millions. Join us next time on Catalyst, the Launch by NTT Data podcast.

(CATALYST OUTRO MUSIC)
