
From anxiety to advantage: addressing employees' AI fears

Catalyst Podcast
/
Feb 14, 2025

As AI technology takes center stage in today's conversations about digital transformation, we wanted to dig into the often-overlooked human side of AI implementation. Managing employees' concerns around AI is crucial to successful adoption, so Christen Bell, an industrial psychologist who specializes in AI's impact on the workplace, joined Clinton to unpack the complex emotions, fears, and management approaches that responsible AI adoption requires.

Understanding the roots of AI anxiety

The rapid integration of AI has sparked widespread anxiety among employees, who are often unsure how it will affect their roles and job security. Bell explains that many organizations overlook these concerns, which leads to resistance and eroded trust.

Bell notes that fear of AI often stems from a lack of information or from misinformation. Some employees feel uninformed and afraid of the unknown, while others hold misconceptions shaped by media portrayals or social narratives. Addressing these varied concerns requires a well-informed, empathetic approach from leaders.

Key AI attitudes in the workplace

Through her research, Bell has identified six distinct attitudes toward AI, defined by level of understanding and sentiment toward the technology. Three of them are “anti-AI” attitudes (a minimal code sketch of this two-axis segmentation follows the list):

  1. Uninformed and anti-AI: Lacking exposure to AI, these employees feel threatened and unsure of its purpose and capabilities, which heightens their fear.
  2. Misinformed and anti-AI: This group believes it understands AI but often holds exaggerated beliefs about its power, shaped by media portrayals of AI autonomy or consciousness.
  3. Informed and anti-AI: Though knowledgeable, this group remains skeptical or fearful, voicing concerns around ethics, privacy, and potential job displacement.
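
To make the two-axis model concrete, here is a minimal Python sketch. The labels and bucketing logic are illustrative assumptions, not Bell's actual research instrument:

```python
from dataclasses import dataclass

# The two axes Bell describes: level of understanding and sentiment.
# These labels are assumptions for illustration.
KNOWLEDGE_LEVELS = ("uninformed", "misinformed", "informed")
SENTIMENTS = ("pro-AI", "anti-AI")

@dataclass
class SurveyResponse:
    knowledge: str  # one of KNOWLEDGE_LEVELS
    sentiment: str  # one of SENTIMENTS

def segment(response: SurveyResponse) -> str:
    """Map a response onto one of the six (3 x 2) attitude segments."""
    if response.knowledge not in KNOWLEDGE_LEVELS:
        raise ValueError(f"unknown knowledge level: {response.knowledge!r}")
    if response.sentiment not in SENTIMENTS:
        raise ValueError(f"unknown sentiment: {response.sentiment!r}")
    return f"{response.knowledge} / {response.sentiment}"

# The three anti-AI segments described above:
for level in KNOWLEDGE_LEVELS:
    print(segment(SurveyResponse(level, "anti-AI")))
```

The value of even a toy model like this is that it keeps the two axes apart: an intervention that works for the uninformed (basic exposure) differs from one for the misinformed (dispelling specific misconceptions).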

Building trust through education and transparency

AI literacy and transparency are key strategies for easing fears. Educating employees about how AI models work, what data they use, and how outputs are generated helps demystify the technology. Starting with data literacy and moving on to AI fundamentals, such as machine learning and neural networks, lets employees understand both AI's limitations and its benefits.

Transparency is equally crucial. All AI-generated content should be clearly labeled, which avoids confusion and builds trust. Labeling helps employees understand AI's applications and limitations, and it counteracts the “uncanny valley” effect, where AI-generated content can feel unsettling if it is not disclosed.
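
As a sketch of what such labeling might look like in practice, the snippet below attaches both a human-readable disclaimer and machine-readable metadata. The field names are assumptions for illustration; a production system might instead adopt a content-provenance standard such as C2PA:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContentItem:
    body: str
    ai_generated: bool = False
    disclosure: dict = field(default_factory=dict)  # hypothetical metadata shape

def label_ai_content(item: ContentItem, generator: str) -> ContentItem:
    """Mark content as AI-generated, both for machines and for readers."""
    item.ai_generated = True
    item.disclosure = {
        "generator": generator,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    item.body += "\n\n[Disclosure: this content was generated with AI assistance.]"
    return item

draft = ContentItem(body="Quarterly summary of support tickets...")
published = label_ai_content(draft, generator="internal-llm")
print(published.disclosure)
```

The design point is that disclosure happens at creation time, before publication, so no downstream consumer has to guess whether a piece of content was machine-generated.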

Creating a trustworthy AI adoption strategy

For AI adoption to succeed, companies need proactive change management. Bell recommends assessing employee sentiment, creating AI training programs, and establishing open channels of communication. A knowledgeable, credible leader should drive the AI rollout, addressing concerns with empathy.

Safe feedback mechanisms, such as anonymous channels, are essential, since employees may hesitate to raise concerns in a public forum. These channels foster transparency and inclusion, helping employees feel informed and supported.
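
One hedged sketch of such a channel, assuming a simple internal tool rather than any specific product: the storage schema simply has no identity column, so anonymity is structural rather than a matter of policy.

```python
import sqlite3
from datetime import datetime, timezone

# In-memory database for the demo; a real tool would use a managed store.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE feedback (
           id INTEGER PRIMARY KEY,
           submitted_on TEXT NOT NULL,  -- date only, to reduce re-identification
           topic TEXT NOT NULL,
           message TEXT NOT NULL
       )"""
)

def submit_feedback(topic: str, message: str) -> None:
    """Store a concern with no user identifier and only a coarse date."""
    conn.execute(
        "INSERT INTO feedback (submitted_on, topic, message) VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).date().isoformat(), topic, message),
    )
    conn.commit()

submit_feedback("AI rollout", "I'm worried about bias in the screening model.")
print(conn.execute("SELECT topic, message FROM feedback").fetchall())
```

Coarsening the timestamp to a date is deliberate: in a small team, a precise submission time can be enough to identify the author.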

A human-centered approach to AI

Bell's insights underscore the need for thoughtful, human-centered AI adoption. By focusing on empathy, education, and transparency, companies can foster a culture of trust, ensuring that AI enhances rather than disrupts the workplace and, ultimately, aligning AI innovation with a positive employee experience.

As always, don't forget to subscribe to Catalyst wherever you get your podcasts. We release a new episode every Tuesday, packed with expert advice and practical insights for creating digital experiences that move millions.

https://rss.art19.com/episodes/2c32d5a5-b8bb-4217-b3dc-3fb4d21a799d.mp3
Episode hosts and guests
Christen Miller Bell
GenAI Community of Practice Leader
/
NTT DATA
Written by 
Catalyst Podcast


Episode transcript

Clinton Bonner: I was like, ha-wha... What? Like, we have an expert on this thing?

Christen Bell: (Laughs)

Clinton: So, I was really excited to get to meet you.

(CATALYST INTRO MUSIC)

Clinton: Welcome to Catalyst, the Launch by NTT Data podcast. Catalyst is an ongoing discussion for digital leaders dissatisfied with the status quo and yet optimistic about what's possible through intelligent technology and great people. Joining me in the studio today is Christen Bell, an industrial psychologist who has spent ten years researching how people react to the presence of intelligent systems in the workplace. She's doing her graduate research on the fear of AI and how it manifests. She's the AI community of practice leader for our consulting arm here at NTT Data, and I'm thrilled to have her with us today for this big, big topic. Christen, how's it going down there? I know you're in Charlotte. How are you? How's Charlotte? What's it like down there?

Christen: It's beautiful. Clear, blue skies, 70 degrees. It's perfect. But more than that, I'm a long-time listener, first-time caller. Right? So I'm so excited to be here, Clinton.

Clinton: Perfect. Maybe we can take live calls today. We'll see. We'll see how this goes.

Christen: (Laughs)

Clinton: So, we're really excited you're joining us so we can unpack some of the meaning around this real fear of AI and what's happening right now. So, AI is in every corporate boardroom. They have to figure out what they want to do with AI. And we're going to talk a lot about, hey, what about the human side? That there's real fear out there. And you've been studying this for quite some time, which is super cool. So, first things first: how the heck do you arrive at this topic? Yes, we know it's a hot topic, but how do you say, that's it, I want to discover and dig into the fear of AI? How did you get there?

Christen: Sure. Well, first things first. I think I'm a bit of a victim myself. I studied Romance languages as an undergrad. So I went and took, like, four different foreign languages, thinking I'd have this really sexy job as a translator. And all of a sudden, digital translation services got really hot. And I started to realize that maybe the job I had anticipated having wasn't really going to be there. So, that's really the first... The first thing I saw. Second, what's interesting is that, about ten years ago, I believe it was, NTT Data had a really big financial-industry client. And the CFO of one branch of this bank had heard about RPA technology and felt that his accounts payable department could really be a great place to test and implement the technology. And so his team was really big, maybe 100 people working in accounts payable. And I'm going to paint this demographic for you. It's largely women, most of whom had been doing this job for more than 15 years. Many of them only had, like, some kind of certificate, maybe an associate degree in this area. Very few of them actually had bachelor's degrees. And many of them were also still carrying the trauma of the Great Recession. Many of them had been laid off. Many of them had been unemployed for a very long period of time during the recession. And so they carried that trauma with them. And NTT was tasked with implementing the RPA, or robotic process automation, in tandem with the client. And the first thing we noticed on the ground was this fear. You could hear it. You'd hear people talking about it in the break room. And so, for the sake of time, I'm going to skip over all the things we did. I'll get right to the point, which was that the client organization actually handled this beautifully. Honestly, I was amazed. I watched it go down and I was like, wow, this is really well done. So, in the end, this RPA tool replaced the work that about half of the AP specialists were doing. And in that rollout, the client did a fantastic job of making sure employees were communicated with frequently, that they knew exactly what was happening and why. They really focused on the talk track of the system's greater accuracy, so everyone understood how it would help them. And then the client also took great care in managing people throughout the process. So, in the end, you know, some took new jobs at the same band level, filling requisitions that were already open. The client already had a solid tuition program, so HR made sure people knew how to use that, how to use that tuition stipend. And so, some people went and attended coding bootcamps, and then came back to the group and were programming the bots. There were other people who went and got certifications in process design. They came back and built the processes for the bots to follow. And so, in the end, it was only, like, less than 8% of the people in that 100-person group who actually left the organization entirely. And so, for me, I saw that, and it really got the wheels turning. I started asking the following questions: You know, what happens when the technology is more advanced than this?

Clinton: Mhm. Yeah.

Christen: Because I knew it was coming, right? What happens when an organization isn't as human-centered, and employees are treated like they don't matter? Or, more drastically, what if people refuse? What if they rebel? What if they unionize against the changes? What then?

Clinton: A great story there, for sure. That one has a happy ending. In the sense that, okay, about 8% of the people were ultimately displaced, but it seems like they did their best to take care of their people, and most people leveled up or figured it out one way or another.

Christen: Yeah.

Clinton: And, look, technology... It's not like it's going to stop, right? That's one of the things, I think, people hold a fallacy about, or just... It just never has. You look back across human history, we just... If something is going to advance... Not to be so obtuse about it, but it marches forward. Whether you like it or not. And then, I think a lot of what we're going to talk about today is, how is AI potentially different? And why, and what kinds of fears are being stoked now...

Christen: Yeah.

Clinton: ...That maybe, a decade ago, weren't quite there yet? And is this displacement potentially more aggressive, a bigger bar, for more, to impact more humans? And I think all of those are real things. And I think burying your head in the sand and not talking about them is probably the worst thing you could do at an enterprise level. I'd love to hear your take on that.

Christen: Yeah, I think... A lot of people want to ostrich themselves, right?

Clinton: Yeah. Mhm.

Christen: The proverbial bury-your-head-in-the-sand thing. Progress is going to happen. Whether you want it or not. Right? And so, if you stick your head in the sand, then what's going to happen is, when you finally decide to pop up, so much has happened that you can't catch up.

Clinton: Succinct. Thank you. Good. I'd rather have it that way. So you've been at this now for... You've been studying this topic, and then the impact of AI, this whole sphere around AI, for, you know, almost a decade. You had your ears and brain and heart tuned a little differently early on, being like, oh, I see something really, truly coming, and if we don't prepare for this, it's going to catch a lot of people off guard. So...

Christen: Yeah.

Clinton: What are some of the key things? A decade is a long time. I'd say you're probably well ahead of the average human when it comes to putting thoughts around this. What are some of the biggest things you've learned? And the second part: have things changed for you, too? Positions you maybe started with in year one, year two that, as the technology advanced, as humans advanced, maybe some of those positions have even shifted for you? So, a bit of a two-parter.

Christen: I'd like to tackle the second question first.

Clinton: Sure.

Christen: You can see the change over time. Largely, ten years ago, people weren't as afraid because it wasn't as close.

Clinton: Right.

Christen: It was just this, like, topic that people talked about, that just wasn't really tangible to them. They didn't really understand what it meant. And there wasn't a practical use of it in their home. There wasn't something that they could point to. And so, largely, ten years ago, there just wasn't a whole lot of fear.

Clinton: Right.

Christen: That's really changed quite a bit now. And the fear is now ever-present, because people are starting to see AI pop up everywhere, right? It's embedded in their phones, it's embedded in Alexa, it's embedded in a whole bunch of tools. Now you go to Google, and instead of just doing a Google search, it's, you know, Google's AI tool. And it's everywhere now, and so, that's starting to spook people. What's interesting is, the data you see today, if you go searching for statistics on fear or mistrust of AI, you're going to see numbers all over the place. And so, at first you would say, well, that's really confusing. It sounds like, you know, people are everywhere. But there's actually trends in the data. I saw a stat recently saying that 53% of people are more concerned than excited about AI. And then you can juxtapose that, Google just did their Digital Futures project, and that says that 68% of people who have used AI in the past year are excited about the future. Well, those statistics don't match, right? And so, if you dig in, and you kind of figure out what's going on with the trends, you see that people are both fearful and curious at the same time, and they're not mutually exclusive. They can be both at the same time. So you could be curious about what's going to happen, but also a little worried that it could go wrong. And the other thing I'm seeing is that, especially in corporations, when they're doing internal surveys to find out what people think about AI, they're getting these... What I'm going to say, false positives. Essentially, people are speaking more positively in the workplace about AI than they are at home. And at home, when they're talking with their friends and their family in a psychologically safe environment, they're actually, you know, speaking up about the fears that they have that they're not talking about in the workplace.

Clinton: That, to me, is a sign of mistrust, also. Right?

Christen: Mhm.

Clinton: So even if you're taking a survey, even if it's probably anonymous...

Christen: Right.

Clinton: ...People just feel that, oh, I can't talk against the machine, type thing. Like, this is going to happen? Is that what you find in your studies?

Christen: Right, right. In the workplace, people don't want to be seen as, like, the Luddites.

Clinton: Yep.

Christen: You know, who's refusing to adapt. I think folks are kind of taking a little bit of an adapt or die mentality. And they don't want to be the one person that's not jumping on board, and they don't want to be seen as, like, difficult. And so, they are saying what they think the organization wants to hear. So. We can speak to that a little bit more when we talk about how my research pans out, how we segment people and things like that. That's really coming to the forefront in a couple of those segments.

Clinton: Yeah, yeah. Well, let's rotate over to that, too, because I know, within that research... You know, you don't want to paint with a broad brush when you're doing your research, and start bucketing humans. You know, it's like...

Christen: Yep.

Clinton: However, to organize data, and then to really... The ability to understand, are there patterns? Is this noise, or are there patterns? Well, you do have to make some decisions, and you do have to put people into representable categories that you believe they do fall into for certain criteria. And I think that's what you've done quite a bit of in your research, too. So take us through that, please.

Christen: Sure. So I have found that people typically bucket into about six categories. Those six categories are, you're either uninformed, you're misinformed, or you're informed. And then from there, you're segmented into pro-AI and anti-AI. So, since we're here to talk about fear, I'm just going to cover the anti-AI ones. So, let's talk about the uninformed anti-AI. So, these folks are very prevalent. I'd say the trend is looking like 25 to 30% of the population right now. And it's not an age thing here. There does seem to be a correlation with those who aren't in a corporate working environment. So, think, like, retail workers, restaurant workers, stay-at-home parents, retirees, folks like that. People who are not sitting in corporate America. And I think that the causation here is that most folks in corporate America are getting their information from company-led AI training. So, people who don't have access to that naturally fall into this uninformed category. And that leads us to this massive fear of the unknown. I think it was Yvon Chouinard, who is the founder of Patagonia, who's attributed with saying that fear of the unknown is the greatest fear of all. And I think that's really prevalent here. So, in consulting, as a consultant, right? That's what I do. When I'm rolling out a new AI solution, we typically have to address this unknown factor first. And so we do that by slowly trickling information to the masses. So, what this looks like is, getting the layperson to understand the world of intelligent systems is kind of like asking someone whose math knowledge ended at pre-algebra to suddenly start doing differential equations.

Clinton: Yeah.

Christen: It's... It's a big gap, right?

Clinton: Huge gap, yeah.

Christen: And so, you've got to bring them along on the journey and start small. So we tend to start with data literacy first. And that's where the students learn about data cleanliness, data stewardship, understanding data sources and constructs, so that they then can identify, interpret and act on data within a business context in order to influence, you know, business value or an outcome. So that's step one. Then, we tack on intelligent systems knowledge. And so, then we call that AI literacy training. And there we teach what machine learning is, what deep learning is, what a neural network is. What's a generative adversarial network? Things like that. And more importantly, we teach them what data these tools access. So we build on that data literacy training. We teach them, then, how the data is used within these advanced systems. So, what are the models trained on? And this is really crucial for people to be able to use AI technology effectively. So, with this knowledge, they can effectively trust the system that they're using, because they're at least in the know of how that, you know, of what data it has access to. But they can also identify possible hallucinations in the output, as well as understand, you know, how to get the most out of their queries.

Clinton: Okay. So that's one... You describe...

Christen: That's one. (Laughs)

Clinton: You described, basically, six segments and we're not going to cover them all. But, like I said, we are focusing on the fear of AI. So I believe that was the uninformed and afraid, basically, right?

Christen: Yes. Uninformed and afraid. Yes. So then, our next category is misinformed and anti-AI, or afraid.

Clinton: Right.

Christen: And this group is fun. This is a fun one to talk about. Because you get some wild responses in here that get pretty funny. There's another quote that I love which is that we don't fear the unknown, we fear what we think we know about the unknown. And that's really relevant here. So, in this category lies the people who feel like they know enough about AI to make an informed decision. But then when you ask them where they get their information from, you find out that it's not the most credible sources. And right now, there's a lot of disinformation happening around AI. And especially if you're using Reddit or X, formerly known as Twitter, or if you're listening to political pundits, things like that, there's just a lot of information that's plain wrong. And you'll hear things like, exaggerated capabilities, that AI has human-like consciousness, or that AI has attained the same level of human thinking, which is interesting, because even neuroscientists don't know how humans think, so, like, how did AI figure that out? But... People are starting to see a presence of deepfakes or AI-driven disinformation campaigns. And so they attribute all of AI to what they've seen. And so, many have conspiracy theories that they've attributed to AI. And all of this is misinformation.

Clinton: Right.

Christen: In addition to that, you have this subset where people's misinformation is from Hollywood.

Clinton: Mhm.

Christen: People see Terminator or Westworld, or...

Clinton: Yeah.

Christen: And they use that to conceive their notions around AI. So, with this group, your task is a bit larger than the previous one. Because now you have to find out what these crazy notions are, and then you have to dispel them. And then you go through the same steps of data literacy training and AI literacy training. And then, what you get to at the end, if I keep this Hollywood analogy going, right? Is that you are able to pull back the curtain and prove that The Wizard of Oz isn't some big, you know, scary, omniscient, omnipotent being, but rather it's just the man behind the curtain with some fancy buttons to press, right?

Clinton: How hard is that to prove when things are evolving so quickly? Things are rapidly changing, and you do hear stories out there that, oh, you know, a certain system took over and gained access to a satellite. Like, those stories are out there, and you're like, okay, well, what do I believe? Where is this coming from? And is this real? And, how dug in are many of the people in that particular category you're talking about?

Christen: Yeah, this is like a psychologist's constant argument. People always want to talk about the negative thing more than anything positive.

Clinton: Sure.

Christen: You see this in, like, restaurant reviews, things like that, right? People are 20 times more likely to post a negative review than they are a positive review. And so, someone who comes across reviews of a restaurant is going to see, just, a skewed data set because of that. And so, that's one of the talk tracks that we use, which is, yes, of course you're going to hear about the times that it goes wrong. But then, sit and think about all those times you've heard about it going wrong. For every time you've heard something wrong, there's probably 100 times where it went right. And that's what they don't see, is the 100 times where it went right. Instead, they just hear about, you know, Tay, the racist Twitter bot. They think that everything is going to end up like that. So, it's a tough job to go on a, like, counter-misinformation campaign to try and right people's conceptions. But it's possible. For sure.

Clinton: And I think, I would imagine it takes a while, and it takes consistency, which is all a part of, like, the change management that you and the team lead at NTT Data, again, specifically for this absorption in the use of AI. I guess the other piece for me that I think is worthy of discussing is that, with other technologies in the past, there might not have been this sense of... Or maybe there is, but from my perspective, not this sense of, like, existential threat. That, hey, if things go really bad, let's say they're misinformed, right? Let's just say they're misinformed. But they're skeptical of, hey, if things really go bad, this could be, like, World War Three, WarGames type stuff. We had a movie about this in the 1980s with Matthew Broderick, it was a wonderful movie. And the WOPR.

Christen: (Laughs)

Clinton: And then the machine basically runs a scenario, because he hacks it, and he almost starts World War Three, right? It's a pretty fun movie, actually, I think Ally Sheedy, and I forget, some other folks. But... Great movie. What about that aspect of, like, look, the threat could be so pitched. What does that do to the, A, the fear? And then how do you... In an enterprise environment, how do you talk with a skeptic of it who maybe doesn't feel like they want to contribute to it? Do they feel they are contributing to it if they kind of, quote-unquote "play along"?

Christen: Yeah. So, the existential threat thing is very real to a lot of people. There's actually a term for this called x-risk. And... I would say that in this case, a lot of the existential threat that people think they are feeling is not actually ever going to happen. I saw a statistic just yesterday, and I don't know where it came from, and I don't know if it's real, but it was that 85% of the things you expect to happen aren't going to. And I think in this case, there are some very real concerns that people have. You know. And then there are some just absolutely blown out of the water concerns. That people think that this is going to lead to some global war, or it's going to lead to food shortage, or things like that, or that the human race is going to get wiped out. We're going to end up in a WALL-E situation where the only beings living on Earth aren't living...

Clinton: Right.

Christen: ...And they're robots. Things like that. So again, information is power here. And this is one of those things that, when the people perceive an existential threat, that's a little... It's a little robust, making sure that you really talk it through with them. Make sure that they have all the right information. Asking them where they've gotten their information from, making sure that they're focusing on credible sources, things like that, is the best route you can take.

Clinton: Got it. To talk about it a little bit, I think is, at least worthy, putting a little light on there versus again, saying, oh no, you're not going to encounter anybody in the enterprise out of 100 people that don't feel that this is a threat. They don't want to contribute to it in one way or another.

Christen: Yeah.

Clinton: But I do know that we have another category too, just doing the math here, is that we have the category of folks who are deemed informed, but still anti-AI. So let's talk about them, please. 

Christen: Yeah. So, I mentioned the existential threat part is, like, a two-part thing. And you've got people who are misinformed and they perceive an existential threat. And then you have people that are actually quite well-educated, they're very informed, they've done their research. And so you have this subset. It's not a lot of people. If you're looking at the greater population, there's not a whole lot of people who are very informed and then decide that they're, like, holistically anti-AI. Because they're afraid of it. With this group, there's not a whole lot we can do to remove the fear or the mistrust. They've done their homework, they've made up their minds. And this group actually tends to be vocal. So, it may seem like this group is comprised of more people than really exist, but it's more like, they're just noisier than the others. This is an important group, because they're the squeaky wheel that's getting the oil. And they're important because some of their pushback on this is helping us get better regulation and oversight in this area. This is a topic that they see having a big trickle-down effect. Maybe they've followed futurists like Vernor Vinge and Ray Kurzweil that talk about something called Singularity...

Clinton: Yep.

Christen: ...Which is a hypothetical date at which AI outstrips human intelligence and machines take over. You know, Ray Kurzweil wrote a book called The Singularity Is Near, and then he recently published one called The Singularity Is Nearer. And so, he's trying to warn people that this is coming. Again, take it with a very large grain of salt. But some people really adhere to his tenets and believe what he's saying there. There's other people in this group who are maybe worried about more than their job security, and they see it as, like, a threat to future generations. They see that, maybe the lack of ethics and lack of regulation around it right now are something of which they should be very concerned, and they see it as kind of being the wild, wild West, and people could do whatever. And so, they're worried about what the future looks like there. And then... I've heard some people talk about the lack of privacy. They fear that in the future, lack of, you know, that privacy is going to go away completely. Altogether. That you're constantly going to be monitored wherever you are, wherever you go, even in your home. What you're doing, what you're thinking. You know, those folks think that AI is one of the causations of that lack of privacy. And so, because they attribute the AI to that, then suddenly they're fearful of it.

Clinton: Let me dig in there for a second, too.

Christen: Yeah.

Clinton: Because I think what's interesting there is... Look, we've got different parts of the globe, which has always been the case, but, evolving very differently, right? So, I hear, I'll speak for myself, that there are things like social scoring that are prevalent and gaining steam in different parts of the world. China being one of them, right? Where, if you score a certain way, good or bad, you get certain social benefits. Like, can you get on this train or not?

Christen: Right.

Clinton: Like, pretty serious things. And so, I can understand...

Christen: Yeah.

Clinton: ...As somebody who grew up in New York my entire life, and as an American being like, whoa, whoa, whoa, no. Like, whether or not I drink a little alcohol or do certain things, I don't want some algorithm God telling me whether I can or cannot have a certain right. Now, the thing for me, though, would be... But is that AI? Or is that just data science anyway? Like, that part to me is, like, the leap over to say, okay, well, AI is driving that? Versus, if they want to go do that, I'm not sure if AI had to be the engine to drive that, Christen. So I wonder if that's just a leap too far to say, it's not really AI there. I just wonder your take.

Christen: Sure. Some of the facial recognition technology that they're using in China is AI-powered. They've gotten some very, very negative publicity for some of their practices, including using facial recognition technology to target certain minorities in Western China. And so, some of that is AI-based. But some of it is, to your point, that some of it's machine learning, some of it is just very large data science.

Clinton: Yep.

Christen: Right? And, you're right. I think the point that you're making here is that AI kind of gets the bad rap. I was listening to one of my favorite AI ethicists this morning, Doctor Chowdhury. She's the lead AI ethicist at Twitter, and she was saying that, I wish that it hadn't been called Artificial Intelligence. She's like, the name of it is just... She was like, I wish we had called it something else because I don't think it would get such a bad rap. And I agree. Even to go back to that RPA case study, we didn't call it Robotic Process Automation. We called it Rapid Process Automation, because just the change of the name made it less scary to those people. Yeah, so I think it's getting a bad rap.

Clinton: Yeah, I think on that point too, maybe it's just Apple being geniuses again with marketing. Like...

Christen: (Laughs) Yeah.

Clinton: They don't call it artificial intelligence now, right? They call it Apple intelligence. And maybe that is...

Christen: That's right.

Clinton: I mean, yes, it's branding, of course, and they're masters of that. However, this also might be very largely part psychology. And I wonder if you think that's in play.

Christen: Yes, absolutely. I have been coaching my clients for years to reconsider the branding of things. And, I mean that first time, calling it Rapid Process Automation instead of Robotic was the first time we really saw how powerful a name could be.

Clinton: Yeah.

Christen: Even calling something, just the broad term of intelligent systems, that's kind of my favorite term. Because something about artificial just gives people the ick.

Clinton: I mean, artificial flavors, it's just, it's a...

Christen: Right!

Clinton: Gets your shoulders and ears up real quick, even if you weren't paying attention, going like, well, why would I want anything artificial jammed into my, you know, perfectly balanced nature of body, mind and soul, right?

Christen: (Laughs) Right.

Clinton: It's an interesting topic that the naming convention of that can really just put people in a position, right out of the gate, to uplevel or increase mistrust before, even, anything factual has happened, right?

Christen: Right.

Clinton: So, that'll be something to watch, too, to see if there is a bit of a pivot. And I did want to ask you, you're studying this for all this time. What about, like, demographic trends?

Christen: Yeah.

Clinton: Are they prevalent? Are they there? Are there breakdowns of age, gender, anything like that that your research has proven out?

Christen: Yeah. It's not what you would expect. I have to admit, I went in with kind of a pseudo-hypothesis that there would be a strong generational trend. And so far there's not that skew, which is interesting. There's other trends emerging. I mentioned earlier, the trends of those who are uninformed, anti-AI tend to not work in corporate America, right? So, that trend has bubbled up. Another factor that has come to the fore is country cultures. Pretty drastically, actually. There's really big trends that India and China are leaders in the trust of AI, so... Trust and fear are kind of hand in hand. And so, people living in India and China have the least amount of fear of AI. And I think that's reflective of a bunch of things happening in their specific regions at the moment. A lot of advancement of technology is happening in both of those countries. There's a lot of push. And a lot of people are employed in technology in both of those countries. And so, I think that's why you see that trust happening there. And then, if you ask, you know, if AI has more benefits than drawbacks, like, 78% of Chinese respondents agreed, compared to only 35% of Americans. The data is showing that a lot of English-speaking countries are the most distrustful of AI. So, Australia, UK, US, tend to be lagging behind everybody else as far as technologically advanced nations. You know, English-speaking countries are the back of the pack there.

Clinton: Yeah. Interesting.

Christen: And then there's also an industry trend as well. We're seeing that manufacturing and human resources have the highest level of distrust and fear around AI compared to other industries. And you could unpack that and probably figure out why. HR has a really big risk of bias. You know, the risk/reward in HR is really high. And so... And then in manufacturing, a lot of the folks that work in manufacturing just, back to the... You're scared of what you don't know. That's showing up there big time.

Clinton: So, how about when we are looking to... Well, when you and team are looking to get this going and get adoption inside an enterprise? I'd imagine one of the bigger things is, that whether they are fearful of it or not, there's going to be some, hey, how do we do things in a transparent and accountable way? And I realize...

Christen: Yeah.

Clinton: ...That probably pitches fear when they think it's not. Or, it actually is not, right? So, that's a use case, too. So, when you're consulting, how do you do your best to get transparency and accountability so that the trust level goes higher and fear gets reduced?

Christen: So, there's two components at play here. People are hearing messaging in their workplace, but people are also getting messaging outside of their job, right? And so, one of those things you can control, and the other thing you can't. So, I can't control what people hear about ChatGPT, or, you know, what they learn about using Grok or any of those things. But I can control the messaging that comes from a corporate level. So there's a couple of things. You have to have really active and visible sponsorship from a sponsor that is credible, who is in a position of power, that knows what's going on. Someone that people listen to. And they need to be in it. I mean, they can't just be the person in the high castle that's like, oh, hey, we're rolling out AI. They need to be alongside the rollout, with the team, knowledgeable about what's going on. And they need to be the voice of the transformation. There needs to be a lot of training. There also needs to be a mechanism for people to voice their concerns, and not in a town hall. Not in a town hall. Nobody wants to be the person that's like, hey, I'm worried about AI bias in front of, like, 300 of their peers.

Clinton: Right. Well, you said earlier like those people tend to be, I don't want to be the Luddite, like you said.

Christen: Right! Exactly.

Clinton: So, I'm not going to speak up, especially in that environment, right? So...

Christen: Yeah. So, you need, like, a safe mechanism for people to say, like, hey, I want to talk to somebody about this. So whether it's a, you know, an anonymous tip line, or something like that, people need to be able to voice their concerns. And they need to feel like it's okay to do that, and they're not going to have any sort of retribution for doing so. And then, outside of the workplace, there is some very, very strong data that the majority of people want anything that is generated by AI to be labeled as such, right? So, if you see a video or you see an image, they want it to be labeled. So, I'm living in North Carolina, we recently were struck by Hurricane Helene. And some really terrible images came out from the storm, that were AI-generated.

Clinton: Geez.

Christen: Things like... Yeah. A girl crying, holding a puppy, things like that. And people believed it. They thought that this was, like, an actual image that came out. And a lot of people, when they realized it was AI-generated, they were shocked. They were like, wait, why would you do that?

Clinton: Right.

Christen: And so, there's a lot of people that want there to be, like, some kind of disclaimer that shows that this is AI-generated. The same with, you know, text that's generated by AI, they want there to be some kind of disclaimer that says that this is generated by AI. And so, doing that does two things. To your point, it's transparent. Right? So it helps us get around the uncanny valley effect. And then the other thing is, it helps those who still create without the help of AI to prove their mettle in a time when they're scared that they're going to be rendered obsolete.

Clinton: Well, the uncanny valley part is interesting, because this idea is usually applied to, you know, humanoid robotics and they're not quite there, right? Or something trying to be human and it's not quite there.

Christen: Mhm. Yep.

Clinton: The newer challenge is, you just mentioned that photo of the girl holding a puppy with floods in the background.

Christen: Yeah.

Clinton: I saw that, I don't know, maybe 2 or 3 weeks ago now. Until just now, I was... I was today years old when I found out that was AI-driven.

Christen: (Laughs) Yeah.

Clinton: I didn't know that. Now, there were people suffering just like that, but that's a... That's an AI-generated image? I didn't know that either. So, my point being, is the uncanny valley... It's collapsed. That seemed to be a very real photograph. And deep fakes and things like that...

Christen: Yeah.

Clinton: And that, again, is just, that pitches the fear of, well then, how the heck can we trust this, right? So I think it's a fascinating place to be. And I also really feel for you and folks that are in this to say, okay, we've got our work cut out for us, right? This is something that, at a C level, if you're not thinking critically about the organizational change management of getting AI in, getting adoption and doing this in a really, really robust and smart way, you're going to encounter pitfalls and things you just weren't thinking about. And I do wonder, like... When you're talking with clients, how many of them are, like, really in tune to that and believe that, versus those who just aren't thinking that way yet, that they actually have to get the change management aspect really well? Otherwise, it's going to be some tough times for enterprises.

Christen: (Laughs) That's a great question. I feel like this is the talk track I just beat to death, but it's so, so important. Right now, this staggering number... It's something like 70 to 85% of organizations that have implemented generative AI aren't getting the results they anticipated. They jump in, they move fast, because AI moves fast, so they think they have to move fast, right? So they just jump in with both feet, deploy some tool. Six months later, they look around and go, wait a second, this isn't doing what we thought it was going to do, or, like, people aren't using it as much as we thought. And then they look around and say, well, maybe we should get some consultants in here to figure out why it isn't working.

Clinton: (Laughs) Right.

Christen: And so, they bring in teams like mine to figure out why. And, I mean, this is really what I do. So we come in and help them figure out why. And the answer is largely, well, you didn't do any preemptive change management. Right? You didn't plan for the change. You didn't do any sentiment analysis on the front end. You didn't stand up the AI ethics and governance council or committee on the front end, which is a huge thing for establishing trust and for allaying that fear people tend to have. A lot of times they jump straight into AI tool training. A great example of this is, like, I just rolled out Copilot training. Well, do people know how Copilot works? No. So, did you do any kind of preemptive training to teach them what Copilot is and what it's trained on? What can it use? When that little toggle in Copilot says, like, work versus web, do people know what that actually means? No. They don't. And so, they're not using it in the most effective way, because all of these steps were skipped.

Clinton: What goes hand in hand with that, too, is the lack of change management you're talking about. Really love that. And then I'd also say, and you said it, too: they're rushing it because there's so much froth around it that they feel they have to do something, so they grab a tool. And then they say, you know, magic, presto, give us a 300% productivity gain.

Christen: Right.

Clinton: Compared to a... Just a more methodical look, to say, okay, let's blend in change management. Let's do it right. But can we analyze AI for what it could actually solve in our company? Like, what are the use cases that actually matter, whether to our employees, our end users, our constituents if you're in the public sector?

Christen: Right.

Clinton: And, like, getting out of what I think is this froth of, grab a tool, throw it out there, and magic.

Christen: Yeah.

Clinton: Compared to thinking critically about, how could this apply to our people? Again, employees or end users, customers, etcetera, etcetera, etcetera. Both are missing. There's a lack of planning with change management, and there's a lack of criticality about how AI could apply to our problems, instead of generically putting a tool in place and hoping for magic. Are those two sides of the same coin for you as well?

Christen: Yes, and I love hearing your lens on that. I can see your, like, customer experience...

Clinton: Yeah.

Christen: ...Come bubbling up there. Yeah. And, from a psychology standpoint, it's a really unique problem to sit and look at, because most organizations are always going to go with the proof of concept that leaders think will win them the biggest ROI. And what's interesting is that you're raising this other scenario, which is, let's go with the proof of concept that's the thing people want most. And so, something so interesting there... If I blow this out, right? Users get something that solves problems for them, right?

Clinton: Yeah.

Christen: Your employees now have a tool that's useful, and it takes pain out of their daily lives. And so suddenly, now, you've associated the use of AI with something great, right? And so, in their minds, they're like, oh, this tool is fantastic. Yes, I want to use this. And then additional rollouts of other proofs of concept are going to be much more successful, because of that perception employees have of the tool. Yeah, I love that approach. It's very hard to convince your C-suite to take that route.

Clinton: Yeah. Interesting.

Christen: Because a lot of times, the tools people really, truly want might not be the ones that prove to have the biggest ROI on paper.

Clinton: Right. And that's exactly it. It's an on-paper thing. Because it can be hard to project with that kind of accuracy, especially when you're dealing with, like, emerging technologies, like we're talking about here. Any final pieces, you know, advice, anything you want to leave the audience with, or how to reach you, whatever it may be?

Christen: Yeah. So, if I could give our listeners any advice, it's to be smart about it. And what does that mean? It means making sure you educate yourself. If you're listening and you happen to be one of those people who falls into the, you know, non-corporate-America bucket... Or maybe your organization hasn't run any data or AI literacy training, those exist. You can find YouTube videos on that. But learn. Learn what it is so you can make an educated decision about how you really feel. And then the second thing is, knowing what's going on gives you a voice that you can use to speak up. And your voice can help us create protections, regulations, raise awareness, and make sure the things you might fear don't happen. And so, my second ask is that people use their voice. You know, speak up. If something... If you don't like something, then share it and see if it can be changed.

Clinton: I love it. Let's leave it there. But also, Christen Bell, C-H-R-I-S-T-E-N B-E-L-L. What's the best place if people want to connect with you, to ask you questions... Finding you on LinkedIn, is that the best way to reach you?

Christen: Yes, you can find me on LinkedIn. I think I'm listed as Christen Miller Bell. I have my maiden name on there.

Clinton: Gotcha.

Christen: Or you can email me. It's Christen.Bell@NTTData.com.

Clinton: Perfect. Well, thank you so much. This has been a really interesting discussion, and of course a hugely timely one, because there's so much discussion and hype around AI right now. I appreciate you sharing so much of this insight and this understanding of the reasoning behind the fear of AI. I encourage anyone having these discussions at the enterprise level, whether you're not yet getting the benefits of AI, you're seeing adoption problems, or you're about to go down that path of, how do we do this the right way for our company: reach out to Christen. She provided her email, and of course, hit her up on LinkedIn. Christen, thank you so much for being on the podcast. And, as always, I want to thank everyone for tuning in and listening to Catalyst. In this studio, we believe intelligent technology and great people can create digital experiences that move millions. Join us next time on Catalyst, the Launch by NTT Data podcast.

(CATALYST OUTRO MUSIC)
