The following is a rough transcript which has not been revised by High Signal or the guest. Please check with us before using any quotations from this transcript. Thank you. === jordan: [00:00:00] I started to get the ideas for data literacy. I was at American Express, built a plan around it, and they told me, no, people are not ready for that. Just do what you're doing. Maybe in the future they can do it. I didn't wanna move with 'em. So I went to Qlik as an entrepreneur. They hired me and basically turned me loose. Guess who came calling eventually for data literacy help? American Express, who declined what I put forward. The same thing is going on. You're seeing it with AI today. You and I know how much power sits in AI when you use it correctly. Problem is, companies are buying tools left and right, investing in this, all the while they're not doing the fundamental change management, mindset shift, culture change, literacy change. hugo: That was Jordan Morrow, Senior Vice President of Data and AI Transformation at AgileOne, on what happens in the shift from being data-driven to becoming AI enabled. Widely known as the Godfather of Data Literacy, Jordan warns that many organizations are repeating the mistakes of a [00:01:00] decade ago by prioritizing expensive tooling over the cultural change and literacy required to actually move the needle. We were told data science would be the sexiest job of the 21st century, and yet the big data and machine learning revolutions have not delivered across the board. What lessons can we learn from this so that AI doesn't suffer the same fate? In this episode of High Signal, Jordan and I explore why enterprise AI projects are seeing a 90% failure rate while shadow AI is quietly thriving at the individual level. Jordan introduces his framework for engineered intelligence, a blend of machine capability with human emotional intelligence and critical thinking.
We discuss how the definition of literacy is shifting from technical execution, like writing code, to contextual application and the ability to ask the right questions. Jordan also outlines his three Cs framework, curiosity, creativity, and critical thinking, and explains why the real winners of this era will treat AI as a [00:02:00] PhD partner to uncover business problems that were previously invisible. We also dive into the realities of AI ROI and why leaders really have to move past dopamine-driven expectations to find long-term success. If you enjoy these conversations, please leave us a review, give us five stars, subscribe to the newsletter, and share it with your friends. Links are in the show notes. I'm Hugo Bowne-Anderson, and welcome to High Signal. Now, before we jump in, let's quickly check in with Duncan from Delphina. Hey there, Duncan. I thought maybe you could just tell us a bit about Delphina and why we make High Signal. duncan: Hey Hugo, how are you? At Delphina, we're building AI agents for data science. Through the nature of our work, we speak with the very best in the field, and so with the podcast, we're sharing that high signal. hugo: That was such a rich conversation with Jordan. And I just wondered what resonated with you the most. duncan: Jordan Morrow is basically the person who coined data literacy as a discipline. You know, his story is so good. I just love [00:03:00] that intro. He came up with a plan to make data literacy happen, got told no by his own company, went and proved it elsewhere, and then watched that same company come back for more. Is that the same movie playing out with AI? Maybe. Let's get into it. hugo: Hey there, Jordan, and welcome to the show. jordan: Absolutely a pleasure to be here. It's good to see you. hugo: Likewise. And you know, we chatted for the first time maybe six years ago, and we've never done so publicly. So it's super fun to be doing a podcast together. jordan: A hundred percent.
And when we did it a few years ago, obviously it was COVID, I think, the last time we chatted at length like we're gonna do today. Not only was it COVID, but you were stuck in a hotel trying to get back to Australia. So it's good to be here and just chat away and to talk about different ideas. hugo: Exactly. So one thing I love about your work is, for over a decade now, you've been a primary advocate for data literacy, and I'm wondering [00:04:00] how the definition of literacy changes now that we've moved from big data to data science and machine learning and now into the AI era. jordan: It's such a fascinating topic. When you think about what these AI tools are doing to people, I think it's made it so literacy is even more important today than ever before, because it's no longer, can you code or do this or that? It really boils down to, do you have this ability to empower decisions and to ask the right questions and to do the right things that enable AI-driven satisfaction and value? And I think that the literacy side is even more important today than ever before. hugo: Without a doubt. And now we're hearing phrases like AI enabled more than data driven. So I'm wondering. jordan: Yep. hugo: Part of the point is that AI doesn't replace data, but it has in the hype cycle and cultural consciousness. But I'm wondering [00:05:00] what the fundamental shift in mindset is that's required to make the transition from being data driven to being AI enabled. And does that mean we don't really need to be data driven anymore? jordan: No. I think what's really interesting about what I would call, let's say, the data and AI space is, over the years, as things march on, we always get this whole new hype cycle, if you will. Data science was one of these. Big data was one of these. Velocity of data. Even data literacy, data driven. All these terms boil down to the same thing, and that is, are we able to use data to make smarter decisions?
That's what it boils down to. And in the end, what is AI built off of? It's built off of data. So data driven, AI driven, whatever one wants to call it, the reality is it boils down to, do we have the skills and the enablement and the adoption to make smart decisions using AI or [00:06:00] data? And I think that it's even more important today to be able to put things in context to drive that forward in order for real success to come to life. hugo: 110% agree, but I do think something very much has changed. Previously, to be able to interact with data, you needed to understand perhaps things about databases and schemas. jordan: Yes. hugo: And be writing pandas code or whatever. Perhaps you still do need that. jordan: I would agree. hugo: But now, in the experience of what it is, like the phenomenology of what it is to interact with data, suddenly you can interact with a RAG system in natural language and that type of stuff. jordan: Yes. hugo: And that works sometimes, but there are also very dangerous cases there as well, where you'll have retrieval processes that give you the wrong thing, or hallucinations, or whatever it may be. So what has changed in the democratization of the ability to use these tools? jordan: I think this goes right to the literacy side even more. I do think there [00:07:00] is a fundamental backend understanding that people need to have. AI is not deterministic. It's probabilistic. And so when it hallucinates, people freak out so quickly, and I'm the guy who says, quit worrying about it. You know what hallucinates more with data than AI? Humans. Humans misuse data all over the place, and we do, but because this is a machine versus a human, people trust the human more. That being said, though, I think you're exactly right. When you democratize AI out to the masses.
There needs to be, not necessarily teaching on coding and pandas and everything that goes into an LLM, but teaching on how it actually works, so that contextually you apply it to the right business cases and use cases that drive value. Sometimes you don't even need AI, right? You just need Microsoft Excel. Sometimes you need powerful machine learning models to do some pretty nice predictive modeling, and there's everything in between. I think literacy enablement has gotten to the point where maybe you don't need to know [00:08:00] all the vocabulary that makes up the data and AI space. It becomes more of, contextually, do you understand what these things do and how to apply them to different problems? It's not just about prompting something. It's about, can you use AI, whether it's predictive modeling, agentic AI, generative AI, to deliver a result? And when people understand that, your adoption goes up, your enablement goes up, deployment can happen better. The mindset shift that needs to be there is, it's not AI replacing me, it's AI augmenting me. All these things can come together and empower people. hugo: Yeah, without a doubt. Speaking of the failure modes we have, though, the systems and the models are getting better and better. jordan: Oh yeah. hugo: So let me give you one example. I did a podcast recently with Wes McKinney, who created pandas, among many other things. And he writes nearly all of his software in Go now, and he hasn't jordan: Yep. hugo: ever written a line of Go himself. And on [00:09:00] top of that, he'll get adversarial code reviews. He'll get something like Gemini 3.0 or Opus 4.6 to write the code, then Codex 5.2 to critique the code, and yeah, fix it and provide all the suggestions, and builds software that works for him now. So I'm just wondering, with all the failure modes we've seen, how much do you think we really need to understand now that the models are getting so good?
jordan: I think that it becomes less and less an understanding of the foundational models on the backend. Going right back to what you just said, you don't even need to write code, but you do need to know, let's say I'm using Antigravity from Google, or Cursor, or Codex, or any of these, how do I prompt it to get the true outcome, right? That is really what it is. It's natural language, and we as humans attach linguistic capability to intelligence. That doesn't necessarily always fit, but that's why we think these things are so smart. Now, they are, don't get me wrong, they're very intelligent. But the reality of it is, if I'm gonna go into, let's say, Antigravity or Cursor or Codex, [00:10:00] and I start and I'm like, I wanna build this, I don't need to fully understand what it's doing on the backend, but I do need to understand, when I prompted it, is the result it gave me the result that I actually wanted? And that does go back to, like, ontologies and vocabulary and things like that, and understanding the flow of information. But the one thing that I would say is beautiful about how sophisticated and powerful these are becoming is you can learn on the go. It's not a go read a book, study the textbook, and then go. It's just go start iterating. And when you see that maybe it's losing its power, take a step back and go again. And I think that's one of the beautiful things about these tools, is it brings out what we could call a childlike curiosity to just start experimenting and playing around. And I think that is awesome, because it helps us get back to being human. hugo: Absolutely. And it brings people closer to technology, brings people closer jordan: Yes. hugo: to product, to [00:11:00] decisions, to business outcomes. And I mean, you know, we've both been working in data, ML, literacy, education, advising, building for over a decade now. And yeah, the amount of
interesting conversations I get to have with different people now jordan: Yep. hugo: is so exciting and so inspiring. jordan: Isn't it? It's amazing. Like, just this morning, so I'm essentially chief AI officer at the company I work for, and just this morning I had someone who has really embraced this, who is not an AI specialist, who maybe never coded a day in her life, right? But she has embraced being almost like an AI specialist at our organization, and the innovation and the efficiencies and the cost effectiveness and the things she's been able to bring to life. She won an award because of her natural curiosity, and I'm more than happy to support her in that award, because she deserved it. And like, us AI nerds and data nerds, we better be succeeding with this. But when I [00:12:00] watch someone who never would've thought they could use AI in their career thrive with it, to your point, if you do it right, it's not about human replacement, it's about human augmentation. And when you do that right, and you get people excited about it, that's when you get cultural shift and mindset shift across organizations. hugo: Without a doubt. And so something you mentioned, something we've chatted about before, something we're both very interested in, is going from data to decision, right? And you know, the classic ladder of analytics takes us from descriptive, so exploratory data analysis, to diagnostic, where we're starting to think about causal questions, why is this happening, to predictive, machine learning, then to prescriptive, which is decision making and decision science, as we both jordan: Yep. hugo: know. And this is a provocation or a spicy take: how much can AI help us skip all the model parts and even perhaps go straight to decision, without necessarily telling us why? Do we even need to know why to make [00:13:00] the right decision if it's given to us? jordan: I think the human does have to be a part of that to know.
We don't necessarily need to know all the ins and outs, but when you apply it, the human needs to be there to say, this was right or wrong. So when I take a look at that analytic progression that you just described, for me, I love this idea. I have a new formula. My book comes out in March this year, and that formula, I call it engineered intelligence. It's no longer human intelligence versus artificial intelligence. It's two parts data and AI, two parts human IQ and EQ, so that you're engineering decisions. So if we go along descriptive, diagnostic, predictive, prescriptive, along the whole way, the AI can come up with a descriptive analytic. Very simple, right? Very straightforward. But you as the human need to understand the what behind it, the emotional intelligence behind that, and I think that goes across all four spectrums. So it's not necessarily, go in and build the decision. Allow AI to give you the decision, [00:14:00] but you're then gonna apply your gut feel, your intuition, your emotional intelligence to understand, is that the right decision? If not, you just turn right back to the AI and say, you're 80% of the way there, but this is a key element I need you to take into consideration. So to your point, for the vast majority of people, they don't need to know all the ins and outs of every level of analytic. It really comes down to my three Cs, which is curiosity, creativity, and critical thinking. Can you apply those three Cs towards what the AI is doing, so that you're delivering value? A decision is one thing. AI can make decisions left and right, but are those decisions gonna provide us value? That is a different ball game to be thinking of. hugo: Yeah, I like that a lot. So I wanna figure out what mistakes we made in data science, and whether we can avoid them in the world of AI. To step back a bit, you and I both know, in 2011 or so, data scientist was famously called the sexiest job of the 21st century.
You and I are [00:15:00] living testament to that, clearly. jordan: Yeah. Look at us now. Absolutely. hugo: Yeah. Many companies, actually the lion's share of companies, struggled in the past 15 years to extract real value from their data teams. I think we saw it be highly successful in certain parts of technology, famously FAANG plus. But I'm wondering why you think so many of those initiatives failed. Why didn't data deliver on its promise? jordan: I have talked about this a lot over the past few years. And first off, you're a hundred percent right. That quote comes from Harvard, October 2012. Harvard says that, and one of the reasons they failed, in my thought process, is 90 to 99% of people are not data scientists. They're not data professionals by background, trade, or anything. But that was the sexiest job. There's so much data now, we can derive value from it. But here's the hitch to that: you [00:16:00] didn't have someone who could pull the lever correctly. So what they would do is they'd go out and they'd just hire data scientists, thinking they were gonna do something. Well, there's a big problem here. 90 to 99% of people are not data professionals. 90 to 99% of data professionals are not business professionals. And so you go out, and this is the same thing with, you buy Tableau or Qlik or Power BI and implement it and democratize it, and everybody's gonna get answers. No. This is actually what created the ripe environment for the field I helped invent, data literacy, to thrive. But even then, data literacy became a problem. I'll share a prime example of this. When I started to get the ideas for data literacy, you could say the idea, not ideas, but idea, I was at American Express, built a plan around it, and they told me, no, people are not ready for that. Just do what you're doing. Maybe in the future they can do it. They tried to shift my job.
I didn't wanna move with 'em, so I went to Qlik as an [00:17:00] entrepreneur. They hired me and basically turned me loose. Guess who came calling eventually for data literacy help? American Express, who declined what I put forward. Now, I'm very glad they did, because I might not be talking to you today if they hadn't. I might've just stayed at American Express and stuck there. But here's the thing that they said to me, and it's a friend of mine, she's still a friend of mine, she's the chief HR officer of Verizon now. She said, we hired a bunch of data scientists, and basically we didn't know what we were doing. hugo: Mm-hmm. jordan: This is American Express, a very data-driven, very data-successful company. They failed with data scientists, and then all of a sudden the field opens up, we're democratizing things, data literacy comes along, and people said, this is gonna solve it. Nope, it's not. It is a part of the puzzle. When 90 to 99% of people are not data professionals, and all of a sudden you put data in their hands, that does not guarantee they know what to do with it. You need literacy. And it's the flip for data scientists: they could build amazing statistical [00:18:00] models, but do they know how to apply it to a business question? It's totally the same thing going on. You're seeing it with AI today. You and I know how much power sits in AI when you use it correctly. There is so much power. The problem is, companies are buying tools left and right, investing in this, all the while they're not doing the fundamental change management, mindset shift, culture change, literacy change. They're not treating it like products, as my friend Malcolm Hawker talks about. Instead, we're buying AI, it's gonna solve everything. We bought data literacy, it's gonna solve everything. We hired a data scientist, it's gonna solve everything. No, it's not. And so that's why I think you're still seeing today, AI is not delivering the value that people think it should.
Well, that means you're not using it in the correct manner, 'cause if you do use it right, you find how much value can come from it. hugo: So what I'm hearing is we're currently, for the most part, making the same mistakes that we made during the data era. jordan: Yep. There was a big, I forget, I think it might've been [00:19:00] MIT, that was talking about how 90% of generative AI projects are failing. But what was funny about that is, if you dig into the data of it, shadow AI is thriving, meaning someone behind the scenes is using whatever AI tool they want and getting value from it and succeeding with it. So maybe enterprise projects are failing; individuals are using it and succeeding. The problem with that is, if it's an individual, you might not have governance. They might be putting data into public models. It's a whole different issue. So the same thing is happening. If we are not truly understanding how to implement these things, get adoption, and drive change management with re-skilling, we're gonna run into issues. And that's what people are doing, and we're seeing it time and time again. But you nailed it, Hugo, and that is, there are pockets that are doing this right, and they're gonna thrive, because AI learns on its own. And when you get ahead with AI that's learning on its own, your competition might not be [00:20:00] able to catch up, because you're so far ahead. It's just gonna continue to learn. So it's a fascinating time, for sure. hugo: Without a doubt. And I do wonder whether these products were built for individual use. At least the big ones, like any frontier model with a product wrapper around it, such as ChatGPT or Claude or Gemini, is for individual use. They've built things around multiplayer mode and working with teams and that type of stuff, but I do wonder whether we still don't have those applications yet, and that's why, jordan: Could be. hugo: to your point, shadow AI and individuals using it is something that,
like, I know how to use AI very well for myself. There are a couple of things I do, like bots in Discord and that type of stuff, but I don't have a strong sense of how three people can work together with a language model product on their team jordan: Yeah. hugo: as a team member yet. jordan: It's fascinating, 'cause you're [00:21:00] nailing it. I think what those frontier model wrapper products did, like ChatGPT, Gemini, Copilot, is they lull people into a false sense of, this is awesome. But in reality, what are you actually using it for? If it's just to improve an email, that's great, but that is a fraction of what can be done with it. Some of the things we've done at our company is, I take a frontier model and then I build it into a very specific use case. For example, legislation and regulation is huge in the talent space. Build a model around it so that my team, to your point, I can have 50 people go at this model, ask for the latest legislation, how should I talk with clients about this, et cetera. And that's where you see the value, because it became not a large language model but a small language model designed for a very specific purpose. And we see people thriving with it. But there's two things that I think get in the way of people truly understanding what to do. Number one, when you're prompting these frontier models, they are very good at what [00:22:00] they do, so you get lulled into that false sense of, this is what I should be doing. And number two, you have AI sycophancy that is built into the tool. It's a people pleaser. So the moment you continue to prompt it, it's talking you up. It's positive with you. Again, that's wonderful, that's a good thing, but is that value added to your company? To be honest, I'd rather have the AI tell me, you're an idiot, what kind of prompt was that? It's not gonna do that. But I would rather it do that to me than what it is doing. And so,
I do think the other side of it is really fascinating. When I talk to people, I look at three pillars of AI. There's generative AI, which is the hype and everything. There's agentic AI, where you can build good agentic solutions, maybe not cost effective yet, but it's gonna get there. The third pillar has been around for ages: it's predictive modeling. So when I work with clients and they're like, we wanna use AI, I usually steer directly to predictive modeling first, because I wanna take the data that they have and build predictions that see [00:23:00] around the corner for 'em. They're so caught up in generative AI, they forget the fact that machine learning's been here for who knows how long, and you can use that to build amazing predictions that provide you solutions. So go do it. hugo: Totally agree. So we've identified that a lot of the same mistakes are being made. I'm wondering, what does it look like now for a company to truly walk the walk with respect to AI adoption, as opposed to just championing it for the hype cycle and for external PR? jordan: I think one of the key things that we're starting to see, and I hired someone last year for my AI team who's a change management specialist, who I had mentored before. I tried to get her company, I won't mention the name, it's one of the most famous on the planet, to allow her to go speak at data events on change management. They wouldn't let her. And that is the biggest miss, because to me, a company that is doing this right is basing their AI and data solutioning off of use cases, not off of hype. What [00:24:00] do we need to get done? Where can we actually use these? Like I said, Excel might be all you need. Sometimes you don't need AI for everything, but everybody jumps on the AI bandwagon. So number one, you get proper change management built off use cases.
Number two, you don't just throw money at the tools; you need to be throwing money at your people. We see over and over, and I actually do believe some of this dystopian hype to a degree, that entry-level jobs are getting obliterated right now. People want you to use AI. The problem with that is you're creating a different gap, because you're not allowing people to get experience. So that's gonna be a problem down the road. And we're just throwing money at tools. So this does come back to literacy. You've gotta have the right mindset and you've gotta have the right skillset. So the first part is change management and the right use cases. Without the right skillset to use that, you're not gonna succeed. That to me is where you go: I look at, what is the problem we're trying to solve? What do I need? Implement. What's the next problem [00:25:00] to solve? What do I need? Implement. And you just go through that pattern over and over. Sometimes it's data science, sometimes it's Power BI, sometimes it's the biggest AI model you can find. But when you do it in that manner, you build the right stack, you build the right governance, you build the right technology, and maybe most importantly, you build the right skillset for your company to succeed. hugo: That makes a lot of sense. The next thing I wanna talk about is, there's a significant amount of fear and uncertainty and confusion around AI in the current workforce. So I'm wondering how you feel leaders should address this internally in organizations. jordan: I think there's a balance. Number one, I wish humans in general would get off news feeds and buzz feeds and things like that, 'cause those things target clicks, right? That's what they want you to do. So the more dystopian the link, the more people are gonna click on it. You know what I mean? But let's go to the business side of it. From a business perspective, I am very much [00:26:00] on board with leaders having radical candor.
So you need to be honest that this is a shifting technology, but you can't just talk about it. Like, I've been hired at a company once where I think they thought, he's gonna walk in the door tomorrow and solve all this. They gave me basically no investment whatsoever. I could not get outta that company fast enough. So for leadership to really do this, they have to talk the talk and walk the walk. There is no sitting around and thinking this can be done. You have got to invest in it. Number two, you've gotta invest in your people as leaders. It's not just going out and buying this tool or this technology or trying to find cost savings. It is a true fundamental shift in the operations and workflows of a company. And I think that's what's really interesting when I talk to companies or travel the globe: companies that have well-established decades of experience and stuff, they can [00:27:00] struggle the most, 'cause their systems are so old and built and this and that. Startups is where it's at. And if you can find the right startup who is building this all natively, they're prime positioned to win this race. Now, to your point on leadership, it requires leaders to also acknowledge they've gotten things wrong. That's hard for a lot of C-suite people. hugo: Mm-hmm. jordan: But for the C-suite to really make this work, they've gotta acknowledge and quit kicking the can down the road. If you bought a tool three years ago that's not doing much, I don't care if you've spent 50 million on it, kick it to the curb and start fresh. Because if you just continue, it's like me, literally, let's say I'm trying to lose weight. I love working out, but I'm not giving up my ice cream. How successful am I gonna be? My end goal is X, Y, Z, and I keep sabotaging it. And I think leadership has done that over time in data. But it comes back to that 90 to 99% of them aren't data professionals, so they don't understand that side of it.
They have to have a heart-to-heart with [00:28:00] themselves and say, I don't even have the skills that are necessary, but I can get there, and we're gonna get there as a company. hugo: So that's for leaders. I'm also interested in individuals. jordan: Yes. hugo: As technical tasks like coding and data cleaning become easier to automate, jordan: Yep. hugo: but also all types of knowledge work tasks, what skills should people be cultivating right now? jordan: It really boils down to, my good friend Sadie St. Lawrence, right? She wrote a book, Becoming an AI Orchestrator. My three Cs of data literacy are now the three Cs of data and AI literacy: curiosity, creativity, critical thinking. We need to take a step back and put the technology aside and get really good at defining problems, asking questions, and critically thinking on what we want done. That literally means, maybe it's once a month, you and your team shut off your phones. If you're in person, shut off your laptops, everything pushed to the side, write a problem on the board, and just talk for a couple hours. The problem [00:29:00] with that is people don't know how to do that anymore. And for individuals, for me, my book that comes out next month is called Data and AI Skills: Gain the Confidence You Need to Succeed. Everyone has a seat at this table, but just like a child, you put them at the table when they're one year old. It's not like they know how to use a knife and a fork and know how to feast and all that. In order for everyone to succeed, to sit at that table, you have to study. It might be that you've been doing something for 20 years and you're super comfortable with it, and AI can now do it in five seconds. Guess what? Either you get uncomfortable or you get replaced. And so part of it for me is, can we teach people how to think, how to ask better questions, to turn off the digital, to bring the human back to it?
A really interesting thing came out last year from Davos, so the World Economic Forum: of the top 10 fastest-growing skills by 2030, six of the 10 were human. Adaptability, resilience, this and that, creativity, analytical thinking. [00:30:00] I think those things matter so much. The AI, you nailed it, Hugo, is gonna be able to do a lot of the tasks. Are you good enough to ask questions to get those tasks done right? And if you're not and someone else is, that's where things come into play. And we can have a whole other episode on the education system. Systems across the globe need to train people's minds differently than they do, 'cause the tasks, and getting degrees and things like this, and engineering and all that, like software engineering, it's not gonna be needed anymore to the extent that it is today. And so, how do we teach people? It really boils down to, every individual needs to relearn how to think and how to actually problem solve and critically think, 'cause we just don't do that. And I don't blame AI. It started well before AI. Dopamine hits from social media and things like that have hammered our ability to critically think. So I think people just need to get back to that. hugo: Yeah. And also, I went to high school in the nineties, and the education system, [00:31:00] some of it incentivizes thinking and reasoning, but a lot of it incentivizes memorization, how to write essays, that type of stuff, rote learning and all of these things. jordan: The thing I've told people, because I get asked this about education, and my wife's a teacher: how often are you writing a 10-page paper in your job? Not very often. How often are you having to present? All the fricking time. hugo: Yeah. jordan: Change the assignment. Make it a 10-minute presentation with five minutes for Q&A.
Tell the student they can use AI, they can use PowerPoint, they can use whatever they want; the moment they have to present and think on their feet, that's gonna show you if they know that subject matter or not. And so that to me is teaching them how to use the technology and tools that they're gonna use anyway. But number two, how to critically think on questions, how to present, how to communicate effectively. Those are the skills that matter the most when you're out in your job, not writing a 10 page paper. hugo: So last time we spoke, we talked about just [00:32:00] how software and product itself is being changed completely, because with the ability to generate code, and increasingly better code, the marginal cost is close to zero. Absolutely. And we also said we should probably talk about Claude Cowork, and that was a few weeks ago. Of course, now we have OpenClaw, and, I mean, this week Pete Steinberger announced he's joining OpenAI. And these tools are ones which I suppose start to deliver on the promise of automating knowledge work and not just software engineering. So yeah, I'm just wondering what early signs of impact you're seeing here, and how we need to think about how we relate to the work we do when using tools like these. jordan: Yeah, this goes back to some of the early signs that we're seeing. So many layoffs have been occurring for the last few months, and I feel for people, because I've experienced it myself in my career and been a part of all that, and I'm still not sure we're at the point [00:33:00] where these layoffs are AI driven. They're using AI as the excuse. I think part of it is they overhired in the pandemic. hugo: So my hot take is that they are AI driven, but because the companies invested in AI, yes, and did not get return on investment. Yeah. So they're negatively AI driven, actually. jordan: It could be, a hundred percent.
And so for me, what it boils down to, and I wish everybody who hears this, and hears you and I talk about these things, would be curious and just go test it right then. It might be able to do knowledge work for us, but that doesn't change the fact that we're the humans with emotional intelligence. So for example, imagine that AI and the data say you should lay off 60% of your company, and that is a pure data-driven decision. What that doesn't take into account is reputational damage, culture damage, partnership damage that you're gonna cause with your clients, 'cause you might be late. So I'm not saying that you don't lay off; maybe it's 20%, not 60%. [00:34:00] That is a human understanding and emotional intelligence decision coming into play. So when I look at the knowledge work of what AI is doing, it might tell you to think of those things, but we as humans need to bring those things to the table. I would encourage everybody, with any of these tools, as the knowledge stuff, Cowork, book writing from Claude, all these things are there: go experiment and see what it does. Ask it questions. Learn for yourself, because you can then say, this is a problem I have at work. I wonder if a tool could do this. I just studied this tool last week. Boom. That to me says that every leader at a company should be doing this. Are you blocking off time for your employees to regularly just experiment? Because if you're not, you're not creating a safe environment for them to feel like they can experiment. They're already fearing things, they're already worried about things. Help them feel that you're on their side by creating that safety net. hugo: So that was gonna be my next question. How do you actually [00:35:00] experiment when it's a new tool each week or each day and it's absolute cognitive overload, and how do you decide what to experiment with? jordan: I would block off just 30 minutes a day. Just block off 30 minutes.
Whether it's for study, whether it's for experimentation, I don't care what it is. 30 minutes a day is not that long when you think about it. But we go back to: individuals need to set aside that 30 minutes, and leaders need to let those individuals know that 30 minutes is dedicated to them. Go do it. If my employee, my change management specialist, were to come to me and say, I wanna spend half a day every Friday to study, it's a green light. A thousand percent. She's really good at using AI, but I want her learning. I want her staying on top of these things so that when a new iteration or something comes out, she already knows how to do it. And so that's that psychological safety. People are in fear of AI, and they might fear that if I'm not working all the time, I'm gonna be let go. In my mind, I'm like, I'm gonna tell my [00:36:00] employees, go take and set aside that time, because I'm a huge believer: 30 minutes up front saves four hours later on. So go spend that 30 minutes. And Einstein, I think he's the one the quote is attributed to, something like, if I had 60 minutes to work on a problem, I'd think about it for 55. You know what I mean? Absolutely. We just don't do that, and we as leaders need to provide psychological safety to the employee to know that they can do that and experiment. hugo: So I totally agree with that. I think the spicy version, the follow-up question, is: should leaders be prepared to take short term losses by investing in experimentation in order to have medium-to-long-term gains? jordan: A hundred percent. So I was in a private meeting just a couple weeks ago with a speaker from Davos. There were only like 26 of us in the meeting, maybe. And you have to understand, true ROI from AI is one to two years out. It's not, I can buy it today [00:37:00] and see it in the next week, because again, we get so lulled into feeling good because it responds so quick.
True ROI might be an upfront investment, a year of loss; three years down the road, you're paying off and you're beating your competition. But yes, it's like, I'm a huge gym rat, right? I love being in the gym. I love working out. I love strength training. It freaking hurts if you're doing it right, but you're doing it for the gains that you get down the road. Mm-hmm. Why shouldn't that be the case with investing in talent, in AI, in reskilling? If I have to block people's calendars off, and that means I have a little more work to do, but it's providing them an opportunity, give me the work, because that's gonna give me a benefit six months down the road. But we're so shortsighted, we're so dopamine driven at this point, that a lot of people can't see that. hugo: Without a doubt. So we've been talking around agentic AI, and I mentioned, you know, more and more we'll have AI systems embedded in teams. So given that [00:38:00] we are moving more and more towards a world of agentic AI, where AI will act more as an active team member rather than just a passive tool, I'm wondering what type of management challenges and team challenges you expect to arise when we have teams that have both humans and autonomous agents in them. jordan: This goes back to what we were just talking about, not experimentation, but literacy training and teaching people. It's one thing to tell someone what agentic AI is, and you can use technical or non-technical terms and just call it AI doing a task of a human that you don't have to supervise. That is just a small fraction, right? You and I understand the moment you try and build a team that has four agentic solutions and six humans, there is a dynamic there. That's a small group, that's only 10 quote unquote people, four agentic, six people. Now expand that out to a company of 50,000 people.
So, a hundred percent, this boils down to: are we re-skilling and up-skilling people, not necessarily [00:39:00] on the technical, but on the application, on the understanding of limitations, of what works and what doesn't work, in order for that to happen? This is a full scale literacy movement yet again; it's just gotten bigger. It's, leaders, do you understand what you're talking about? Mid-level leadership, do you understand? Employees, do you understand? What that really means is, to your point, I'm about to not hire this role. I'm not gonna replace it. I know agentic AI can do 70% of that work, and it could do 30% of these employees' work. So I'm gonna do a flip: you take that work, agentic; you people take this work. And I think this is where some of the companies' failure comes into play, and that is, when you deploy AI, what do you do with the people once it's in place? Well, a lot of companies have done layoffs. That's not the answer per se. It's, what are you strategically doing once you implement AI to empower those people to bring your business more success? [00:40:00] That's another challenge that these leaders have to face, and that's why you start small. You start by use case and then you build your ecosystem bigger, versus going to buy all the latest hyped up tools and thinking that's gonna do it. hugo: I'm interested, a huge amount of the conversation is around how do we make current processes and current things we do as an organization more efficient. Just imagine if we'd done this with computers. Mm-hmm. If we'd said, this is how organizations work, we've got these new things called computers, how do we use these to make that more efficient? We would've missed out on 99.999 percent. What are we missing here? How should we be dreaming bigger? jordan: Well, and that's the thing, and I think you just said the right term, and I wish people would do this. Quit just thinking of cost cutting and efficiency.
The way I describe generative AI, when you're using it correctly, it is a PhD partner that sits on your shoulder. I'll use a prime example from the company I work for. We're [00:41:00] designing for black swan events. What happens when stuff hits the fan? But here's the problem: if all we do is read articles on what that means, well, I went in, I took every article we've been assigned to study, put it into a deep research model, and I said, what are we missing? What is the hidden piece that we're not thinking of? I can read all this, but my mental models are such that they're gonna hinder my ability to see everything. Hey AI, you're my PhD partner. Tell me what I'm not seeing, and it's gonna be able to see so much more than I do. Then what I did is I sent it off to the entire executive team. I'm like, you don't need to read the big deep research white paper I built. Here's an executive brief on it. Study this. Read all the articles, study this executive brief, because when we come together in this meeting, it is very easy to make people feel comfortable that we know all the same things, that we're talking about the same things. What about these five things that we missed because we as humans can't process it? [00:42:00] To me, that's thinking bigger. It's not, how do I make the old more efficient? I'm looking at it and saying, what is something totally new I can do? And I think that's where the companies are gonna really, really win going forward. It's not just fitting AI into the old; it's, what the freak can we do that is new? And I'm so excited about it. That to me is your goal. It's your success. It's what we should be doing. hugo: So my final question for you, Jordan: if you were to give one piece of advice for chief AI officers or data leaders who are struggling, as most are, to bridge the gap between, you know, high level vision for AI and on-the-ground execution, what would it be? jordan: Go talk to people. Do not sit behind your desk.
Go talk to people. We've said this in this podcast, and that is, the technology's getting better. I don't need my chief AI officer, chief data officer, senior leaders in data and AI to necessarily just understand the technical, because that [00:43:00] is a comfort thing for them. My fourth book was Business 101 for the Data Professional. I don't know how many times in that book I said, go network with people. Hmm. If you wanna succeed as a chief data or chief AI officer, you need to be amongst the people, talking to them, asking them: what's their problem? What is not working? What are you doing that you feel doesn't need to be done? Are you using the tool? Why aren't you using the tool? If what you're doing is sitting behind the scenes trying to build the perfect data stack, in the end, if you didn't talk to people, that data stack might freaking suck, because you built it towards what you thought, not what your audience needed. And so for me, it's the soft skills that matter the most for leaders. It is not the technical at this point, and I hate calling 'em soft skills 'cause I find them to be the hardest skills on the planet to learn. But you need to be amongst the people, asking them questions, talking to them about problems, finding out how they're using it and experimenting. I just created at my company a brain trust where we're meeting on a regular basis, [00:44:00] because there are people doing shadow AI work, or I advise them on the work, but it's not being shared across groups. We have an AI AMA every month, come talk, and it's super popular. Just come ask your questions. This month's AMA is actually a hackathon. We have people, we're gonna look at problems, we're gonna design an AI solution with who knows how many dozens of people in it, because then they're gonna see, oh, this is potential. But all of that comes to be 'cause we're out with the people. We talk to the people. And maybe the second thing, if I can add a kind of second tip.
Study, study, study. Not just AI and this and that, but read books. Please read books. I have a book right here, and people think I'm funny with this one. It is called The Language of Trees by Katie Holten. And people will be like, why in the world would you get that? I was like, we humans are arrogant. Don't get me wrong, we are the supreme intelligence on the planet, I get that, but it [00:45:00] doesn't mean we have all the intelligence. How do trees communicate? Can I build that into AI? Swarm thinking and swarm mentality: you look at a flock of birds that is pivoting so beautifully in the sky, 500 of them pivoting basically instantly. Go and try and get 500 employees to pivot quickly, right? It doesn't happen. And so I want to study and read and learn how environments work, how ecosystems work. How does communication work? What about deep listening to people? What about helping them just to feel seen? That emotional intelligence, studying it, is gonna be as powerful, maybe even more powerful, than some of the technology you use. hugo: Without a doubt. So thanks for all the work you do, Jordan, in data and AI literacy, but also thanks for coming in and sharing all of your wisdom and hard-learned lessons with the community as well. jordan: No, thank you so much for having me. I love it. hugo: Thanks so much for listening to High Signal, brought to you by Delphina. If you enjoyed this episode, don't forget to sign up for our newsletter, follow us on [00:46:00] YouTube, and share the podcast with your friends and colleagues. Like and subscribe on YouTube and give us five stars and a review on iTunes and Spotify. This will help us bring you more of the conversations you love. All the links are in the show notes. We'll catch you next time.