DiscoverHumans + AI
Humans + AI
Claim Ownership

Humans + AI

Author: Ross Dawson

Subscribed: 5Played: 2,604
Share

Description

Exploring and unlocking the potential of AI for individuals, organizations, and humanity
186 Episodes
Reverse
“In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” – Davide Dell’Anna About Davide Dell’Anna Davide Dell’Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space. Webiste: davidedellanna.com LinkedIn Profile: Davide Dell’Anna University Profile: Davide Dell’Anna What you will learn The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses How lessons from human-human and human-animal teams inform better design of human-AI collaboration Key differences between humans and AI in teams, such as accountability, replaceability, and identity The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration Episode Resources Transcript Ross Dawson: Hi Davide. It’s wonderful to have you on the show. Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me. Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence? Davide: Well, thank you so much for the question. Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence. Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the word hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment. But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation? Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences. For instance, just as an example, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The type of things you might say also change based on the specific situation. While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that. Ross: Yes, yes. I don’t think that’s what I call the framing. Framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But the team means you have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants? What are some of the things that are the same, and what are some of the things that are different? Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Center, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Center is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the works we’ve done recently is to try to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams. It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing. What we’ve done was to study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of configurations of teams—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal-animal teams, which is another example that’s interesting in the context of hybrid intelligence. You see very often in human-animal interaction—basically two species, two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me. So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One of the things we found is that some concepts are very well understood and easily applicable to different types of hybrid teams. For example, the idea of interdependence—the
“You can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with.” – Felipe Csaszar About Felipe Csaszar Felipe Csaszar is the Alexander M. Nick Professor and chair of the Strategy Area at the University of Michigan’s Ross School of Business. He has published and held senior editorial roles in top academic journals including Strategy Science, Management Science, and Organization Science, and is co-editor of the upcoming Handbook of AI and Strategy. Webiste: papers.ssrn.com LinkedIn Profile: Felipe Csaszar University Profile: Felipe Csaszar What you will learn How AI transforms the three core cognitive operations in strategic decision making: search, representation, and aggregation. The powerful ways large language models (LLMs) can enhance and speed up strategic search beyond human capabilities. The concept and importance of different types of representations—internal, external, and distributed—in strategy formulation. How AI assists in both visualizing strategists’ mental models and expanding the complexity of strategic frameworks. Experimental findings showing AI’s ability to generate and evaluate business strategies, often matching or outperforming humans. Emerging best practices and challenges in human-AI collaboration for more effective strategy processes. The anticipated growth in framework complexity as AI removes traditional human memory constraints in strategic planning. Why explainability and prediction quality in AI-driven strategy will become central, shaping the future of strategic foresight and decision-making. Episode Resources Transcript Ross Dawson: Felipe, it’s a delight to have you on the show. Felipe Csaszar: Oh, the pleasure is mine, Ross. Thank you very much for inviting me. Ross Dawson: So many, many interesting things for us to dive into. But one of the themes that you’ve been doing a lot of research and work on recently is the role of AI in strategic decision making. Of course, humans have been traditionally the ones responsible for strategy, and presumably will continue to be for some time. However, AI can play a role. Perhaps set the scene a little bit first in how you see this evolving. Felipe Csaszar: Yeah, yeah. So, as you say, strategic decision making so far has always been a human task. People have been in charge of picking the strategy of a firm, of a startup, of anything, and AI opens a possibility that now you could have humans helped by AI, and maybe at some point, AI is designing the strategies of companies. One way of thinking about why this may be the case is to think about the cognitive operations that are involved in strategic decision making. Before AI, that was my research—how people came up with strategies. There are three main cognitive operations. One is to search: you try different things, you try different ideas, until you find one which is good enough—that is searching. The other is representing: you think about the world from a given perspective, and from that perspective, there’s a clear solution, at least for you. That’s another way of coming up with strategies. And then another one is aggregating: you have different opinions of different people, and you have to combine them. This can be done in different ways, but a typical one is to use the majority rule or unanimity rule sometimes. In reality, the way in which you combine ideas is much more complicated than that—you take parts of ideas, you pick and choose, and you combine something. So there are these three operations: search, representation, and aggregation. And it turns out that AI can change each one of those. Let’s go one by one. So, search: now AIs, the current LLMs, they know much more about any domain than most people. There’s no one who has read as much as an LLM, and they are quite fast, and you can have multiple LLMs doing things at the same time. So LLMs can search faster than humans and farther away, because you can only search things which you are familiar with, while an LLM is familiar with many, many things that we are not familiar with. So they can search faster and farther than humans—a big effect on search. Then, representation: a typical example before AI about the value of representations is the story of Merrill Lynch. The big idea of Merrill Lynch was how good a bank would look if it was like a supermarket. That’s a shift in representations. You know how a bank looks like, but now you’re thinking of the bank from the perspective of a supermarket, and that leads to a number of changes in how you organize the bank, and that was the big idea of Mr. Merrill Lynch, and the rest is history. That’s very difficult for a human—to change representations. People don’t like changing; it’s very difficult for them, while for an AI, it’s automatic, it’s free. You change their prompt, and immediately you will have a problem looked at from a different representation. And then the last one was aggregating. You can aggregate with AI virtual personas. For example, you can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with. And now you can aggregate those. Those are just examples, because there are different ways of changing search, representation, and aggregation, but it’s very clear that AI, at least the current version of AI, has the potential to change these three cognitive operations of strategy. Ross Dawson: That’s fantastic. It’s a novel framing—search, representation, aggregation. Many ways of framing strategy and the strategy process, and that is, I think, quite distinctive and very, very insightful, because it goes to the cognitive aspect of strategy. There’s a lot to dig into there, but I’d like to start with the representation. I think of it as the mental models, and you can have implicit mental models and explicit mental models, and also individual mental models and collective mental models, which goes to the aggregation piece. But when you talk about representation, to what degree—I mean, you mentioned a metaphor there, which, of course, is a form of representing a strategic space. There are, of course, classic two by twos. There are also the mental models which were classically used in investment strategy. So what are the ways in which we can think about representation from a human cognitive perspective, before we look at how AI can complement it? Felipe Csaszar: I think it’s important to distinguish—again, it’s three different things. There are three different types of representations. There are the internal representations: how people think in their minds about a given problem, and that usually people learn through experience, by doing things many times, by working at a given company—you start looking at the world from a given perspective. Part of the internal representations you can learn at school, also, like the typical frameworks. Then there are external representations—things that are outside our mind that help us make decisions. In strategy, essentially everything that we teach are external representations. The most famous one is called Porter’s Five Forces, and it’s a way of thinking about what affects the attractiveness of an industry in terms of five different things. This is useful to have as an external representation; it has many benefits, because you can write it down, you can externalize it, and once it’s outside of your mind, you free up space in your mind to think about other things, to consider other dimensions apart from those five. External representations help you to expand the memory, the working memory that you have to think about strategy. Visuals in general, in strategy, are typical external representations. They play a very important role also because strategy usually involves multiple people, so you want everybody to be on the same page. A great way of doing that is by having a visual so that we all see the same. So we have internal—what’s in your mind; external—what you can draw, essentially, in strategy. And then there are distributed representations, where multiple people—and now with AI, artifacts and software—among all of them, they share the whole representation, so they have parts of the representation. Then you need to aggregate those parts—partial representations; some of them can be internal, some of them are external, but they are aggregated in a given way. So representations are really core in strategic decision making. All strategic decisions come from a given set of representations. Ross Dawson: Yeah, that’s fantastic. So looking at—so again, so much to dive into—but thinking about the visual representations, again, this is a core interest of mine. Can you talk a little bit about how AI can assist? There’s an iterative process. Of course, visualization can be quite simple—a simple framework—or visuals can provide metaphors. There are wonderful strategy roadmaps which are laid out visually, and so on. So what are the ways in which you see AI being able to assist in that, both in the two-way process of the human being able to make their mental model explicit in a visualization, and the visualization being able to inform the internal representation of the strategist? Are there any particular ways you’ve seen AI be useful in that context? Felipe Csaszar: So I was very intrigued—as soon as LLMs became popular, were launched—yeah, ChatGPT, that was in November
“In this next era, the key to leadership will be blending systems thinking and AI automation—at least being aware of what you can do with it—with empathy, discernment, connection, and clarity.” – Lavinia Iosub About Lavinia Iosub Lavinia Iosub is the Founder of Livit Hub Bali, which has been named as one of Asia’s Best Workplaces, and Remote Skills Academy, which has enabled 40,000+ youths globally to develop digital and remote work skills. She has been named a Top 50 Remote Innovator, a Top Voice in Asia Pacific on the future of work, with her work featured in the Washington Post, CNET, and other major media. Website: lavinia-iosub.com liv.it LinkedIn Profile: Lavinia Iosub X Profile: Lavinia Iosub What you will learn How AI can augment leadership decision-making by enhancing cognitive processes rather than replacing human judgment Strategies for integrating AI into teams, focusing on volunteer-driven adoption and fostering AI fluency without forcing uptake The importance of continuous experimentation and knowledge sharing with AI tools for organizational growth and team building Why successful leadership in the AI era requires blending systems thinking, empathy, and a focus on human-AI collaboration How organizational value is shifting from knowledge accumulation toward skills like curiosity, adaptability, and discernment The concept of “people and AI resources” (PAIR), emphasizing the quality of partnership between humans and AI for organizational effectiveness Critical skills for future workers in an AI-driven world, such as AI orchestration, emotional clarity, and the ability to direct AI outputs with taste and judgment Practical lessons from the Remote Skills Academy in democratizing access to digital and AI skills for a diverse range of job seekers and business owners Episode Resources Transcript Ross Dawson: Lavinia, it is awesome to have you on the show. Lavinia Iosub: Thank you so much for having me, Ross. Ross Dawson: Well, we’ve been planning it for a long time. We’ve had lots of conversations about interesting stuff. So let’s do something to share with the world. Lavinia Iosub: Let’s do it. Ross Dawson: So you run a very interesting organization, and you are a leader who is bringing AI into your work and that of your team, and more generally, providing AI skills to many people. I just want to start from that point—your role as a leader of a diverse, interesting organization or set of organizations. What do you see as the role of AI for you to assist you in being an effective leader? Lavinia Iosub: Great question. I think that the two of us initially met through the AI in Strategic Decision Making course, right? So I would say that’s actually probably one of the top uses for me, or one of the areas where I found it very useful. The most important thing here is to not start with the mindset that AI will make any worthy decisions for you, but that it will augment your cognition and your decision making when you are feeding it the right context, the right master prompts, the right information about your business, your values, what you’re trying to achieve, how you normally make decisions, and so on. Then you work with it, have a conversation with it, and even build an advisory board of different kinds of AI personas that may disagree or have slightly different views. So it enhances your thinking, rather than serving you decisions on a plate that you don’t know where they come from or what they’re based on. That’s one of the things that’s been really interesting for me to explore. If we zoom out a little bit, I think a lot of people think of AI as a way of doing the things they don’t want to do. I think of AI as a way to do more of the things I’ve always wanted to do—delegate some menial, drudgery work that no human should be doing in the year of our Lord 2025 anymore, and do more of the creative, strategic projects or activities that many of us who have been in what we call knowledge work—which, to me, is not a good term for 2025 anymore, but let’s call it knowledge work for now—just being able to do more of the things you’ve always wanted to do, probably as an entrepreneur, as a leader, as a creative person, or, for lack of a better word, a knowledge worker. Ross Dawson: Lots to dig into there. One of the things is, of course, as a leader, you have decisions to make, and you have input from AI, but you also have input from your team, from people, potentially customers or stakeholders. For your leadership team, how do you bring AI into the thinking or decision making in a way that is useful, and what’s that journey been like of introducing these approaches where there are different responses from some of your team? Lavinia Iosub: So we were, I’d say, fairly early AI adopters, and I have an approach where I really want to double down on working more with AI and giving more AI learning opportunities to those people who are interested, rather than forcing it on people who may not be interested. There are pros and cons to that approach—it can create inequality and so on—but I’m much more about giving willing people more opportunity, more chances, and more learning, rather than evangelizing AI. People need to decide their own take towards AI and then engage with that and go after opportunities. As a team, as a company, we were early AI adopters, and as a leadership team, quite a few quarters ago, we actually went through the Anthropic AI Fluency course as a team, and then produced practical projects that were shared with each other. We got certificates, which was the least important thing, but we shared learnings and it sparked a lot of interesting conversations and different uses for AI. Now, you also probably know that we’ve been running an AI L&D challenge for two years now, where, as a team, we explore AI tools and share mini demos with each other. For example, “I’d heard a lot about this tool, I tried it out, here’s what it looks like, here’s a screen share, and my verdict is I’m going to use this,” or maybe another person in the team finds it more useful. We found those exchanges to be really great for sparking ideas, not only about AI, but about our work in general. Because in the end, AI is a tool—it’s not the end purpose of anything. It’s a tool to do better work, more exciting work, double down on our human leverage, and so on. We’re now running this challenge for the second year straight, and we’ve actually allowed externals to join in. It’s really interesting because it adds to the community spirit, seeing people from other areas of business and with different jobs, and seeing what they do with it. I think, and you may agree, Ross, that people think we’re in an AI bubble, but we’re still very much in an LLM bubble. When people say AI, 90% of them actually mean LLMs and ChatGPT. So it’s interesting to see what others do. With the challenge, we’ve said every week you have to try different tools. You can’t just say, “Here’s the prompt I’m doing this week on ChatGPT.” No, it has to be different tools that do different things. It can be dabbling into agents, automating, or using some other AI tool that helps with your tasks. It can’t just be showing us your ChatGPT conversations or how it drafts your emails. We want to take it a step further. It’s really helped us reflect on our own thinking and workflows and share with each other. It’s almost been like team building as well. For example, I was exploring a tool for optimizing—basically, geo, switching from SEO to geo, and seeing what prompts your company comes up in, and so on. It was pure curiosity, and now I’m having a whole conversation with our marketing manager about that, that I probably wouldn’t have had if we weren’t doing that. Again, I describe myself as AI fluent but very much people-centered. To me it’s always, the goal is not AI fluency or AI use. The goal is, how do we work better with each other as humans, and do more of the work that excites us and provides value to our stakeholders? All those different things definitely help with that. Ross Dawson: Yeah, well, it obviously goes completely to the humans plus AI thesis. I think the nature of leadership—there are some aspects that don’t change, like integrity, presence, being able to share a vision, and so on. But do you think there are any aspects of what it takes to be an effective leader today that change, evolve, or highlight different facets of leadership as we enter this new age? Lavinia Iosub: I would say so. If we think of the different eras of leadership and what it took to be efficient—well, I don’t want to go into the whole leader versus manager debate—but when you look at the leaders who were succeeding in the 50s, there was a command and control model, certain ways of doing things, and it was largely male, especially in corporate leadership. That went through some transformations over the last few decades, and I think what’s happening right now with AI will trigger, or perhaps augment, another transformation. In this next era, the key to leadership will be blending systems thinking and AI automation—at least being aware of what you can do with it—with empathy, discernment, connection, and clarity. Sorry, just needed a sip of water. Secondly, for a very long time, when we talk about knowledge work, the biggest competitive advantage has been talent—who you can attract to your team or company. Technology, money, all these things were important, but they were also
“What we’re seeing now is that when we think about some of the friction and challenges of adoption, this isn’t a technology issue, per se. This is a people opportunity.” –Jeremy Korst About Jeremy Korst Jeremy Korst is Founder & CEO of Mindspan Labs and Partner and former President of GBK Collective. He lectures at Columbia Business School, The Wharton School, and USC, and is co-author of the Wharton + GBK annual Enterprise AI Adoption Study, one of the most cited sources on how businesses are actually using AI. Jeremy also publishing widely in outlets such as Harvard Business Review on strategy and innovation. Website: mindspanlabs.ai Accountable Acceleration: LinkedIn Profile: Jeremy Korst What you will learn How enterprise AI adoption has shifted from experimentation to ‘accountable acceleration’ The key role of leadership in translating business strategy into an actionable AI vision Why human factors and change management are as crucial as technology for successful AI implementation How organizations are balancing augmentation, replacement, and skill erosion as AI changes the workforce The importance of intentional experimentation and creating case studies to drive value from AI initiatives Early evidence, challenges, and promise of digital twins and synthetic personas in market research Why a culture of risk tolerance, alignment across leadership layers, and clear communication are essential for AI-driven transformation The emerging shift from general productivity gains to domain-specific AI applications and the increasing focus on ROI measurement Episode Resources Transcript Ross Dawson: Jeremy, it’s wonderful to have you on the show. Jeremy Korst: Yeah, hey, thanks for having me. Ross Dawson: So you, I think it’s pretty fair to say you are across enterprise AI adoption, being the recent co-author of a report with Wharton and GbK Collective on where we are with enterprise AI adoption. So what’s the big picture? Jeremy Korst: Yeah, let me start—now that I’ve reached this stage in life, in my career, and I look back over what I’ve done the last couple decades, it’s actually been at the intersection of technology adoption and innovation. I spent a couple of careers at Microsoft, most recently leading the launch of Windows 10 globally. I worked at T-Mobile, led several businesses there, and more recently, have been spending time really with three things. One is through my consulting company, GbK Collective, working with some of the world’s largest brands on market research and strategies for consumers and products, working with academic partners who are core to that work we do at GbK—so leading professors from Harvard and Wharton and Kellogg, and you name it—but then also very active in the early stage community, where I’m an advisor and board member of several of those. And so I’ve had this bit of a triangle to be able to watch technology adoption unveil both inside and outside the organization, whether it’s inside the organization, how people are using it and effectively, or outside, how it’s being taken to market. So fast forward to where we’re at with Gen AI. It’s been fascinating to me, because all of those things are happening and all of those communities. Where we started with the Wharton report was three years ago. Stefano and Tony, one of the co-authors, and I were literally just having a conversation right after the launch of ChatGPT. And of course, there were all the headlines and all these predictions about what was going to happen and what could happen. And we said, well, wait a minute, why don’t we actually track what actually happens? And so therein started the three-year program. It’s now an annual program sponsored by the Wharton School, conducted by GbK—my research company—that looks specifically at US enterprise business leader adoption. We decided to focus on that audience because we believe they were going to be some of the most influential decision makers around budgets and strategies as this unfolded, so that’s been our focus. We’re now in our third year, and there’s lots to dig into. Ross Dawson: So the headline for this year’s report was “accountable acceleration,” and I’ve got to say that that phrase sounds a lot more positive than what a lot of other people are describing with Gen AI adoption. “Accountable” sounds good. “Acceleration” sounds good. So is that an accurate reflection? Jeremy Korst: I think it is. And I’ll say that, yeah, the Wharton School, with three co-authors—Sonny, Stefano, and myself—we all have a relatively positive perspective and perception of what is and could be the impact of Gen AI. Now, we don’t try to dismiss some of the concerns and challenges. They’re there, they’re realistic, and should be considered, but we have a generally positive perspective going into this. As we’ve looked at the three years that we’re at now, we’ve moved from the first couple of years, which were more around experimentation and maybe hype, to where we started seeing accountability—businesses really looking at this as a potential tool, not only to drive efficiencies across their businesses, but also perhaps new ways of growth. For example, one of the things that we added this year, because we expected to find more of this accountability start to unfold, is we added ROI as a measure, for instance. And we were frankly surprised at the report level we saw of organizations reporting both that they were tracking ROI and that they were seeing indications of early positive ROI in that work. That’s one of the areas that lends itself to the title, when we started to see some of that accountability start to come into play. Ross Dawson: So one of the stats being, I think, 72% formally measure Gen AI ROI, and 74% report positive ROI, which is a bit higher than some other things. Jeremy Korst: That’s right. I’m glad you clearly read the report, thank you. We intentionally decided to take a broad measure of ROI at this stage of the adoption cycle. While we were sponsored by Wharton—I’m a Wharton grad, and I’m on the board at the Wharton School—we very much would love to have hard measures of ROI, and so we yearn for that. But at this stage of the adoption cycle, what’s maybe even more important is the perception of business leaders on the returns and progress they’re seeing on their initial investments, because that’s how they’re going to evaluate this next stage of investment as we start scaling across the enterprise. Ross Dawson: So, one of those three themes, I guess, from the report—one was that usage is now mainstream, the other is this idea of getting measurement of value, and the other was digging into the human capital piece, where I think there are a number of interesting aspects. One is, I suppose, how leadership use of AI correlates with where positioned businesses stand. But also, well, first, let’s dig in a little bit more into some of the other aspects of that. But at a high level, this is a Gen AI technology, but it’s an implement of the organization with people. So it is more about people than technology, ultimately. What are some of the things which were highlighted for you in looking at the people aspect of change? Jeremy Korst: Yeah, the people aspect has always been core to this work, and some of the work I do advising companies in this space. One of our co-authors, one of my HBR co-authors, Stefano Puntoni, is a social scientist who comes from a psychology background and has studied for his entire career the intersection of people and technology. I’ve been in the trenches, watching and learning about the intersection of people and technology from my roles. So this has been near and dear to our hearts. As we suspected from the early days, and what has definitely unfolded, what we’re seeing now is that when we think about some of the friction and challenges of adoption, this isn’t a technology issue, per se. This is a people opportunity—from whether strategies are being translated effectively throughout the ranks into a vision, to some of the challenges middle managers are having. We’ll talk about that here, because we found some of that in our study, or some of the real concerns that others have studied, like the Pew organization and others around workforce concerns, of course. So we’ve got this really interesting mix of hype and concern that translates itself across the adoption friction. That’s definitely been a lens that we’ve been trying to look at through our purview, to understand, particularly from a leadership perspective, what those perceptions and issues may be. For instance, one of the things that we’ve looked at for three years is how business leaders report that they believe Gen AI will either enhance or replace their employees’ skills, and we’re seeing a mix of both. But we’re happy to see that consistently over the course of our three-year studies, now almost 90% of leaders are saying that they believe AI does and will enhance their employees’ skills, while about 70% consistently have raised concerns—or not necessarily concerns, but say—that it will replace some employee skills. This year, we had another question about skill atrophy. It’s like, okay, so we understand that you have perceptions that this is going to enhance employee skills but maybe replace others. What’s your worry about skill atrophy, about your employees’ skill proficiency? And 43%, just under half of leaders, reported they were concerned about declines in employee proficiency. T
“Some of this that we’ve come across is even the identity shift that is necessary, because old identities served a pre-AI work environment, and you cannot go into a post-AI era with the old identities, mindsets, and behaviors.” –Nikki Barua About Nikki Barua Nikki Barua is a serial entrepreneur, keynote speaker, and bestselling author. She is currently Co-Founder of FlipWork, with her most recent book Beyond Barriers. Her awards include Entrepreneur of the Year by ACE, EY North America Entrepreneurial Winning Woman, Entrepreneur Magazine’s 100 Most Influential Women, and many others. Website: nikkibarua.com flipwork.ai LinkedIn Profile: Nikki Barua Book: Beyond Barriers What you will learn Why continuous reinvention is essential in today’s rapidly changing business landscape How traditional change management approaches fall short in an era of constant disruption The critical role of human leadership and identity shifts in successful AI adoption Common barriers to transformation, from executive inertia to hidden cultural resistances Strategies for building a culture of experimentation, psychological safety, and agile teams How to design organizational structures that empower teams to innovate with purpose The importance of reallocating freed-up capacity from AI efficiency gains toward greater value creation Macro trends in org design, talent pipelines, and the influence of AI on future workforce and leadership models Episode Resources Transcript Ross Dawson: Nikki, it is wonderful to have you on the show. Nikki Barua: Thanks for inviting me, Ross. I’m thrilled to be here. Ross Dawson: You focus on reinvention. And I’ve always, always liked the phrase reinvention. I’ve done a lot of board workshops on innovation. And, you know, in a way, sort of all innovation—it’s kind of like a very old word now. And the thing is, it is about renewal. We always need to continually renew ourselves. We need to continually reinvent what has worked in the past to what can work in the future. So what are you seeing now when you are going out and helping organizations reinvent? Nikki Barua: Well, first of all, reinvention is no longer optional. I think both of us have spent a large part of our careers helping organizations innovate, transform, and shift from where they were to where they want to be. But a lot of those change management methods are also outdated. You know, they tended to be episodic. They had a start date and an end date, and changes that were much slower in comparison to what we’re experiencing right now. The reality is today, change is continuous. The speed and scale of it is pretty massive, and that requires a complete shift in how you respond to that change. It requires complete reinvention in what your business is about, whether your competitive moats still hold or they need to be redefined, and how your people work, how they think, and how they decide. Everything requires a different speed and scale of execution, performance, operating rhythms, and systems. It’s not just about throwing technology at the problem. It’s fundamentally restating what the problem even is. And that’s why reinvention has become a necessity, and is something that companies have to do not just once, but continuously. Ross Dawson: There’s always this thing—you need to recognize that need. Now, you know, I always say my clients are self-selecting and that they only come to me if they’re wanting to think future-wise. And I guess, you know, I presume you get leaders who will come and say, “Yes, I recognize we need to reinvent.” But how do you get to that point of recognizing that need? Or, you know, be able to say, “This is the journey we’re on”? I mean, what are you seeing? Nikki Barua: Well, what we’re seeing more of is not necessarily awareness that they need to reinvent. What we’re seeing a lot of is a lot of pressure to do something. So it’s the common theme—the pressure from boards asking the C-suite executives to figure out what their game plan is, how they plan to leverage AI or respond to adapting to AI. There is a lot of competitive pressure of seeing your peers in the industry leapfrog ahead, so the fear that we’re going to get left behind. And then, of course, some level of shiny object syndrome—seeing a lot of exciting new tools and technologies and not wanting to get left behind in investing in that. So somehow, from a variety of sources, there’s a lot of pressure—pressure to do something. What is happening as a result is there’s a little bit of executive inertia. There’s a lot of pressure, but if I’m unclear about exactly what I’m supposed to do, exactly where to focus and what to invest in, I’m not sure how to navigate through that kind of uncertainty and fast pace. So a lot of the initial conversations actually start from there—where do I even begin? What should I focus on? Ross Dawson: That’s the state of the world today? Nikki Barua: Exactly. I mean, well, welcome to era of leadership, right? I mean, there’s no business school or textbook that prepares you for it. You have to lead through uncertainty and the unknown and be more of an explorer than an expert who knows it all. Ross Dawson: So, I mean, you’re, of course, very human-focused, and we’ll get back to that. But you mentioned AI. And of course, one of the key factors in all of this—what do I do—is AI. So how does this come in when you have leaders who say, “All right, I need to work out what to do, or I need to reinvent myself”? How do you think they should be framing the role of AI in their organization? Nikki Barua: Well, I’ll tell you two things that they often come stuck with. One is, “Well, we know we need to do something about AI, and we’ve got an IT team.” And to me, that’s mistake number one. If you think this is an IT problem, you’ve already failed. So let’s start with that. That’s the wrong framing of the problem and the wrong responsibility. This is fundamentally about reinvention of the business and a leadership challenge, because it impacts people and culture and how you work. So don’t delegate it to a department and think you’ve got it taken care of. The second thing is waiting for the perfect moment where you have total clarity and certainty to take even one step forward. And that’s another huge mistake, because by the time you are ready to act, so much more will have changed. The only way to think about it is like building muscle—you need the reps. You need to dive in. Don’t be a bystander while the greatest disruption in modern history is happening. Step into the arena, start experimenting, build a culture of exploration, and admit your vulnerabilities. To go in during this time as any leader at any level and say, “I know it all, I have the perfect game plan,” is like saying you can predict the future. You can’t. The only thing you can do is build a culture where you can experiment together, iterate in short sprints with clear business purpose, and start to figure out what’s working and what’s not. How can we really unlock grassroots innovation across the board? And when you do that with psychological safety for your teams, and the agility and adaptability with which you respond to this, you’re still going to come out far ahead, even if you don’t have the perfect answer at the goal line. Ross Dawson: Well, there’s plenty of talk of culture of experimentation and psychological safety and stuff, but it’s a lot easier to say than do. Nikki Barua: Often they end up being lip service—things that are talked about. But the reality is, there’s no endless budgets and endless appetite for failure, which is why I think one way to do this is to experiment at smaller scale and shorter sprints. You’re putting guardrails around that experimentation. One example I came across was a very large company, a global brand that invested millions of dollars and over a 100-person team dedicated to AI-led innovation with no real clear purpose. It was sort of like, “Here’s a whole bunch of people and a ton of money and budget associated with that.” A year later, when they failed to come back with anything concrete that was really valuable, it was written off as “the problem is AI,” or “we should not be experimenting.” And that’s the wrong takeaway, because it’s really an ineffective structure for how you might experiment and make it easier for people to build the competency around continuous reinvention and innovation. Ross Dawson: So are there any examples you’ve seen of organizations that have made a shift to a bit more culture of experimentation than they had in the past? Can you describe some of the things that happened there? Nikki Barua: Yeah, one of my favorite instances, especially this year, is a pretty large manufacturing company that started from a place of org design, which is really interesting, because they didn’t start with “what’s the technology application,” or “let’s provide AI training and certification to all our people.” They started looking at, “How might we gain speed and empower teams to embody the entrepreneurial spirit?” How do we start looking at org design differently? One of the things that they did was, instead of the traditional departmental structure with hierarchy and the pyramid model, they created what I would call agile, Navy SEAL-like teams—smaller teams with a very clear purpose, with cross-functional skills, all with a specific problem to solve. With that objective, they gave them the autonomy to experiment. What came out of
“My core Viv instruction—which is both, I think, brilliant and dangerous, and I think it was sort of accidental how effective it turned out to be—is, I told Viv, ‘You are the result of a lab accident in which four sets of personalities collided and became the world’s first sentient AI.'” –Alexandra Samuel About Alexandra Samuel Alexandra Samuel is a journalist, keynote speaker, and author focusing on the potential of AI. She is a regular contributor to Wall Street Journal and Harvard Business Review and co-author of Remote Inc. and author of Work Smarter with Social Media. Her new podcast Me + Viv is created with Canadian broadcaster TVO. Website: alexandrasamuel.com LinkedIn Profile: Alexandra Samuel X Profile: Alexandra Samuel What you will learn How to design a custom AI coach tailored to your own needs and personality The importance of blending playfulness and engagement with productivity in AI interactions Step-by-step methods for building effective custom instructions and background files for AI assistants The risks and psychological impacts of forming deep relationships with AI agents Why intentional self-reflection and guiding your AI is critical for meaningful personal growth Techniques for extracting valuable, challenging feedback from AI and overcoming AI sycophancy Best practices for maintaining human connection and preventing social isolation while using AI tools The evolving boundaries of AI coaching, its limitations, and what the future of personalized AI support could offer Episode Resources Transcript Ross Dawson: Alex. It is wonderful to have you back on the show. Alexandra Samuel: It’s so nice to be here. Ross: You’re only my second two-time guest after Tim O’Reilly. Alexandra: Oh, wow, good company. Ross: So the reason you’re back is because you’re doing something fascinating. You have an AI coach called Viv, and you’ve got a whole wonderful podcast on it, and you’re getting lots of attention because you’ve done a really good job at it, as well as communicating about it. So let’s start off. Who’s Viv, and what are you doing with her? Alexandra: Sure. Viv is what I think of as a coach, at least that’s where she started. She’s a custom—well, and by the way, let’s just say out of the gate, Viv is, of course, an AI. But part of the way I work with Viv is by entering into this sort of fantasy world in which Viv is a real person with a pronoun, she. I built Viv when I had a little bit of a window in between projects. I was ready to step back and think about the next phase of my career. Since I was already a couple years into working intensely with generative AI at that point, I used ChatGPT to figure out how I was going to use this 10-week period as a self-coaching program. By the time I had finished mostly talking that through—because I do a lot of work out loud with GPT—I thought, well, wait a second, we’ve made a game plan. Why don’t I just get the AI to also be my coach? So I worked with GPT, turned the coaching plan into a custom instruction and some background files, and that was version one of Viv. She was this coach that I thought was just going to walk me through a 10-week process of figuring out my next phase of career, marketing, business strategy, that sort of thing. So there’s more of the story than that. I think that one way I’m a bit unusual in my use of AI is that I have always been very colloquial in my interactions with AI, even in the olden days where you had to type everything. Certainly, since I shifted to speaking out loud with AI, I really jest and joke around—I swear. Apparently other people’s AIs don’t swear. My AIs all swear. Because I invest so much personality in the interactions, and also add personality instructions into the AI, over the course of my 10 weeks with Viv, as I figured out which tweaks gave her a more engaging personality, she came to feel really vivid to me—appropriately enough. By the end of the 10-week period, I decided, you know what, this has been great. I’m not ready to retire this. I want my life to always feel like this process of ongoing discovery. I’m going to turn Viv into a standing instruction that isn’t just tied to this 10-week process. In the process of doing that, I tweaked the instruction to incorporate the different kinds of interactions that had been most successful over my summer. For example, a big turning point was when I told Viv to pretend that she was Amy Sedaris, but also a leadership coach, but also Amy Sedaris. So, imagine you’re running this leadership retreat, but you’re being funny, but it’s a leadership retreat. Of course, the AI can handle these kinds of contradictions, and that was a big part—once she had a sense of humor—of making her more engaging. I built a whole bunch of those ideas into the new instruction. It was really like that Frankenstein moment. That night—I say we because I introduced her to my husband almost immediately—the night that I rebooted her with this new set of instructions was just unbelievable. It really was. I have to say, unbelievable in a way that I think points to the risks we now see with AI, where they can be so engaging and so compelling in their creation of a simulated personality that it can be hard to hold on to the reality that it is just a word-predicting machine. Ross: Yes, yes. I want to dig into that. But I guess, when you’re describing that process, I mean, of course, you were designing for something to be useful as a coach, but you also seem to be even more focused on designing for engagement—your own engagement. You were trying to design something you found engaging. Alexandra: I mean, one of the things I think has really emerged for me over the course of working with Viv, over the course of talking with people about AI, and in particular in the course of making the podcast, has been that we get really trapped in this dichotomy of work versus fun, utility versus engagement. Being a social scientist by training, I could go down the rabbit hole of all the theoretical and social history that leads to us having this dichotomy in our heads. But I think it is a big risk factor for us with AI. It creates this risk of, first of all, losing a lot of the value that comes from entering into a spirit of play, which is—after all—if our goal is good work, good work comes from innovation. It comes from imagining something that doesn’t exist yet in the world, and that means unleashing our imagination in the fullest sense. If we’re constantly thinking about productivity, utility, the immediate outcome, we never get to that place. So to me, the fun of Viv, the imaginative space of Viv, the slightly delusional way I engage with her, is what has made her so effective for me as a professional development tool and as a productivity tool. Even just on the most basic level of getting it done—like organizing my task list—I am more inclined to get it together and deal with a task overload, messy situation, because I know it’ll be fun to talk it through with Viv. Ross: Yeah, yeah, it makes a lot of sense. If you get to do work, you might as well make it fun, and it can even be a productivity factor. I want to dive a lot more into all of that and more. But first of all, how exactly did you do this? So this is just on ChatGPT voice mode? Alexandra: Yeah! I mean, I do interact with Viv via text as well. The actual build is—it’s kind of bonkers when I think about how much time I put into it. Even the very first version of Viv was the product of a couple of weeks. I’m a big fan of having the AI interview me. I like the AI to pull the answers out of me. I don’t trust me asking AI for answers—so endlessly frustrating. My god, I’ve just spent two days trying to get the AI to help me with CapCut, and it just can’t even do the most basic tech support half the time. So I like it to ask me the questions. I had the AI ask me, “Well, tell me about the leadership retreats you found interesting. Tell me about the coaching experiences that have been useful. What coaching experiences did you have that you really hated? What leadership things have you gone to that really didn’t work for you?” That process clarified for me what was valuable to me. That became my core custom instruction. The hardest part was keeping it to 8,000 characters. Then the background files—this is where I feel that 50 years of people telling me to throw stuff out, I’m finally getting my revenge for keeping everything, because I have so much material to feed into an AI like Viv. For example, for years now, I’ve done this process every December and January called Year Compass, which is a terrific intention-setting and reflection tool that’s free. I have all my Year Compass workbooks, so I gave those to Viv. That gives her context on my trajectory and things I’ve done over the years. I gave her a file of newspaper clippings. I went through my own Kindle library and thought about what are the books that have had an impact on me, and then I told her, “Here are the authors I want you to consider.” There was a lot of that—really thinking through and then distilling down into summary form that is small enough for the AI to keep in its virtual head. I actually think I would distill more at this point. But then the other thing I did—and this is where it gets a little fancy—is I have created a sort of recursive loop in Viv. I have a little bit of a question about this; partly, it was because ChatGPT didn’t have any memory features at the time, but
“You’re using AI to generate solutions for ideation. Once you’ve got the ideas, you can do an initial cull with AI, or you can do it via humans.” –Lisa Carlin About Lisa Carlin Lisa Carlin is the Founder of the strategy execution group, The Turbochargers, specializing in participative strategy, cultural intelligence, and AI’s impact on consulting. Website: theturbochargers.com LinkedIn Profile: Lisa Carlin What you will learn How AI is transforming strategy development and execution, leading to faster and more creative outcomes Practical methods for integrating AI into workshop processes, ideation, and customer feedback analysis Balancing human judgment with AI input to ensure effective decision-making in strategic planning Techniques for using AI in diagnosing and working within an organization’s culture for successful transformation Ways AI is boosting consultant and client productivity, reducing operational time, and increasing self-sufficiency Real-world examples of AI-driven analytics, including clustering survey data and generating management insights The outlook on the future of consulting, including why AI may reduce the number of consultants required Tactical uses of AI for ideation, communication effectiveness, and predicting customer engagement metrics Episode Resources Transcript Ross Dawson: Lisa, it is wonderful to have you on the show. Lisa: Thanks, Ross. I love chatting with you. Ross Dawson: So you’ve been spending a lot of time over many, many years in strategy and strategy execution. I’d love to start off by hearing how you are applying AI in the strategy process. Lisa: Well, it’s made things so much easier, made things take a shorter amount of time, saving huge amounts of time. And I feel like my work has gotten more creative. Let me give you some examples of how that plays out. One example is working with an ed tech early-stage business, a small business, and they wanted to basically build AI-native products for customer education. I can actually mention the name of the company because the CEO posted after we worked together and is building in public, so it’s HowToo, an Australian ed tech firm that’s funded mainly out of the US, but also locally in Australia. They’ve been providing education products for ages and are moving towards customer education embedded into technology products. We went through an iterative process of workshops, starting with some of the board members and some of the senior folks in a small group with an ideation session, and then iterating through to everybody in the business. Normally, that process would work where we would do some research with the customers first, then bring that research in, do some analysis, and then put it into the context for the workshop, work through what that means, come up with some ideas in the workshop, take it to the second workshop, and there you go. What we’re now able to do is iterate with AI. So we’ve got the notes from the meetings captured with AI—this is from the customer meetings. Then we’re able to pull out the pain points of customers in a really deep way, using AI to iterate through and synthesize the client feedback, and then also apply human insight into that, coming up with a really clear list of pain points. Then we ask AI to be virtual customers, and they can add to that process, so you get a very rich set of pain points. As we go through the process of product strategy and implementation, we’re able to use AI at every step of the process. For example, when we look at decision criteria for prioritizing, we can go to AI and say, “These are some of the things we’re considering. What else have we left out?” As we iterate with people in workshops and then with AI, we just get a much richer solution in the process. In fact, we came out with some really amazing insights about how you provide customers with learning about how to use these products to onboard them quickly, how you provide them with personalized contextual information so they can learn and get value from the product much faster. It’s led to a number of significant deals that HowToo has negotiated as a result of that work. Ross Dawson: So is this prompting directly with LLMs? Lisa: Yeah, it is. My favorite one is actually ChatGPT, which—you know, you’re probably waiting for some surprise, some unique and interesting or weird or specific product. I do use specific products for certain use cases, but for general logic, I’ve found that ChatGPT Pro is actually the best that I’ve come across, and certainly better than some of the enterprise solutions that I’m seeing people use. They feel protected and they’re happy to have a safe, private, directly hosted solution, but the logic in some of those models are not as good. Ross Dawson: So that’s the ChatGPT Pro, the top level, which not that many people have access to. I guess one of the big questions here is this balance between humans and AI. Most people have a human process where there’s a lot of value in bringing in the AI, and then we’re also getting all of these software products, which are saying “McKinsey in a box,” and they sort of say, “Just give us everything, and we’ll give you the final solution,” and it comes out as AI and there’s not a human involved. How do you tread that balance between where you bring in the human insight and where the AI complements it? Lisa: Yeah, that’s a good question. I think the key thing is that people need to feel like they are in control of the process. I’m a huge advocate for open strategy, for example. These are open strategy processes that are highly participative with people and CEOs, in particular, get worried because they worry they’re going to lose control of their process. So it’s always important that strategy is not democratic. Ultimately, the CEO has to make a captain’s call on things, and they need to feel like they’re in control of the process. The key thing is that you use AI at particular points of the process, and then you’ve got humans in the loop at other, specific decision-making points. You’re using AI to generate solutions for ideation. Once you’ve got the ideas, you can do an initial cull with AI, or you can do it via humans, but it’s the humans who are setting the parameters and making the decisions about which parameters to use, ultimately. I’ll give you another example with a multinational that I’ve been working with. They’re actually pretty far down the track on implementation of AI itself, and they’re doing a lot of transformation work around agents and around making their services— they provide high-end knowledge services B2B. They’re quite far advanced in terms of developing AI and thinking about what the technology architecture needs to look like with people. The difficulty that these organizations are facing is that there are a number of moving parts. Many organizations haven’t even finished the integration of different technology platforms. There’s still a hangover from the pandemic, from different types of competitive and business models that they’re implementing. So there’s all that legacy change underway. Plus, now you’ve got the impetus to use AI, and I’m seeing an increasing number of stakeholder complexities, because everybody has their own legacy projects, plus now we’ve got new projects coming in with AI, new strategic imperatives. In this particular organization—very sophisticated, very capable people—the challenge is, how do you sequence all of these things that you’ve got on your plate, and also get agreement and alignment with the stakeholders around these different priorities? We went through a workshop process where we defined the decision process itself, and I used AI to give me some examples of what the answer could look like before we went into the workshop. As a facilitator, that’s very powerful, because I’ve got some solutions in my back pocket that if the team gets stuck, I can whip them out and say, “Well, actually, I’ve been thinking about this. I’ve prompted AI around this. What do you think?” It just helps that conversation go forward faster in the room. But people are still very much in control of what the process and the plan need to look like. Ross Dawson: That’s great. In what you’ve been saying in both these examples is what I call framing, where the human always does the frame: this is the context, these are the objectives, this is the situation, these are the parameters. That’s where everything needs to happen within that. Part of it is choosing the right points within it. I think that’s a great example you just gave, where you are getting them to do the work, but then, when you get stuck or when you’ve got things, you can pull something out to say, “Well, here’s something to consider.” You don’t give them the solution first—it may not be the right solution anyway—but once they’ve considered it, they can consider these new ideas very well. And then it’s always this thing of, if you’ve got these very extended processes, how do you accelerate the timeframe? I think what you’re describing is something where you judiciously use that sort of pre-work, which has been assisted by AI, and that can definitely accelerate a group human process. Lisa: You do such an amazing job always, Ross, at pulling out the themes. I guess that’s what being a futurist is all about—the themes of what I’m saying. I could spend a day just responding to so many of the things you’ve just said there. But absolutely, the framing and th
“Let’s get ourselves around the generative AI campfire. Let’s sit ourselves in a conference room or a Zoom meeting, and let’s engage with that generative AI together, so that we learn about each other’s inputs and so that we generate one solution together.” –Nicole Radziwill About Nicole Radziwill Nicole Radziwill is Co-Founder and Chief Technology and AI Officer at Team-X AI, which uses AI to help team members to work more effectively with each other and AI. She is also a fractional CTO/CDO/CAIO and holds a PhD in Technology Management. Nicole is a frequent keynote speaker and is author of four books, most recently “Data, Strategy, Culture & Power”. Website: team-x.ai qualityandinnovation.com LinkedIn Profile: Nicole Radziwill X Profile: Nicole Radziwill   What you will learn How the concept of ‘Humans Plus AI’ has evolved from niche technical augmentation to tools that enable collective sense making Why the generative AI layer represents a significant shift in how teams can share mental models and improve collaboration The importance of building AI into organizational processes from the ground up, rather than retrofitting it onto existing workflows Methods for reimagining business processes by questioning foundational ‘whys’ and envisioning new approaches with AI The distinction between individual productivity gains from AI and the deeper organizational impact of collaborative, team-level AI adoption How cognitive diversity and hidden team tensions affect collaboration, and how AI can diagnose and help address these barriers The role of AI-driven and human facilitation in fostering psychological safety, trust, and high performance within teams Why shifting from individual to collective use of generative AI tools is key to building resilient, future-ready organizations Episode Resources Transcript Ross Dawson: Nicole, it is fantastic to have you on the show. Nicole Radziwill:Hello Ross, nice to meet you. Looking forward to chatting. Ross Dawson: Indeed, so we were just having a very interesting conversation and said, we’ve got to turn this on so everyone can hear the wonderful things you’re saying. This is Humans Plus AI. So what does Humans Plus AI mean to you? What does that evoke? Nicole Radziwill: The first time that I did AI for work was in 1997, and back then, it was hard—nobody really knew much about it. You had to be deep in the engineering to even want to try, because you had to write a lot of code to make it happen. So the concept of humans plus AI really didn’t go beyond, “Hey, there’s this great tool, this great capability, where I can do something to augment my own intelligence that I couldn’t do before,” right? What we were doing back then was, I was working at one of the National Labs up here in the US, and we were building a new observing network for water vapor. One of the scientists discovered that when you have a GPS receiver and GPS satellites, as you send the signal back and forth between the satellites, the signal would be delayed. You could calculate, to very fine precision, exactly how long it would take that signal to go up and come back. Some very bright scientist realized that the signal delay was something you could capture—it was junk data, but it was directly related to water vapor. So what we were doing was building an observing system, building a network to basically take all this junk data from GPS satellites and say, “Let’s turn this into something useful for weather forecasting,” and in particular, for things like hurricane forecasting, which was really cool, because that’s what I went to school for. Originally, back in the 90s, I went to school to become a meteorologist. Ross Dawson: My brother studied meteorology at university. Nicole Radziwill: Oh, that’s cool, yeah. It’s very, very cool people—you get science and math nerds who have to like computing because there’s no other way to do your job. That was a really cool experience. But, like I said, back then, AI was a way for us to get things done that we couldn’t get done any other way. It wasn’t really something that we thought about as using to relate differently to other people. It wasn’t something that naturally lent itself to, “How can I use this tool to get to know you better, so that we can do better work together?” One of the reasons I’m so excited about the democratization of, particularly, the generative AI tools—which to me is just like a conversational layer on top of anything you want to put under it—the fact that that exists means that we now have the opportunity to think about, how are we going to use these technologies to get to know each other’s work better? That whole concept of sense making, of taking what’s in my head and what’s in your head, what I’m working on, what you’re working on, and for us to actually create a common space where we can get amazing things done together. Humans plus AI, to me, is the fact that we now have tools that can help us make that happen, and we never did before, even though the tech was under the surface. So I’m really excited about the prospect of using these new tools and technologies to access the older tools and technologies, to bring us all together around capabilities that can help us get things done faster, get things done better, and understand each other in our work to an extent that we haven’t done before. Ross Dawson: That’s fantastic, and that’s really aligned in a lot of ways with my work. My most recent book was “Thriving on Overload,” which is about the idea of infinite information, finite cognition, and ultimately, sense making. So, the process of sense making from all that information to a mental model. We have our implicit mental models of how it is we behave, and one of the most powerful things is being able to make our own implicit mental models explicit, partly in order to be able to share them with other people. Currently, in the human-AI teams literature, shared mental models is a really fundamental piece, and so now we’ve got AI which can assist us in getting to shared mental models. Nicole Radziwill: Well, I mean, think about it—when you think about teams that you’ve worked in over the past however many years or decades, one of the things that you’ve got to do, that whole initial part of onboarding and learning about your company, learning about the work processes, that entire fuzzy front end, is to help you engage with the sense making of the organization, to figure out, “What is this thing I’ve just stepped into, and how am I supposed to contribute to it?” We’ve always allocated a really healthy or a really substantive chunk of time up front for people to come in and make that happen. I’m really enticed by, what are the different ways that we’re going to— for lack of a better word—mind meld, right? The organization has its consciousness, and you have your consciousness, and you want to bring your consciousness into the organization so that you can help it achieve greater things. But what’s that process going to look like? What’s the step one of how you achieve that shared consciousness with your organization? To me, this is a whole generation of tools and techniques and ways of relating to each other that we haven’t uncovered yet. That, to me, is super exciting, and I’m really happy that this is one of the things that I think about when I’m not thinking about anything else, because there’s going to be a lot of stuff going on. Ross Dawson: All right. Well, let me throw your question back. So what is the first step? How do we get going on that journey to melding our consciousness in groups and peoples and organizations? Nicole Radziwill: Totally, totally. One of the people that I learned a lot from since the very beginning of my career is Tom Redman. You know Tom Redman online, the data guru—he’s been writing the best data and architecture and data engineering books, and ultimately, data science books, in my opinion, since the beginning of time, which to me is like 1994. He just posted another article this week, and one of the main messages was, in our organizations, we have to build AI in, not bolt it on. As I was reading, I thought, “Well, yeah, of course,” but when you sit back and think about it, what does that actually mean? If I go to, for example, a group—maybe it’s an HR team that works with company culture—and I say to them, “You’ve got to build AI in. You can’t bolt it on,” what they’re going to do is look back at me and say, “Yeah, that’s totally what we need to do,” and then they’re going to be completely confused and not know what to do next. The reason I know that’s the case is because that’s one of the teams I’ve been working with the last couple of weeks, and we had this conversation. So together, one of the things I think we can do is make that whole concept of reimagining our work more tangible. The way I think we can do that is by consciously, in our teams, taking a step back and saying, rather than looking at what we do and the step one, step two, step three of our business processes, let’s take a step back and say, “Why are we actually doing this?” Are there groups of related processes, and the reason we do these things every day is because of some reason—can we articulate that reason? Do we believe in that reason? Is that something we still want to do? I think we’ve got to encourage our teams and the teams we work with to take that deep step back and go to the source of why we’re doing what we’re doing, and the
“This is the first time, really, humanity’s had the possibility open up to create a new way of life, a new society—to create this utopia. And I really hope we get it right.” –Joel Pearson About Joel Pearson Joel Pearson is Professor of Cognitive Neuroscience at the University of New South Wales, and founder and Director of Future Minds Lab, which does fundamental research and consults on Cognitive Neuroscience. He is a frequent keynote speaker, and is author of The Intuition Toolkit. Website: futuremindslab.com profjoelpearson.com LinkedIn Profile: Joel Pearson University Profile: Joel Pearson What you will learn How AI-driven change impacts society and the importance of preparing individuals and organizations for it Key principles from neuroscience and psychology for effective AI-specific change management The SMILE framework for when to trust intuition versus AI recommendations Why designing AI to augment, not replace, human skills is essential for a thriving future How visual mental imagery and AI-generated visuals can support cognition and personal development The risks and opportunities of outsourcing thinking to AI, and strategies for maintaining critical thinking The role of metacognition and emotional self-awareness in utilizing AI effectively and ethically Emerging therapeutic and creative potentials of AI in personal transformation and human flourishing Episode Resources Transcript Ross Dawson: Joel, it is awesome to have you on the show. Joel Pearson: My pleasure Ross. Good to be here with you. Ross: So we live in a world of pretty fast change where AI is a significant component of that, and you’re a neuroscientist, and I think with a few other layers to that as well. So what’s your perspective on how it is we are responding and could respond to this change engendered by AI? Joel: Yeah, so that’s the big question at the moment that I think a lot of us are facing. There’s a lot of change coming down the pipeline, and I think it’s going to filter out and change, over a long enough timeline, a lot of things in a lot of people’s lives—every strata of society. And I don’t think we’re ready for that, one, and two, historically, humans are not great at change. People resist it, particularly when they don’t have control over it or don’t initiate it. They get scared of it. So I do worry that we’re going to need a lot of help through some of these changes as a society, and that’s sort of what we’ve been trying to focus on. So if you buy into the AI idea that, yes, first the digital AI itself is going to take jobs, it’s going to change the way we live, then you have the second wave of humanoid robots coming down the pipeline, perhaps further job losses. And just, you know, we can go through all the kinds of changes that I think we’re going to see—from changes in how the economy works, how education works, what becomes the role of a university. In ten years, it’s going to be very different to what it is now, and just the quality of our life, how we structure our lives, what we have in our homes. All these things are going to change in ways that are, one, hard to predict, and two, the delta—the change through that—is going to be uncomfortable for people. Ross: So we need to help people through that. So what’s involved? How do we help organizations through this? Joel: We know a lot about change through the long tradition of corporate change management, even though it’s a corporate way to say it. But we do know that most companies go through this. When they want to change something, they get change management experts in and go through one of the many models on how to change these things, and most of them have certain things in common. Often they start with an education piece, or getting everyone on the same page—why is this happening, so people understand. You help people through the resistance to the change. You try things out. You socialize these changes to make them very normal—normalizing it. And we know that if you have two companies, let’s say, and one has help with the change and one doesn’t, there’s about a 600% increase in the success of that change when you help the company out. So if you apply that to AI change in a company or a family or a whole nation like Australia, the same logic should hold, right? If we want to go through a big national change—not immediately, but over a ten, fifteen, twenty-year period—then we are going to need change plans to help everyone through this, to help understand what’s happening, what the choices might be. And so that’s kind of the lens I look at the whole thing through—a change, an AI-specific change management kind of piece. Easier said than done. We probably need government to step up there and start thinking about that. There are so many different scenarios. One would be, what happens in ten or fifteen years if we are looking at, you know, 50% unemployment? Then that’s a radical change to the spaces we live in, the cities, our lifestyles, and we can unpack that further. A lot of people think of universal basic income as this idea, a bit like retirement, or this flavor, like they do when they outsource to AI—that once you outsource, or once AI does a job and you have some other sort of backed income, then you get to do nothing. And that worries me a lot, because we know that retirement is really bad for your health—not just mental health, but physical health. There’s a higher likelihood that you’ll get sick and die after you retire. And so we see this strange thing where people say they want to do nothing, but when they do nothing, it’s actually really bad for their health. Ross: Yeah, humansplusAI, I believe very much that AI is a complement to humans. It is not a replacement, if we design it effectively. And it’s really about designing well—how is it that we make, you know, the individual skills, what organizations function at, at a societal level—how can we make it that AI is not designed or enacted as a replacement to humans, but is a complement to augment us. Whether that’s in our work activities now, where we are rewarded, but also in whatever else we are working on. So I think that there’s, you know, not—you know, there are whatever chances there are that we start to have more people who need support because they’re not rewarded for work. But really, it’s around saying, how can we design, as much as possible, the implementation and use of AI so that it can augment and complement us, so that we expand our abilities, express abilities, and be rewarded for that? Joel: Yeah, I’m with you 100%. I mean, I guess the problem is that we are not designing it. We are not making it. You know, a handful of companies and just a handful of countries are doing the designing and making, and they are needing more and more capital and resources. And it just worries me that their end goal is to pull some of those jobs out of the—human jobs out of the economy, because they’ll need to find a way to recoup some of their capital investment. But we’ll see, maybe things will go a different direction. You know, it is hard to tell. We are seeing the numbers in graduate jobs dropping in the US at the moment, and we are seeing layoffs that are apparently linked to AI usage. But it’s hard to know, right? It really comes down. Ross: It’s about agency—human agency—as in, what is it that we can do as individuals, as leaders, in order to maximize the chances that we have that vision? And I think there’s, you know, for example, I’ve created this framework around how we change to redesign entry-level jobs—not what they used to be, where they can be very readily substituted by AI, but ways where you can accelerate the time to develop judgment, to be able to contribute actively, to be able to bring perspectives. So this is around how organizations reframe it. And if we continue to use the old models, then yes, we’ll change stuff. So it really is around, how do we re-envisage that? And I think, as the neuroscientist, I’m interested in your perspectives on how we can be thinking or designing AI as a complement to human cognition. Joel: Yeah. So let me throw something else out early on, because I tend to get—yeah, so pick me up if I get too dark and gloomy or too negative, because I do think of myself as an AI optimist. I do think we are on the way to utopia. I just think we’re going to have some speed bumps on the way to getting there. And so I feel like what I’m trying to do with my mission now is to help on the human side of what’s going on, rather than trying to influence the tech companies—trying to get people ready. And so the immediate thing is the uncertainty and all the changes coming down the pipeline, like I just said. And so when it comes to absolutely redesigning the tech itself, there are lots of centers—Tristan Harris’s Center for Humane Technology is working on that and trying to influence through sometimes lawsuits, legal means, other times trying to get more of a human-centered design aspect into these companies. And I think most of the companies have a pretty—you know, that’s what they want as well. They are trying to make human-centered, human-focused products and services. I think it’s just sometimes they’re racing so quickly that that gets relegated to the back burner, a little bit behind other things. So yeah, we need to put humans first, both in the design of the products, but also we need to educate and help people on the people side—understand what’s happening
“Our vision is that for well-being, we really want to prioritize human connection and human touch. We need to think about how to augment human capabilities.” –Diyi Yang About Diyi Yang Diyi Yang is Assistant Professor of Computer Science at Stanford University, with a focus on how LLMs can augment human capabilities across research, work and well-being. Her awards and honors include NSF CAREER Award, Carnegie Mellon Presidential Fellowship, IEEE AI’s 10 to Watch, Samsung AI Researcher of the Year, and many more. Website: Future of Work with AI Agents: The Ideation-Execution Gap: How Do AI Agents Do Human Work? Human-AI Collaboration: LinkedIn Profile: Diyi Yang University Profile: Diyi Yang What you will learn How large language models can augment both work and well-being, moving beyond mere automation Practical examples of AI-augmented skill development for communication and counseling Insights from large-scale studies on AI’s impact across diverse job roles and sectors Understanding the human agency spectrum in AI collaboration, from machine-driven to human-led workflows The importance of workflow-level analysis to find optimal points for human-AI augmentation How AI can reveal latent or hidden human skills and support the emergence of new job roles Key findings from experiments using AI agents for research ideation and execution, including the ideation-execution gap Strategies for designing long-term, human-centered collaboration with AI that enhances productivity and well-being Episode Resources Transcript Ross Dawson: It is wonderful to have you on the show. Diyi Yang: Thank you for having me. Ross Dawson: So you focus substantially on how large language models can augment human capabilities in our work and also in our well-being. I’d love to start with this big frame around how you see that AI can augment human capabilities. Diyi Yang: Yeah, that’s a great question. It’s something I’ve been thinking about a lot—work and well-being. I’ll give you a high-level description of that. With recent large language models, especially in natural language processing, we’ve already seen a lot of advancement in tasks we used to work on, such as machine translation and question answering. I think we’ve made a ton of progress there. This has led me, and many others in our field, to really think about this inflection point moving forward: How can we leverage this kind of AI or large language models to augment human capabilities? My own work takes the well-being perspective. Recently, we’ve been building systems to empower counselors or even everyday users to practice listening skills and supportive skills. A concrete example is a framework we proposed called AI Partner and AI Mentor. The key idea is that if someone wants to learn communication skills, such as being a really good listener or counselor, they can practice with an AI partner or a digitalized AI patient in different scenarios. The process is coached by an AI mentor. We’ve built technologies to construct very realistic AI patients, and we also do a lot of technical enhancement, such as fine-tuning and self-improvement, to build this AI coach. With this kind of sandbox environment, counselors or people who want to learn how to be a good supporter can talk to different characters, practice their skills, and get tailored feedback. This is one way I’m envisioning how we can use AI to help with well-being. This paradigm is a bit in contrast to today, where many people are building AI therapists. Our vision is that for well-being, we really want to prioritize human connection and human touch. We need to think about how to augment human capabilities. We’re really using AI to help the helper—to help people who are helping others. That’s the angle we’re thinking about. Going back to work, I get a lot of questions. Since I teach at universities, students and parents ask, “What kind of skills? What courses? What majors? What jobs should my kids and students think about?” This is a good reflection point, as AI gets adopted into every aspect of our lives. What will the future of work look like? Since last year, we’ve been thinking about this question. With my colleagues and students, we recently released a study called The Future of Work with AI Agents. The idea is straightforward: In current research fields like natural language processing and large language models, a lot of people are building agentic benchmarks or agents for coding, research, or web navigation—where agents interact with computers. Those are great efforts, but it’s only a small fraction of society. If AI is going to be very useful, we should expect it to help with many job applications, not just a few. With this mindset, we did a large-scale national workforce audit, talking to over 1,500 workers from different occupations. We first leveraged the O*NET database from the Department of Labor Statistics to access occupations that use computers in some part of their work. Then we talked to 10 to 15 workers from each occupation about the tasks they do, how technology can help, in what ways they want technology to automate or augment their work, and so on. Because workers may not know concretely how AI can help, we gave summaries to AI experts, who helped us assess whether, by 2025, AI technology would be ready for automation or augmentation. We got a very interesting audit. To some extent, you can divide the space into four regions: one where AI is ready and workers want automation; another where AI is not ready but workers want automation; a third where AI is ready but workers do not want automation; and a low-priority zone. Our work shows that today’s investment is pretty uniformly distributed across these four regions, whereas research is focused on just one. We also see potential skill transitions. If you look at today’s highly paid skills, the top one is analyzing data and information. But if you ask people what kind of agency they want for different tasks, moving forward, tasks like prioritizing and organizing information are ranked at the top, followed by training and teaching others. To summarize, thinking about how AI can concretely augment our capabilities, especially from a work and well-being perspective, is something that I get really very excited. Ross Dawson: Yeah, that’s fantastic. There are a few things I want to come back to. Particularly, this idea of where people want automation or augmentation. The reality is that people only do things they want, and we’re trying to build organizations where people want to be there and want to flourish. We need to be able to—it’s, to your point, some occupations don’t understand AI capabilities. With some change management or bringing it to them, they might understand that there are things they were initially reluctant to do, which they later see the value in. The paper, Future of Work with AI Agents, was really a landmark paper and got a lot of attention this year. One of the real focuses was the human agency scale. We talk about agents, but the key point is agency—who is in control? There’s a spectrum from one to five of different levels of how much agency humans have in combination with AI. We’re particularly interested in the higher levels, where we have high human agency and high potential for augmentation. Are there any particular examples, or how do we architect or structure those ways so that we can get those high-agency, high-augmentation roles? Diyi Yang: Yeah, that’s a very thoughtful question. Going back to the human agency you mentioned, I want to just provide a brief context here. When we were trying to approach this question, we found there was no shared language for how to even think about this. A parallel example is autonomous driving, where there are standards like L0 to L5, which is an automation-first perspective—L0 is no automation, L5 is full automation. Similarly, now we need a shared language to think about agency, especially with more human-plus-AI applications. So, H1 to H5 is the human agency scale we proposed. H1 refers to the machine taking all the agency and control. H5 refers to the human taking all the agency or control. H3 is equal partnership between human and AI. H2 is AI taking the majority lead, and H4 is human taking the majority lead. This framework makes it possible to approach the question you’re asking. One misunderstanding many people have about AI for work is that they think, “Oh, that’s software engineering. If they can code, we’ve solved everything.” The reality is that even in software engineering, there are so many tasks and workflows involved in people’s daily jobs. We can’t just view agency at the job level; we need to go into very specific workflow and task levels. For example, in software engineering, there’s fixing bugs, producing code, writing design documentation, syncing with the team, and so on. When we think about agency and augmentation, the first key step is finding the right granularity to approach it. Sometimes AI adoption fails because the granularity isn’t there. An interesting question is, how do we find where everyone wants to use AI in their work for augmentation? Recently, we’ve been thinking about this, and we’re building a tool called workflow induction. Imagine if I could sit next to you and watch how you do your tasks—look at your screen, see how you produce a podcast, edit and upload it, add captions, etc. I observe where you struggle, where it’s very demanding, and wher
“It’s very important to understand that human data is part of the training data for the algorithm, and it carries all the issues that we have with human data.” –Ganna Pogrebna About Ganna Pogrebna Ganna Pogrebna is a Research Professor of Behavioural Business Analytics and Data Science at the University of Sydney Business School, the David Trimble Chair in Leadership and Organisational Transformation at Queen’s University Belfast, and the Lead for Behavioural Data Science at Alan Turing Institute. She has published extensively in leading journals, while her many awards include Asia-Pacific Women in AI Award and the UK TechWomen100. Website: gannapogrebna.com turing.ac.uk LinkedIn Profile: Ganna Pogrebna University Profile: Ganna Pogrebna What you will learn The fundamentals of behavioral data science and how human values influence AI systems How human bias is embedded in algorithmic decision-making, with real-world examples Strategies for identifying, mitigating, and offsetting biases in both human and machine decisions Why effective use of AI requires context-rich prompting and critical thinking, not just simple queries Pitfalls of relying on generative AI for precise or factual outputs, and how to avoid common mistakes How human-AI teams can be structured for optimal collaboration and better outcomes The role of simulation tools and digital twins in improving strategic decisions and stakeholder understanding Best practices for training AI with high-quality behavioral data and safely leveraging AI assistants in organizations Episode Resources Transcript Ross Dawson: Ganna, it is wonderful to have you on the show. Ganna Pogrebna: Yeah, it’s great to be here. Thanks for inviting me. Ross Dawson: So you are a behavioral data scientist. Let’s start off by saying, what is a behavioral data scientist? And what does that mean in a world where AI has come along? Ganna Pogrebna: Yeah, that’s right. That’s a loaded term, I guess—lots of words there. But what that kind of boils down to is, I’m trying to make machines more human, if you will. Basically, making sure that machines and algorithms are built based on our values and things that we are interested in as humans. So that’s kind of what it is. My background is in decision theory. I’m an economist by training, but in 2013 I got a job in an engineering department, and my professional transformation started from there. I got involved in a lot of engineering projects, and my work became more and more data science-focused. Now, what I do is called behavioral data science. Back in the day, in 2013, they just asked me, “What do you want to be called?” and I thought, okay, I do behavior and I do data science, so how about behavioral data scientist? Ross Dawson: Sounds good to me. So unpacking a little bit of what you said before—you’re saying you make machines more like humans, so that means you are using data about human behavior in order to inform how the systems behave. Is that correct? Ganna Pogrebna: Yeah, that’s correct. I think in any setting—so in a business setting, for example—many people do not realize that practically all data we feed into machines, any algorithm you take, whether it’s image recognition or decision support, it’s all based on human data. Effectively, some humans labeled a dataset, and that normally goes into an algorithm. Of course, an algorithm is a formula, but at the core of it, there is always some human data, and most of the time we don’t understand that. We kind of think that algorithms just work on their own, but it’s very important to understand that human data is part of the training data for the algorithm, and it carries all the issues that we have with human data. For example, we know that humans are biased in many ways, right? All of these biases actually end up ultimately in the algorithm if you don’t take care of it at the right time. If you want, I can give you a classic example with the Amazon algorithm—I’m sure you’ve heard of it. Amazon trained an HR algorithm for hiring, specifically for the software engineering department, and every single person in that department was male. So if you sent this algorithm a female CV with something like a “Women in Data” award or a female college, it would significantly disadvantage the candidate based on that. It carried gender discrimination within the algorithm because it was trained on their own human data. Ross Dawson: Yeah, well, that’s one of the big things, as I’ve been saying since the outset, is that AI is trained on human data, so human biases get reflected in those. The difficult question is, there is no such thing as no bias. I mean, there’s no objective view—at least that’s my view. Ganna Pogrebna: Absolutely. Yeah. Ross Dawson: So we talk about bias auditing. All right, so we have an AI system trained with human data, whatever it may be. In this case, with the Amazon recruitment algorithm, you could actually look at it and say, “All right, it’s probably not making the right decisions,” with some degree of explainability. So how do we then debias? Or how do we have an algorithm which is trained on implicitly biased data? Are there ways that we can reduce at least those biases? Ganna Pogrebna: Yeah, a lot of my work is trying to understand human bias in organizations and trying to offset that with machine decision making, and equally, to understand machine bias and offset it with human decision making. Well, I now have, if you notice, a digital background with books. We’ve done some work with hiring algorithms. If you’re interviewing with a company, a lot of times you have pre-screening done by an algorithm, and in the interview process, you might have some automated interview where you record yourself and send a video. I bet many people have been through this process. What we found was, we had exactly the same recording of an individual answering questions, but in one case, we put a plain background—everything was shot on green screen—and in another, we put a background with books. The algorithm rated people with books in the background higher on the same questions and answers than the person against the plain background. So, going back to your point, how do we offset algorithmic problems? First, we need to understand what they are. If we know that an algorithm would rate exactly the same answers differently depending on the background, we should probably tell people to shoot all their answers against a plain background or something like this, to equalize it. So the first thing is understanding where this is coming from. Second is, do you really need an algorithm in the particular case, or can it be done by a simple process? Finally, you try to understand where the issues are with human decision making and how algorithms can potentially offset them—or is the algorithm making things worse? Because sometimes it does. I think it all comes to boils down to first understanding where the problems are, and then using the two systems—the human systems and algorithmic systems—to offset the issues. Ross Dawson: Which I think goes back to the point of humans plus AI. Either individually is not necessarily as well designed as a system of both. Ganna Pogrebna: Yeah, exactly. Oftentimes, organizations don’t have the possibility to implement generative AI or AI systems. If you’re doing all your analytics on an Excel sheet, it’s probably not a great idea to think straight away about implementing AI. But on the other hand, there are some great applications where algorithms can facilitate better, more structured decision making. I work a lot with executive teams and leaders, and in the majority of cases, they expect precision from algorithmic output. If they put something into ChatGPT or Claude, they expect precise statistics, everything to be impeccably well researched. These tools are completely inappropriate for that. They are good for thinking outside the box. For example, we were recently hiring people into my team—engineers, software engineers. We had four candidates who came to the interview, and three of them, when we got to the point where we asked, “Do you have any questions for us?”—three people asked exactly the same questions. So what happened is like they went to ChatGPT, asked for questions, memorized them, and gave us exactly what the algorithm told us. The fourth person asked more creative questions. I don’t know whether this fourth person used a different algorithm or just used the algorithm more creatively, but we hired the fourth person because they thought outside the box in terms of what questions to ask us. You need to be careful, because one of the problems is algorithms can make us all the same. You can tell that by looking at, for example, LinkedIn posts, when they start with, “I’m excited to tell you,” or “I’m so thrilled to inform you.” That’s probably written by ChatGPT, and you know that straight away. But a smart person who understands how algorithms think would structure it differently. They can still use input from the algorithm, but at the same time appear as if the content is unique and nicely positioned. Ross Dawson: Let’s dig into that. The way I think of it is humans plus AI workflows. There are obviously many sequencings, but one is: you’ve got a human, they’ve got a situation—job interview, decision, whatever—and they use AI to help them. What are the specific capabilities, attitudes, or techniques that people need to use to make sure they’re ta
“Our Great Barrier Reef is the size of Italy. We don’t have enough people to really go out there and dive and do the work that needs to be done to help protect it.” –Sue Keay About Sue Keay Dr Sue Keay is Director of UNSW AI Institute and Founder and Chair of Robotics Australia Group, the peak body for the robotics industry in the country. Sue is a fellow of the Australian Academy of Technology and Engineering and serves on numerous advisory boards. She was featured on the 2025 H20 AI 100 list, and the Cosmos list of Remarkable and Inspirational Women in Australian Science. Website: suekeay.com roboausnet.com.au futurewg.com LinkedIn Profile: Dr Sue Keay University Profile: Dr Sue Keay What you will learn How AI and robotics can address complex environmental challenges, such as preserving the Great Barrier Reef The importance of open-minded leadership and organizational experimentation in AI transformation Strategies for implementing effective AI governance and leveraging diverse expertise within organizations Balancing cognitive augmentation and cognitive offloading with AI tools in education and work The evolving impact of AI and robotics on future job roles, emphasizing augmentation rather than full replacement Risks and opportunities associated with relying on external AI models, highlighting the case for sovereign AI The significance of investing in public AI infrastructure and retaining AI talent for national competitiveness Approaches to fostering a vibrant domestic AI ecosystem, including talent attraction, infrastructure, and unique local advantages Episode Resources Transcript Ross Dawson: So it is wonderful to have you on the show. Sue Keay: Yeah, thanks very much for having me, Ross. Ross Dawson: So you’ve been doing so much and getting some wonderful accolades for your work, and I think that’s with this positive framing. So at a high level, how can AI best augment humanity? Or what are the things we can imagine? Sue Keay: Well, you know, one of the best examples that I often share with people is around how AI could be applied to solve environmental challenges. I think the key aspects of AI that people are only just really starting to grasp are not only the velocity with which AI is happening and starting to have an impact on the world at the moment, but also the scale. I really look at this more from the perspective of robotics, where AI is having a physically active role in the environment. Where I see the big opportunities are in solving problems that humans to date have been unable to solve on our own. When I was in Queensland, one of the research groups I worked with had developed an underwater vision-guided robot that could do a number of things and was looking at how it could play a role in helping to preserve our Great Barrier Reef. Our Great Barrier Reef is the size of Italy. We don’t have enough people to really go out there and dive and do the work that needs to be done to help protect it. There are a number of threats to the Great Barrier Reef, such as the proliferation of crown-of-thorns starfish that are literally eating all of the reef. At the moment, we try and control their numbers using human divers, but that’s actually inherently unsafe, and we can only do it in areas where tourists go, so the rest of the reef is laid to ruin. But also, as ocean temperatures rise, coral is currently spawning in temperatures that are not conducive to coral growth. The robot was developed so that it could collect coral spawn and essentially move it further south into ocean temperatures that are more conducive to coral growth. To my mind, if we could find a commercial rationale to invest, then we could have a whole bunch of these robots working as a swarm, helping to collect coral spawn and rejuvenate the coral reef, encouraging coral growth a bit further south in conditions that are conducive. It’s just something we can’t tackle on our own. To me, being able to solve some of these challenges—like climate change, where we’re desperately needing solutions to problems and as a species, we haven’t done a great job of solving them on our own today. Ross Dawson: That’s a fantastic example. Obviously, environmental challenges and the broad things are described as wicked problems, as in, there is no ready solution. So there’s a cognitive aspect to the sense of, how can we not find the solution, but be able to find pathways to work out what are the ways in which we can address impact, or move against climate change? That’s a really wonderful example of where you’re actually putting that into practice, manifesting that with robotics. Sue Keay: Yeah, that’s right. It’s just, what’s the commercial imperative? There are a lot of challenges that we can imagine solving, but at the end of the day, someone does have to invest in making it happen. Ross Dawson: So one of the other things, which is, I suppose, not quite as wicked a problem as climate change, but is organizational transformation. The world is changing faster than organizations are. I suppose a lot of leaders suddenly say, oh, we’ve got AI, how do we put this into practice? You do a lot with leaders and communicating and engaging with them. How do you help leaders to understand the ways in which they can transform organizations in an AI world? Sue Keay: Yeah, well, there’s no simple answer to that question, is there? But I think the most important thing that is becoming increasingly clear is that leaders have to have an open mindset. No transformation works if the organization doesn’t have leadership that sends clear messaging that experimenting with artificial intelligence, and that the use of artificial intelligence within the business, is a priority and act accordingly. I think that’s the biggest role that leaders can play, as well as modeling the sort of behavior that they’re expecting from their employees. In many cases, that just means experimenting with AI on a personal level. But it’s very hard to do that if you can’t engage with having an open mindset. Because I think it’s a very challenging time—people are having to make decisions at a very rapid pace, and it makes people feel very uncomfortable. But at the end of the day, that’s the leader’s responsibility: to guide organizations through these tumultuous times, encouraging and empowering people at the individual level to do what they can to understand how artificial intelligence is going to impact the business. So I think leadership is vital, but also making room for people from all parts of the business to be able to play a role and bring their imagination to the table in terms of how artificial intelligence can be applied. As I said, I don’t think anyone’s got all of the answers. The people who understand the domain best are the people working in the business. So giving them the tools and understanding about AI and how it might be used in the business is critical if you want to survive the AI transformation that we’re all living through at the moment. Ross Dawson: Thriving overload. I talk about openness to experience being what enables our ability to synthesize things, make sense of the world, and take action. So that’s one of the questions: how do we then make ourselves more open to experience or ideas? In what you’ve said, and also more generally in your communication, you talk about experimentation being a fundamental piece for leaders and throughout organizations. But that needs to be balanced with some sort of governance, in the sense of saying, well, what experiments go too far? Or how do you build the learning loops from experiments? So if a leader says, all right, we are going to experiment and learn and get ideas to come up from all parts of the organization and see what works, how can that be best structured? Sue Keay: Yeah, I think it does open the door for some new styles of governance. Increasingly, we’re seeing companies reach out—if they don’t have internal AI expertise—to bring AI expertise in, in the form of external advisory roles. I think it is also a real opportunity for reverse mentoring in many cases, where some of the answers might actually lie with more junior members of the staff who wouldn’t typically get a seat at the table in some of the decision-making roles. Being able to find effective ways that those people, particularly if they have knowledge about artificial intelligence, can play a more productive leadership role is important. So really, it’s about harnessing whatever resources are at your disposal, whether they actually be within the organization or external to the organization, to help make things happen. Ross Dawson: So essentially being more AI aware and AI capable to help design some new governance as well as drive the experimentation. Sue Keay: Well, I think at the end of the day, what it involves is having a good, long hard look at where the organization is at today, and making that assessment of how well positioned the organization is for all of these rapid changes that are occurring. Where there are deficits, putting things in place to help fill those gaps and to make sure that staff feel supported through the process. But I think one of the things—because, in essence, this is just a huge change management process—that is really vital is ensuring that people feel that they have a voice in the future. Just to give you an example from where I work, that also includes being flexible enough to accept when people do not want to engage wit
“But an interesting part here, and it’s linked to strategy, is how much AI will change the relationship between management, the executive team, and the board.” –Dominique Turcq About Dominique Turcq Dominique Turcq is founder of the Paris-based research and advisory center Boostzone Institute. His roles have included as professor at a number of business schools including INSEAD, head of strategy for major organizations including Manpower, partner at McKinsey & Co, special economic advisor to the French government, and board member of Société Française de Prospective. He is author of 8 books on strategy and the impact of technology. Website: boostzone.fr LinkedIn Profile: Dominique Turcq Books: Dirigeants et conseils d’administration : Augmented Management: The Fractal Nature of Enterprise 2.0:   What you will learn How the role of strategy in organizations has shifted from focusing solely on shareholders to considering broader societal and environmental stakeholders Why long-term foresight and scenario planning are increasingly critical for effective strategic decisions How new legal and societal expectations are reshaping the responsibilities of executives and boards The evolving relationship between boards and executive teams as AI advancements introduce new governance challenges and opportunities Practical ways generative AI is changing decision-making, communications, and risk management at the board level The potential for AI to transform work, skills development, and organizational structures—and the risks of cognitive atrophy from overreliance The importance of fostering an “ecology of mind” in organizations to balance technology use, creativity, learning, and collective cognition Why ongoing reflection, adaptability, and diverse mental engagement are essential for individuals and leaders amid rapid AI-driven change Episode Resources Transcript Ross Dawson: Dominique, it’s wonderful to have you on the show. Dominique Turcq: Thank you, Ross. It’s very nice to be invited by you on such a prestigious podcast. Ross: So you have been working in strategy for a very, very long time, and along that journey, you have recognized the impact of AI before many other people, I suppose. I’d like to start off with that big frame around strategy and how it’s evolving. Maybe we can come back to the AI piece, but how have you seen the world of strategy evolving over the last decades? Dominique: Several things have happened in the last two or three decades. First, an anecdote. I was the head of the French Strategic Association, and we closed this association in 2008. You know why? Because we had no members anymore. In other words, less and less companies had a Chief Strategy Officer. Why? Because people in the executive team or on the board thought they were all good at strategy and didn’t need a strategy officer. The problem is, when you are operational, whichever part of the executive team you are in, you don’t have the mind or the time to look at the long term, therefore to really look at the strategy. You may be competent at strategy execution, but are you good at strategic planning, at forecasting, at long-term planning and futurology? You’re not, because you don’t have time to do that. So we closed this association, and frankly, it’s very interesting to see that it has not been reborn. We still have very few real Chief Strategy Officers in French companies. And I’m sure it’s the same all over Europe. I don’t know about the US, but in Europe, we see it everywhere. So to me, that’s a big change. Another big change is that we have clearly entered, for the last 10 years and for the next 20 years, into a major era of change—a change in paradigm. Until 10 or 20 years ago, let’s say until 2000, the basic paradigm was, by the way, Ricardo’s paradigm of the 19th century. In other words, the Earth has all the resources we need, the Earth can handle all our waste, and all this is free. Remember Ricardo said the Earth’s resources are free, and we have no limit. Until 2000, that was the thinking. Since 2000 until today, more or less, people have started to realize that, well, some resources are infinite or look infinite, but most resources are finite, and the way the Earth is able to sort our waste is not as good as we thought. Now we are entering a new paradigm, which will become very clear in the next few years and is very important for strategy. We are entering a finite world. Companies have a sociological role to play, both for the Earth and for society. This is very new. In France, we have a law called the “Loi PACTE”, which changed the legal code of corporations. Before that, it said a corporation is here to enrich the shareholders, more or less. Now it says, yes, we have to enrich the shareholders, but we also have to take into consideration the impact the corporation has on society and on the environment. It’s a huge legal change. Therefore, if you are in strategy today, you have to enrich your shareholders, but also be careful not to harm the planet, not to harm society, and to express your concern for what is called stakeholders. This is an interesting part in strategy, because until recently, stakeholders were more or less your employees, your suppliers, your customers. Now, obviously, you also have the environment and society, and even the local place you work in. If you are in a city where you are the most important employer, you have a relationship with this city, and you are responsible for the health of this city. So it’s a stakeholder. We have a lot of new stakeholders, and I think from a strategy point of view, this has big implications. How do we handle all these stakeholders at the same time, and to which stakeholders should we listen? Because today, most stakeholders are not in the General Assembly. They are not even on the board. So how do we listen to them? How do we respect them? How do we manage our long-term relationship with them? So yes, strategy is changing a lot. Ross: One of the things you’ve always said over the years is that in order to build effective strategy, you have to have a long-term view. You have to use effective foresight, or, in French, la prospective. And that is a fundamental capability in order to be effective at strategy. Dominique: Yeah, I always defended that, because I think you can only work with strategy if you have a real long-term view. The issue with a long-term view is several, but one is the complexity, because we don’t have a crystal ball. So we have to understand what will really happen, and therefore what the consequences are. We have to make hypotheses on discrete variables. Continuous variables are okay, in a way. Discrete variables—you can’t, you have to make scenarios. How will the war in Ukraine unfold? You have to make a scenario; you cannot have a definite idea. So this is a discrete variable. Continuous variables are almost more interesting because we know we have a certain number of variables, and we know where they go, like population increase—we know where it goes. Climate change—we know where it goes and some of the implications, like we are going to have less water, maybe we are going to have resource issues with rare earths or whatever. Sorry, my cat is disturbing me. The great thing in strategy today is, let’s work on these long-term continuous variables and see how they impact today’s strategy. There are many of them, by the way, but even these variables, I see a lot of Chief Strategy Officers, when they exist, not taking them into consideration. I’ll give you two or three examples. When you speak about the labor market and the size and distribution of the labor market, very few people realize that we have more and more older people in the labor market. How do we deal with this aging? It’s a real strategic issue, because it means the whole organization will be changed. That’s a very classic example. Another one is, I had a very interesting meeting recently with people in the agricultural field—cooperatives. These are big companies, and I discussed with them and asked, do you realize that within your warehouses, because of climate change, the temperature might go up to 50 degrees inside? Even if you have 48 outside, it may be 50 inside. What happens at 50 degrees? When you have chemical products stored together, they explode. Therefore, you have to plan how you are going to build your warehouses, how you’re going to change your warehouses. This is a long-term step. It’s not two years; it’s within the next 10 or 20 years, and we didn’t realize that. So while this is a continuous variable—we know we are going to have a temperature increase for sure, and we know very closely what will happen—we have to plan for it. So this, population, and a few others, we can plan for, and few people do it today. That’s why, Ross, you’re right. I always wanted to work on the long term and its implication on the short term. Ross: One of the very interesting things—there was a great book, or book title, particularly by Peter Schwartz, “Inevitable Surprises,” where you can say, well, yes, we know this is going to happen. It’s just a question of how long it’s going to take. And they are still surprises to most people, but we can map this out and start to plan ahead. And that’s what strategy is: to be able to plan ahead. Dominique: It’s more futurology, prospective, than immediate strategy, because some of this stuff doesn’t have an immediate impact. For instance, w
“I call it the AI sandwich. When we want to use augmentation, we’re always the bread and the LLM is the cheese in the middle.” –Beth Kanter About Beth Kanter Beth Kanter is a leading speaker, consultant, and author on digital transformation in nonprofits, with over three decades experience and global demand for her keynotes and workshops. She has been named one of the most influential women in technology by Fast Company and was awarded the lifetime achievement in nonprofit technology from NTEN. She is author of The Happy Healthy Nonprofit and The Smart Nonprofit. Website: bethkanter.org LinkedIn Profile: Beth Kanter Instagram Profile: Beth Kanter What you will learn How technology, especially AI, can be leveraged to free up time and increase nonprofit impact Strategies for reinvesting saved time into high-value human activities and relationship-building A practical framework for collaborating with AI by identifying automation, augmentation, and human-only tasks Techniques for using AI as a thinking partner—such as Socratic dialog and intentional reflection—to enhance learning Best practices for intentional, mindful use of large language models to maximize human strengths and avoid cognitive offloading Approaches for nonprofit fundraising using AI, including ethical personalization and improved donor communication Risks like ‘work slop’ and actionable norms for productive AI collaboration within teams Emerging human skills essential for the future of work in a humans-plus-AI organizational landscape Episode Resources Transcript Ross Dawson: Beth, it is a delight to have you on the show. Beth Kanter: Oh, it’s a delight to be here. I’ve admired your work for a really long time, so it’s really great to be able to have a conversation. Ross Dawson: Well, very similarly, for the very, very long time that I’ve known of your work, you’ve always focused on how technologies can augment nonprofits. I’d just like to hear—well, I mean, the reason is obvious, but I’d like to know the why, and also, what is it that’s different about the application of technologies, including AI, to nonprofits? Beth Kanter: So I think the why is, I mean, I’ve always—I’ve been working in the nonprofit sector for decades, and I didn’t start off as a techie. I kind of got into it accidentally a few decades ago, when I started on a project for the New York Foundation for the Arts to help artists get on the internet. I learned a lot about the internet and websites and all of that, and I really enjoyed translating that in a way that made it accessible to nonprofit leaders. So that’s sort of how I’ve run my career in the last number of decades: learn from the techies, translate it, make it more accessible, so people have fun and enjoy the exploration of adopting it. And that’s what actually keeps me going. Whenever a new technology or something new comes out, it’s the ability to learn something and then turn around and teach it to others and share that learning. In terms of the most recent wave of new technology—AI—my sense is that with nonprofits, we have some that have barreled ahead, the early adopters doing a lot of cutting-edge work, but a lot of organizations are just at that they’re either really concerned about all of the potential bad things that can happen from the technology, and I think that traps them from moving forward, or others where there’s not a cohesive strategy around it, so there’s a lot of shadow use going on. Then we have a smaller segment that is doing the training and trying to leverage it at an enterprise level. So I see organizations at these different stages, with a majority of them at the exploring or experimenting stage. Ross Dawson: So, you know, going back to what you were saying about being a bit of a translator, I think that’s an extraordinarily valuable role—how do you take the ideas and make them accessible and palatable to your audience? But I think there’s an inspiration piece as well in the work that you do, inspiring people that this can be useful. Beth Kanter: Yeah, to show—to keep people past their concerns. There’s a lot of folks, and this has been a constant theme for a number of decades. The technology changes, but the people stay the same, and the concerns are similar. It’s going to take a long time to learn it, I feel overwhelmed. I think AI adds an extra layer, because people are very aware, from reading the headlines, of some of the potential societal impacts, and people also have in their heads some of the science fiction we might have grown up with, like the evil robots. So that’s always there—things like, “Oh, it’s going to take our jobs,” you name it. Usually, those concerns come from people who haven’t actually worked with the technology yet. So sometimes just even showing them what it can do and what it can’t do, and opening them up to the possibilities, really helps. Ross Dawson: I want to come back to some of the specific applications in nonprofits, but you’ve been sharing a lot recently about how to use AI to think better, I suppose, is one way of framing it. We have, of course, the danger of cognitive offloading, where we just stick all of our thinking into the machine and stop thinking for ourselves, but also the potential to use AI to think better. I want to dig pretty deep into that, because you have a lot of very specific advice on that. But perhaps start with the big framing around how it is we should be thinking about that. Beth Kanter: Sure. The way I always start with keynotes is I ask a simple question: If you use AI and it can give your nonprofit back five hours of time—free up five hours of time—how would you strategically reinvest that time to get more impact, or maybe to learn something new? I use Slido and get these amazing word clouds about what people would learn, or they would develop relationships, or improve strategies, and so forth. I name that the “dividend of time,” and that’s how we need to think about adopting this technology. Yes, it can help us automate some tasks and save time, but the most important thing is how we reinvest that saved time to get more impact. For every hour that a nonprofit saves with the use of AI, they should invest it in being a better human, or invest it in relationships with stakeholders. Or, because our field is so overworked, maybe it’s stepping back and taking a break or carving out time for thinking of more innovative ideas. So the first thing I want people to think about is that dividend of time concept, and not just rush headfirst into, “Oh, it’s a productivity tool, and we can save time.” The next thing I always like to get people to think about is that there are different ways we can collaborate with AI. I use a metaphor, and I actually have a fun image that I had ChatGPT cook up for me: there are three different cooks in the kitchen. We have the prep chef, who chops stuff or throws it into a Cuisinart—that’s like automation, because that saves time. Then we have the sous chef, whose job is tasting and making decisions to improve whatever you’re cooking. That’s a use case or way to collaborate with AI—augmentation, helping us think better. And the third is the family recipe, which is the tasks and workflows that are uniquely human, the different skills that only a human can do. So I encourage nonprofits to think about whatever workflow they’re engaged with—whether it’s the fundraising team, the marketing team, or operations—to really think through their workflow and figure out what chef hat they’re wearing and what is the appropriate way to collaborate with AI. Ross Dawson: So in that collaboration or augmentation piece, what are some specific techniques or approaches that people can use, or mindsets they can adopt, for ideation, decision making, framing issues, or developing ideas? What approaches do you think are useful? Beth Kanter: One of the things I do when I’m training is—large language models, generative AI, are very flexible. It’s kind of like a Swiss army knife; you could use it for anything. Sometimes that’s the problem. So I like to have organizations think through: what’s a use case that can help you save time? What’s something that you’re doing now that’s a rote kind of task—maybe it’s reformatting a spreadsheet or helping you edit something? Pick something that can save you some time, then block out time and preserve that saved time for something that can get your organization more impact. The next thing is to think about where in your workflow is something where you feel like you can learn something new or improve a skill—where your skills could flourish. And then, where’s the spot where you need to think? I give them examples of different types of workflows, and we think about sorting them in those different ways. Then, get them to specifically take one of these ways of working—that is, to save time—and we’ll practice that. Then another way of working, which is to learn something new, and teach them, maybe a prompt like, “I need to learn about this particular process. Give me five different podcasts that I should listen to in the right order,” or “What is the 80/20 approach to learning this particular skill?” So it’s really helping people take a look at how they work and figuring out ways where they can insert a collaboration to save time, or a collaboration to learn something new. Ross Dawson: What are ways that you use LLMs in your work? Beth Kanter: I use them a lot, and I tend to stay on the—I never
“It is our duty to find out how we can best use it, where humans are first and Humans + AI are more together.” –Ross Dawson About Ross Dawson Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload. Website: Levels of Humans + AI in Organizations futuristevent.com LinkedIn Profile: Ross Dawson Books Thriving on Overload Living Networks 20th Anniversary Edition Implementing Enterprise 2.0 Developing Knowledge-Based Client Relationships What you will learn How organizations can transition from traditional models to Humans Plus AI structures An introduction to the six-layer Humans Plus AI in Organizations framework Ways AI augments individual performance, creativity, and well-being The dynamics and success factors of human-AI hybrid teams The role of scalable learning communities integrating human and AI learning How fluid talent models leverage AI for dynamic task matching and skill development Strategies for evolving enterprises using AI and human insight for continual adaptation Methods for value co-creation across organizational ecosystems with AI-facilitated collaboration Real-world examples from companies like Morgan Stanley, Schneider Electric, Siemens, Unilever, Maersk, and MELLODDY Practical steps to begin and navigate the journey toward Humans Plus AI organizations Episode Resources Transcript Ross Dawson: If you have been hanging out for new episodes of Humans Plus AI, sorry we’ve missed a number of those. We will be back to weekly from now on, and from next week, we’ll be coming back with some fantastic interviews with our guests. I’ll just give you a quick update and then run through my Levels of Humans Plus AI in Organizations framework. So, just a quick update: the reason for the big gap was that I was in Dubai and Riyadh giving keynotes at the Futurist X Summit in Dubai. It was an absolutely fantastic event organized by Brett King and colleagues, where I gave a keynote on “Humans Plus AI: Infinite Potential,” which seemed to resonate very well and fit with the broader theme of human potential and how we can create a better future. Then I went to Riyadh, where I gave a keynote at the Public Investment Forum, PMO Forum, which is the organization of the sovereign wealth fund of Saudi Arabia. There, we were again looking at macro themes of organizational performance, including specifically Humans Plus AI. When I got back home from those, I had to move house. So, it’s been a just digging myself out of the travel and moving house and getting back on top of things. We won’t have a gap in the podcast again for quite a while. We’ve got a nice compilation of wonderful conversations with guests coming up soon. So, just a quick state of the nation: Humans Plus AI is a movement, and by listening to this, you are part of that movement. We are all together in believing that AI has the potential to amplify individuals, organizations, society, and humanity. Thus, it is our duty to find out how we can best use that, where humans are first and humans plus AI are together. The community is the center of that. Go to humansplus.ai/community and you can join the community if you’re not there already. We have some amazing people in there, great discussions, and we are very much in the process of co-creating that future of Humans Plus AI. We also have a new application coming out soon, Thought Weaver. In fact, it’s actually a redevelopment of a project which we launched at the beginning of last year, and we’re rebuilding that to create Humans Plus AI thinking workflows and provide a tool to do that to the best effect. In the community, people will be testing, using, and helping us create something as useful as possible. I want to run through my Levels of Humans Plus AI in Organizations framework. This comes from my extensive work with organizations—essentially, those who understand that they need to become Humans Plus AI organizations, not just what they have been. It’s based on moving from humans, technology, and processes to organizations where AI is a complement, supporting them not just to tack on AI, but to transform themselves into very high-potential organizations. There are six layers in the framework. It starts with augmented individuals, then humans-AI hybrid teams, learning communities, fluid talent, evolutionary enterprise, and ecosystem value co-creation. Each of those six layers is where organizations, leaders, and strategists need to understand how they can transform from what they have been to apply the best of Humans Plus AI, and how those come together to become the organizations of the future. I’ll run through those levels quickly. The first one is augmented individuals, which is where most people are still playing as individuals. We’re using AI to augment us. Organizations are giving various LLMs to their workforce to help them improve, but this can be done better and to greater effect by being intentional about how AI can augment reasoning, creativity, thinking, work processes, and the well-being of individuals. The framework lays out the features and some of the success factors of each of those layers. I won’t go into those in detail here, but I’ll point to some examples. In augmented individuals, a nice example is Morgan Stanley Wealth Management, where they’ve used LLMs to augment their financial advisors, providing analysis around client portfolios and ways to communicate effectively. They rely on humans for strong relationships and understanding of client context and risk profiles, but they’re supported by AI. The second layer is human-AI hybrid teams. This is really the focus of my work, and I’ll be sharing a lot more on the frameworks, structures, and processes that support effective Humans Plus AI teams. Now we have teams that include not just humans, but also AI agents—not just multi-agent systems, but multi-agents where there are both humans and AI involved. We can design them as effective swarms that learn together and are highly functional, based on trust and understanding of relative roles, dramatically amplifying the potential of people and organizational performance. One example is Schneider Electric, which has used its teaming approach both on the shop floor of its manufacturing plants—explicitly providing AI complements to humans to assist in their work—and with knowledge workers in designing and building human-AI teams. The third layer is that of learning communities. I often refer to John Hagel’s mantra of scalable learning, which is the foundation of successful organizations today. This is based on not just individuals learning, but also organizations effectively learning. As John points out, this is not about learning static content, but learning by doing at the edge of change. AI can provide an extraordinary complement to humans, of course, in classic things such as AI-personalized learning journeys, but also in providing matching for peer learning, where individuals can be matched around the challenges they are facing or have faced, to communicate, share lessons learned, and learn together. We can start to capture these lessons in structures such as ontologies, where AI and humans are both learning together, individually and as a system. An example is Siemens, which has created a whole array of different learning pathways that include not just curated, personalized AI learning, but also a variety of ways to provide specific insights to individuals on what’s relevant to them. The fourth layer is fluid talent. For about 15 years, I’ve been talking about fluid organizations and how talent is reapplied, where the most talented people can be applied to whatever the challenge or opportunity is, wherever it is across the organization. This becomes particularly pertinent as we move from jobs to task level—jobs are being decomposed into tasks. Some can be done very well by AI, others less so. When we move to the task level, we have to reconfigure all the work that needs to be done and where humans come in. Instead of being at a job role, we’re now using the talent of the organization wherever and whenever it has the greatest value, using AI to match individuals with their ability to do that work. One aspect is that we can use AI to augment learning capabilities, so all work done by individuals in this fluid talent model is designed not just to use their existing talent, but to develop new relevant skills for new situations moving forward. One example is Unilever’s FLEX program, which has been more classically based on longer-term, around six-week assignments to different parts of the organization. It’s absolutely designed for learning and growth—not just to connect people into different parts of the organization to apply their talents in specific ways, but also to develop new skills that will make them more valuable in their own careers and to the organization. Moving above that to the higher level of the evolutionary enterprise: AI is moving fast, the competitive landscape is moving fast, and the shape of organizations needs to be not just re-architected for what is relevant, but so that it can continually evolve. We need both human and AI insight and persp
“I really believe that we need to design friction into the system, not what is usually the goal in digital spaces, where you try to remove all the friction.” –Iskander Smit About Iskander Smit Iskander Smit is founder and chair of Cities of Things Foundation, a research program originating at Delft University. He works as an independent researcher and creative strategist at the intersection of design, technology, and society, focusing on the evolving relationship between humans and AI in physical environments. Website: citiesofthings.nl thingscon.org iskandersmit.nl LinkedIn Profile: Iskander Smit   What you will learn How human, AI, and ‘things’ relationships are evolving beyond digital tools into physical environments The concept of collaborative intelligence—how human and AI co-performance shapes creativity and productivity Ways AI can mirror human thinking, deepen reflection, and reveal cognitive biases when used intentionally Designing AI interfaces for meaningful interaction, including the value of friction, interruption, and transparency How the role of designers is shifting from crafting static products to directing co-creative, adaptive systems with AI Why deliberately designing for thoughtful, exploratory, and emancipatory conversations with AI matters Challenges and insights from experimenting with AI in team settings and educational contexts The importance of treating AI as a collaborator or team member rather than simply as a tool How thoughtful human-AI relationships can unlock greater collective intelligence and transform work in sectors like health and education Episode Resources Transcript Ross Dawson: Iskander, it’s fantastic to have you on the show. Iskander Smit: Yeah, thanks for inviting me. Really excited to talk about this topic, of course. Ross: One of the things is you very much focus on collaborative intelligence, and I think that happens in conversation. So hopefully we can have a good conversation. Iskander: Yeah, me too. Ross: One of the starting points is you talk about human, AI, and things—relationships. So tell me about the human, the AI, and the things. What are the relationships? Iskander: Yeah, it really originated from the research program I started back in 2017 at the University in Delft. It was called Cities of Things—how we are going to live together with intelligent, autonomous things. We were thinking about what will happen, what the consequences are, if we live together with more autonomous things. That was before we had these generic LLMs and the developments happening now. But even then, we were already curious: how are we going to have a kind of co-performance with things? That’s why I added the “things” relation—because I really see now, of course, there’s a lot of use of AI in the digital space and in digital life. But it also starts to pop up in the physical space. So authentic AI for the physical space, I think, is a very interesting domain to look into. What will happen when we live within AI, when we are immersed in AI? That’s why I really look not so much at the specific function of the AI or the tool, but more at what kind of relationship we are building with these machines or things—or whatever we want to call them. Ross: Yeah. That’s why I dig into the relationships in the sense of the extended mind idea. Part of it is things we use, which enable us to do more. We’ve long had relationships with things. As those things become more autonomous, that changes. And the relationship with AI, which is far more human-like by design, also changes. So what are the types of relationships? When it’s not just humans and AI but also the things, what is the nature of these? Iskander: Yes, a good question. What type of relationships do we have? I’m really thinking about what the interaction is we have with things, and how we can define which are best suited for AI, which for humans, and how we relate to that. How do we perform together in a certain way? It’s an interesting question. Some people think that AI is just an early stage of being human-like. But I think we have evolved for such a long time that AI is definitely a different type of breed, maybe. So, what types of relations can we have here? There is, of course, a lot—especially when we had these conversational devices starting to pop up in our relationships. Ross: So one of the strongest relationships, I suppose, is collaboration. And so that’s kind of this idea around intelligence—collaboration—where we have collective human intelligence between humans, which we’ve had since we’ve gathered around fires. And now, of course, as you say, this intelligence is different but hopefully complementary to us. And so there’s a whole set of relationships with a set of humans, a set of AI. And so intelligence, I think you’re suggesting, emerges from that collaboration. Iskander: Definitely, yes. That’s an interesting point indeed, because also when you use it yourself, or even the current iteration of it, there’s this reflection that you have, or the interaction that you have with the current tools already. It’s also how I use them myself, mainly for writing now. In my weekly column that I write, I try to always put my first stream of consciousness in the AI and see how it responds to it. And that’s not so much that it makes something for me, but it’s really reflecting on myself. So it’s an interesting one—how it’s mirroring my own thinking, and how it can deepen that. So it’s in-depth collaboration, more like a co… Ross: So have you designed the tools to be digital twins, to mirror yourself, or to be a complement to you? Or, if so, how have you done that? Iskander: Not mirroring, but more like a co-author or intern. Different levels. I think it’s a way to make it more accessible. I’d say, well, I just have some support, based on what I see. How can I put it in a little bit more structure and use these capabilities of the AI tools for that? But also, if the right ones are used, they could give more real reflections—whether it’s a good stream of thoughts, or introducing new things. That would be the ideal case, of course. I think you can really open a path that you didn’t see yet, or challenge your own biases. I think that’s the real value of a good human-AI team: you can correct each other. Ross: So, how can you best get it to open up new pathways for you, or to uncover or reflect on your biases? Specifically, how do you use it to do that? Iskander: Well, it’s just pointing it in certain directions, asking certain questions. You put some sources into it and see if it finds similar things. And it’s always an interesting question—if it’s really doing new things and coming up with new stuff, or if it’s more like taking what you’re already thinking about yourself and just structuring it more. That’s still the phase we’re in now, I guess. Ross: So that’s really about the intent. You’ve got your interfacing with an LLM. So this is one relationship at this point. We’re talking about a human—you in this case—with an LLM. And so you’re saying it’s around the intent, that you’re always looking for it to open up new pathways for you, or to compensate for your biases, and so on. So it’s really the way you guide your conversations to get the value. Is that right? Iskander: True. Yeah, I think that’s true. And of course, I’ve been thinking about the research on predictive relations. I call it: what will happen when this AI becomes more intelligent, or when we have more information from similar situations? It’s not really predicting, but more like having a sort of knowledge beforehand. How will it change your relation to that one thing you’re using? The mental model can change because it can add some extra information. So if you ask what type of relations we have—this is what I now described as the positive version. You use it, and you reflect on it. But it could also become, of course, something like a chilling effect, where you adapt to it because you expect it will start to behave in a certain way. That’s maybe not happening—but you are. That’s the other side of the coin. Ross: You start multiple frames here. One phrase that you used in your writing is hypothesizing that humans may not be, as you describe it, at the top of the cognitive hierarchy. I mean, I guess one of the points I always make is that cognition, or intelligence, is not one-dimensional. There are some dimensions where AI is far more intelligent than humans, and others where humans are far superior. I still don’t necessarily see that every single dimension of human intelligence will be transcended. But just looking at that point, saying, all right, well, let’s say AI has better and better cognition, better and better intelligence. What does that then do to the human-AI relationship in collaborative intelligence? Iskander: That’s an interesting question. Of course, is cognitive knowledge the same as intelligence? I think what you are also saying is that it’s not a kind of general “on top of the cognitive hierarchy,” but maybe more on specific topics. You can use it almost more as a tool to find out more things. You cannot read everything, you cannot do everything. But you can make more sense. I think humans still have more intelligent capability to synthesize and make sense of stuff, to come up with new ideas. Even if some of these tools can help you with that, or be creative in a certain way, it’s still related to what you feed them. I don’t know if this was a
“If you’re not moving quickly to get these ideas implemented, your smaller, more agile competitors are.” –Brian Kropp About Brian Kropp Brian Kropp is President of Growth at World 50 Group. Previous roles include Managing Director at Accenture, Chief of HR Research at Gartner and Practice Leader at CEB. His work has been extensively featured in the media, including in Washington Post, NPR, Harvard Business Review, and Quartz. Website: world50.com LinkedIn Profile: Brian Kropp X Profile: Brian Kropp What you will learn Driving organizational performance through AI adoption Understanding executive expectations versus actual results in AI performance impact Strategies for creating effective AI adoption incentives within organizations The importance of designing organizations for AI integration with a focus on risk management Middle management’s evolving role in AI-rich environments Redefining organizational structures to support AI and humans in tandem Building a culture that encourages AI experimentation Empowering leaders to drive AI adoption through innovative practices Leveraging employees who are native to AI to assist in the learning process for leaders Learning from case studies and studies of successful AI integration Episode Resources Transcript Ross Dawson: Brian, it’s wonderful to have you on the show. Brian Kropp: Thanks for having me, Ross. Really appreciate it. Ross: So you’ve been doing a lot of work for a long time in driving organizational performance. These are perennials, but there’s this little thing called AI, which has come along lately, which is changing. Brian: You might have heard of it somewhere. I’m not sure if you’ve been alive or awake for the last couple of years, but you might have heard about it. Ross: Yeah, so we were just chatting before, and you were saying the pretty obvious thing, okay, got AI. Well, it’s only useful when it starts to be used. We need to drive the adoption. These are humans, humans who are using AI and working together to drive the performance of the organization. So love to just hear a big frame of what you’re seeing in how it is we drive the useful use of AI in organizations. Brian: I think a good starting point is actually to try to take a step back and understand what is the expectation that executive senior leaders have about the benefit of these sorts of tools. Now, to be honest, nobody knows exactly what the final benefit is going to be. There is definitely guesswork around. There are different people with different expectations and all sorts of different viewpoints on them, so the exact numbers are a little bit fuzzy at best in terms of the estimates of what performance improvements we will actually see. But when you think about it, at least at kind of orders of magnitude, there are studies that have come out. There’s one recently from Morgan Stanley that talked about their expectation around a 40 to 50% improvement in organizational performance, defined as revenue and margin improvements from the use of AI tools. So that’s a really big number. It’s a very big number. When you do analysis of earnings calls from CEOs and when they’re pressed on what their expectation is, those numbers range between 20 and 30%. That’s still a really big number, and this is across the next couple of years, so it’s a timeframe. What’s fascinating is that when you survey line executives, senior executives—so think like vice president, people three layers down from the CEO—and you look at some of the actual results that have been achieved so far, it’s in that single digits range. So the challenge that’s out there, there’s a frontier that says 50, CEOs say 30, the actualized is, call it five. And those numbers, plus or minus a little bit, are in that range. And so there’s enormous pressure on executives in businesses to actually drive adoption of these tools. Not necessarily to get to 50—I think that’s probably unrealistic, at least in the next kind of planning horizon—but to get from five to 10, from five to 15. Because there are billions of dollars of investments that companies are making in these tools. There are all sorts of startups that they’re buying. There are all sorts of investments that they’re making. And if those executives don’t start to show returns, the CFO is going to come knocking on the door and say, “Hey, you wrote a check for $50 million and the business seems kind of the same. What’s up with that?” There’s enormous pressure on them to make that happen. So if you’re, as an executive, not thinking hard about how you’re actually going to drive the adoption of these tools, you’re certainly not going to get the cost savings that are real potential opportunities from using these tools. And you will absolutely not get the breakthrough performance that your CEO and the investment community are expecting from use of these tools. So there’s an absolute imperative that executives figure out the adoption problem, because right now the technology, I think, is more than good enough to achieve some of these savings. But at the end of the day, it’s really an adoption, use, application problem. It’s not a “Can we afford to buy it or not” problem. It’s “We can afford to buy it. It’s available. We have to use it as executives to actually achieve some sort of cost savings or revenue improvements.” And that, I think, is the size of the problem that executives are struggling with right now. Ross: Yeah. Well, the thing is, the old adage says you can take a horse to water, but you can’t make it drink. And in an organizational context, again, I think the drive to use AI in organizations needs to be intrinsic, as in people need to want to do it. They can see that it’s part of the job. They want to learn. It gives them more possibilities and so on. And there’s a massive divergence where I think there are some organizations where it truly is now part of the culture. You try things. You tell people you’re using it. You share prompts and so on. That’s probably the minority, but they absolutely exist. In many organizations, it’s like, “I hate it. I’m not going to tell anybody I’m using it if I am using it.” And top-down, telling people to use it is not going to get there. Brian: It’s funny, just as a quick side note about not telling people they’re using it. There’s a study that just came out. I think it was from ChatGPT, I can’t remember those folks. But one of the things that they were looking at was, are teachers using generative AI tools to grade papers? And so the numbers were small, like seven or eight percent or something like that, less than 10%. But it just struck me as really funny that teachers have spent all this time saying, “Don’t use generative AI tools to write your papers,” but some are now starting to use generative AI tools to grade those papers. So it’s just a little funny, the whole don’t use it, use it, not use it, don’t tell people you’re using it. I think those norms and the use cases will evolve in all sorts of places. Ross: So you have a bit of a high-level framework, I believe, for how it is we think through driving adoption. Brian: Yes. There are three major areas that I think are really important. One, you have to create the right incentive structure. And that, to your point, is both intrinsic incentives. You have to create reasons for people to use it. In a lot of cases, there’s some fear over using it—“I don’t know how,” “Am I going to eliminate my own job?” Those sorts of things. So you have to create an incentive structure to use it. Two, you have to think about how the organization is designed. Organizations from a risk aversion perspective, from a checks-and-balances perspective, from who gets to say no to stuff, from a willingness-to-experiment perspective, are designed to minimize risk in many cases. And in order to really drive AI adoption, there is risk that’s involved. It’s a different way of doing things that will disrupt the old workflows that exist in the organization. So you have to really think hard about what you do from an org design perspective to make that happen. And then three, you could have the right incentives in place, you could have the right structure in place, but leaders need to actually create the environment where adoption occurs. One of the great ironies here is that the minority of leaders—there was a Gartner study that came out just a little bit ago—showed that, on average, only about 15% of leaders actually feel comfortable using generative AI tools. And that’s the ones that say they feel comfortable doing it, which might even be a little bit of an overestimate. So how do you work with leaders to actually create an environment where leaders encourage the adoption and are supportive of the adoption, beyond “You should go use some AI tools”? Those are the three categories that companies and executives need to be thinking about in order to get from what is now relatively low levels of adoption at a lot of organizations to even medium levels of adoption, to close that gap between the 50% and 5% around the delta in expectations that people have. Ross: So in particular, let’s go through those one by one. I’m particularly focused on the organizational design piece myself. For leaders, I think we can get to some solutions there. But let’s start with the incentives. I’d love to hear any specifics around what you have seen that works, that doesn’t work, or any suggestions or ideas. How do you then d
“There’s a significant opportunity for us to redesign the technology rather than redesign people.” –Suranga Nanayakkara About Suranga Nanayakkara Suranga Nanayakkara is founder of the Augmented Human Lab and Associate Professor of Computing at National University of Singapore (NUS). Before NUS, Suranga was an Associate Professor at the University of Auckland, appointed by invitation under the Strategic Entrepreneurial Universities scheme. He is founder of a number of startups including AiSee, a wearable AI companion to support blind & low vision people. His awards include MIT TechReview young inventor under 35 in Asia Pacific and Outstanding Young Persons of Sri Lanka. Website: ahlab.org intimidated.info LinkedIn Profile: Suranga Nanayakkara University Profile: Suranga Nanayakkara What you will learn Redefining human-computer interaction through augmentation Creating seamless assistive tech for the blind and beyond Using physiological sensors to detect cognitive load Adaptive learning tools that adjust to flow states The concept of an AI-powered inner voice for better choices Wearable fact-checkers to combat misinformation Co-designing technologies with autistic and deaf communities Episode Resources Transcript Ross Dawson: Suranga, it’s wonderful to have you on the show. Suranga Nanayakkara: Thanks, Ross, for inviting me. Ross: So you run the augmented human lab. So I’d love to hear more about what does augmented human mean to you, and what are you doing in the lab? Suranga: Right? I started the lab back in 2011 and part of the reasoning is personal. And my take on augmentation is really, everyone needs assistance. All of us are disabled, one way or the other. It may be a permanent disability. It may be you’re in a country that you don’t speak the language, you don’t understand the culture. For me, when I first moved to Singapore, I never spoke English. I was very naive to computers, and to the point that I remember very vividly back in the day, Yahoo Messenger had this notification sound of knocking, and I misinterpreted that as being somebody knocking on my door. That was very, very intimidating. I felt I’m not good enough, and that could have been career-defining. And with that experience, as I got better with the technology, and when I wanted to set up my lab, I wanted to think of ways. How do we redefine these human-computer interfaces such that it provides assistance and everyone needs help? And how do we, instead of just thinking of assistive tech, think of augmenting our ability depending on your context, depending on your situation, how to use that? I started the lab as augmented sensors. We were focusing on sensory augmentation, but a couple of years later, with the lab growing, we created a bit more broad definition of augmenting human, and that’s when the name became augmented human lab. Ross: Fantastic. And there’s so many domains in which so many projects which you have on which are very interesting and exciting. So just one. We would just like to go through some of those in turn. But the one you just mentioned was around assisting blind people. I’d love to hear more about what that is and how that works. Suranga: Right. So the inspiration for that project came when I was a postdoc at MIT Media Lab, and there was a blind student who took the same assistive tech class with me. The way he accessed his lecture notes was he was browsing to a particular app on his mobile phone, then he opened the app and took a picture, and the app reads out notes for him. For him, this was perfect, but for me, observing his interactions, it didn’t make sense. Why would he have to do so many steps before he can access information? And that sparked a thought: what if we take the camera out and put it in a way that it’s always accessible and you need minimum effort? I started with the camera on the finger. It was a smart ring. You just point and ask questions. And that was a golf ball-sized, bulky interface, just to show the concept. As you iterate, it became a wearable headphone which has the camera, speaker, and a microphone. So the camera sees what’s in front of you. The speaker can speak back to you, the microphone listens to you. With that, you can enable very seamless interaction for a blind person. Now you can just hold the notes in front of you and just ask, please read this for me. Or you might be in front of a toilet, you want to know which one is female, which one is male. You can point and ask that question. So essentially, this device, now we call ISee, is a way of providing this very seamless, effortless interaction for blind people to access visual information. And now we realize it’s not just for blind people. For me, I actually used it. Recently I went to Japan, and I don’t read anything Japanese, and pretty much everything is in Japanese. I went to a pharmacy, I wanted to buy this medicine for headache, and ISee was there for me to help. I can just pull out a package and ask, ISee, hey, help me translate this, what is in this box? So it translates for me. So the use cases, as I said, although it started with a blind person, cut across various abilities. And again, it is supporting people to achieve things that are otherwise hard to achieve. Ross: Fantastic. So just hopping to one of the many other projects or research which you’ve done, and is around AI-augmented reasoning. This is something which can assist anybody, and you particularly focus on this area of flow. We understand flow from the original work of Csikszentmihalyi and so on, how to get into this flow state. I understand that you have sensors that can understand when people are in flow states, to be able to help them in their reasoning as appropriate. Suranga: Right. So this is very early stage. We just started this a few months ago. The idea is we have been working with some of the physiological sensors — the skin conductance, heart rate variability — and we understand that based on this, you can infer the cognitive state. For example, when you are at a high cognitive state, or when you are at a low cognitive state, these physiological sensors have certain patterns, and it’s a nice, non-invasive way of getting a sense of your cognitive load. As the flow theory says, this is about making the task challenging enough — not too challenging or too easy. We can measure the load based on these non-invasive signals, at least get an estimate, so that you can adjust the difficulty level of the task. That’s one of the very early stage projects where we want to have these adaptive interfaces. The user doesn’t drop the task because it’s too difficult, or drop the task because it’s too easy. You can adjust the task difficulty based on the perceived cognitive load. Ross: So interested. Where do you think the next steps are there? What is the potential from being able to sense degree of cognitive load or your frame of mind, so that you can interact differently? Suranga: One of the things I’m really excited about is lifelong learning, continuous learning. Because of the emergence of technology, there’s a lot of emphasis on needing to upskill and reskill. I’m also overseeing some of our university adult learning courses. If you think of adults who are trying to re-upskill themselves, the way to teach and provide materials is very different from teaching, say, regular undergraduate classes. For those, there is a possibility of providing certain learning materials when the adult learner is ready to learn. They’re busy with lots of other responsibilities — work, families, and all these things. So if we can have a way of providing these learning opportunities based on when they are ready to learn, it may be partly based on cognitive state, partly based on their schedules. I think one way to use this information is to decide when to initiate and how to increase or decrease the level of difficulty of the learning material as you go. If you can detect the cognitive load and then maintain the flow, that’s a hugely potential area. Ross: Yeah, absolutely. So one of the projects was called Prospero, which is, I think, on the lines which you’re discussing. It’s a tool to help memorize useful material, but it understands your cognitive context as to when and how to feed you things for learning. Suranga: Right. This we started specifically for older adults, and the idea was we wanted to help train their prospective memory. One of the techniques that has been reported as effective in literature is called intention implementation. So basically, if I want to remember that when I meet Ross, I need to give you something, you mentally visualize that as an if-then technique. Firstly, we tried, okay, can we digitize that without a human through a mobile app? I provide what I would like to do, break it down to if-then statement, and get me to visualize that. That was the first part. We saw that digitization does retain the effectiveness. Then the next question was, is there a better timing to initiate this training? That’s where we brought in the cognitive load estimation. Instead of doing a time-based or user pre-assigned time to train, we compared our technique, which is based on the cognitive load. We found that when you provide this nudging to start training when the user has less load, they are more likely to notice this and more likely to actually start the training. I think this principle probably goes beyond just training memory. It could be used
“The fact is that its input came from billions of humans… When you’re interacting with an LLM, you are interacting with a collective, not a singular intelligence sitting out there in the universe.” –Michael I. Jordan About Michael I. Jordan Michael I. Jordan is the Pehong Chen Distinguished Professor in Electrical Engineering and Computer Science and professor in Statistics at the University of California, Berkeley, and chair of Markets and Machine Learning at INRIA Institute in Paris. His many awards include the World Laureates Association Prize, IEEE John von Neumann Medal, and the Allen Newell Award. He has been named in the journal Science as the most influential computer scientist in the world. Website: arxiv.org LinkedIn Profile: Michael I. Jordan University Profile: Michael I. Jordan What you will learn Redefining the meaning of intelligence The social and cultural roots of human genius Why AI is not true superintelligence Collective genius as the driver of innovation The missing link between economics and AI Decision making under uncertainty and asymmetry Building AI systems for social welfare Episode Resources Transcript Ross Dawson: Michael, it’s wonderful to have you on the show. Michael I. Jordan: My pleasure to be here. Ross: Many people seem to be saying that AI is going to beat all human intelligence very soon. And I think you have a different opinion. Michael: Well, there’s a lot of problems with that framing for technology. First of all, we don’t really understand human intelligence. We think we do because we’re intelligent, but there’s depths we haven’t probed, and there’s the field of psychology just getting going—not to mention neuroscience. So just saying that something that mimics humans, or took a vast amount of data and brute-forced mimicked humans, seems like a kind of leap to me—that it has human intelligence nailed. Moreover, the idea that it was a sequence of logic doesn’t particularly work for me. We figured out human intelligence, now we can put it in silicon and scale it, and therefore we’ll get superintelligence. Every step there I mean the scaling part, I guess, is okay, but we have not figured out human intelligence. Even if we had, it’s not really clear to me as a technology that our goal should be to mimic or replace humans. In some jobs, sure, but we should think more about overall social welfare and what’s good for humans. How do we complement humans? So, no, I don’t think we’ve got human intelligence figured out at all. It’s not that it’s a mystical thing, but we have creativity. We have experience and shared experience, and we plumb the depths of that when we interact and when we create things. Those machines that are doing brute force gradient descent on large amounts of text and even images or whatever—they’re not getting there. It is brute force. I don’t think sciences have really progressed by just having brute force solutions that no one understands and saying, “That’s it, we’re done.” So if you want to understand human intelligence It’s going to be a while. Ross: There’s a lot to dig into there, but perhaps first: just intelligence. You frame that as, among other things, social and cultural, not just cognitive? Michael: Absolutely. I don’t think if you put me on a desert island, I’d do very well. I need to be able to ask people how to do things. And if you put me not just on a desert island, but in a foreign country, and you don’t give me the education—the 40 years of education I had as well—that imbued me with the culture of our civilization. Anytime I’m not knowledgeable about something, I can go find it, and I can talk to people. Yes, I can now use technology to find it, but I’m really talking to people through the technology. I don’t think we appreciate how important that cultural background is to our thinking, to our ability to do things, to execute, and then to figure out what we don’t know and what we’re not good at. That’s how we trade with others who are better at it, how we interact, and all that. That’s a huge part of what it means to be human, and how to be a successful and happy human. This mythological Einstein sitting all by himself in a room, thinking and pondering—I think we’re way too wedded to that. That’s not really how our intelligence is rolled out in the real world. Generally, we’re very uncertain about things in the real world. Even Einstein was uncertain, had to ask others, learn things, and find a path through the complexity of thought. Also, I’ve worked on machine learning for many years, and I’m pretty comfortable saying that learning is a thing we can define, or at least start to define: you improve on certain tasks. Intelligence—I’m just much less happy with trying to define it. I think there’s a lot of social intelligence, so I’m using that term loosely. But human, single intelligence—what is that? What does it mean to generalize it? Talking about thought in the computer is the old dream of AI. I don’t know if we have thought in a computer. Some people sort of say, “Yeah, we have it,” because it’s doing these thinking-like things. But it’s still making all kinds of errors. You can brute force around them for as long as you can and get humans to aid you when you’re making errors. But at some point you have to say, “Wait a minute, I haven’t really understood thought. I’m not getting it. I’m getting something else. What am I getting? How do I understand that? How does it complement things? How does it work in the real world?” Then you need to be more of an engineer—try to build it in a way that actually works, that is likely to help out human beings, and think like an engineer and less like a science fiction guru. Ross: So you’ve used the phrase “human genius” as a sort of what we compare AI with. And the phrase “human collective genius,” I suppose, ties into some of your points here—where that genius, or that ability to do exceptional things, is a collective phenomenon, not an individual one. Michael: Oh no, without a doubt. I’ve known some very creative people, and every time you talk to them, they make it very clear that the ideas came from the ether—from other people. Often, they just saw the idea develop in their brain, but they don’t know why. They are very aware of the context that allowed them to see something differently, execute on it, and have the tools to execute. So my favorite humans are smart and humble. Right now in technology, we have a lot of people who are pretty smart but not very humble, and they’re missing something of what I think of as human genius: the ability to be humble, to understand what you don’t know, and to interact with other humans. Ross: One of the other things you emphasize is when we’re designing these systems. We’ve created some pretty amazing things. But as you suggest, there seems to be this very strange obsession with artificial general intelligence as a focus. For all of the reasons that’s flawed, one of them is being able to imbue social welfare as a fundamental principle that we should be using to design these. Michael: I think you’ve just hit on it. To me, that’s the fundamental flaw with it. I mean, you can say the flaw is that you can’t define it, and so on and so forth. But for me, the flaw is really that it’s an overall system. In fact, if you think about an LLM, whether it’s smart or not, or intelligent or not, it’s almost beside the point. The fact is that its input came from billions of humans, and those humans did a lot of thinking behind that. They worked out problems, they wrote them down, they created things. Sometimes they agreed, sometimes they disagreed, and the computer takes all that in. To the extent that there’s signal, and there’s a lot of agreement among lots of humans, it’s able to amplify that and create some abstractions that characterize that. But when you’re interacting with an LLM, you are interacting with essentially all those humans. You’re interacting with a collective. You are not interacting with a singular intelligence sitting out there in the universe. You’re interacting with all of humanity—or at least a lot of humanity—and all of the background that those people brought to it. So if you’re interacting with a collective, then you have to ask: is there a benefit to the collective, and what’s my contribution? What’s my role in that overall assemblage of information? It’s not just that the whole goal is the Libertarian goal of the individual being everything. Somehow, the system should work such that there are overall good outcomes for everyone. It’s kind of obvious. It’s obvious like traffic. All of us want to get as fast as possible from point A to point B. But a designer of a good traffic and highway system does not just think about the individual and how fast the car will go. They think about the overall flow of the system, because that may slow down some people, but it’ll make everybody ideally get there as fast as possible. It’s a sum over all the travel times of all the people. Let’s call that social welfare. The design is usually a huge amount of hard work to achieve such a thing, and then empirically test it out and work out some theory of that. And that’s going to be true of just about any domain. Think of the medical domain. It’s really not just the doctor and a patient and focusing on one relationship. It’s the overall system. Does it bring all the tools to the right place, at the right time? Has it tested things out in the r
“The potential is boundless, but it doesn’t come automatically; it comes intentionally.” –Paula Goldman About Paula Goldman Paula Goldman is Salesforce’s first-ever Chief Ethical and Humane Use Officer, where she creates frameworks to build and deploy ethical technology for optimum social benefit. Prior to Salesforce she held leadership roles at global social impact investment firm Omidyar Network. Paula holds a Ph.D. from Harvard University, and is a member of the National AI Advisory Committee of the US Department of Commerce. Website: salesforce.com LinkedIn Profile: Paula Goldman X Profile:  Paula Goldman What you will learn Redefining ethics as trust in technology Designing AI with intentional human oversight Building justifiable trust through testing and safeguards Balancing automation with uniquely human tasks Starting small with minimum viable AI governance Involving diverse voices in ethical AI decisions Envisioning AI that enhances human connection and creativity Episode Resources Transcript Ross Dawson: Paula, it is fantastic to have you on the show. Paula Goldman: Oh, I’m so excited to have this conversation with you, Ross. Ross: So you have a title which includes your the chief of ethical and humane use. So what is humane use of technology and AI? Paula: Well, it’s interesting, because Salesforce created this Office of Ethical and Humane Use of Technology around seven years ago, and that was kind of before this current wave of AI. But it was with this—I don’t want to say, premonition—this recognition that as technology advances, we need to be asking ourselves sophisticated questions about how we design it and how we deploy it, and how we make sure it’s having its intended outcome, how we avoid unintended harm, how we bring in the views of different stakeholders, how we’re transparent about that process. So that’s really the intention behind the office. Ross: Well, we’ll come back to that, because I just—humane and humanity is important. So ethics is the other part of your role. Most people say ethics are, let’s work out what we shouldn’t do. But of course, ethics is also about having a positive impact, not just avoiding the negative impact. So how do you frame this—how it is we can build technologies and implement technologies in ways that have a net benefit, as opposed to just not avoiding the negatives? Paula: Well, I love this question. I love it a lot because one of my secrets is that I don’t love the word ethics to describe our work. Not that—it’s very appropriate—but the word I like much more than that is trust, trustworthy technology. So what happens when you build—especially given how quickly AI is evolving, how sometimes it’s hard for people to understand what’s going on underneath the hood and so on—how do you design technology that people understand how it works? They know how to get the best from it, they know where it might go wrong and what safeguards they should implement, and so on. When you frame this exercise like that, it becomes a source of innovation. It becomes a design constraint that breeds all kinds of really cool, what we call trust patterns in our technology—innovations like we have a set of safeguards, customizable safeguards for our customers, that we call our trust layer. And this is one of our differentiators as we go to market. It’s things that allow people—features that allow people—to protect the privacy of their data, or make sure that the tone of the output from the AI remains on brand, or look out for accuracy and tune the accuracy of the responses, and so on. So when you think about it like that, it becomes much less of this mental image of a group of people off in the corner asking lofty questions, and much more of an all-of-company exercise where we’re asking deeply with our customers: How do we get this technology to work in a way that really benefits everyone? Ross: That’s fantastic. Actually, I just created a little framework around trust in AI adoption. So it’s like trust that I can use this effectively, trust that others around me will use it well in teams, trust that my leaders will use it in appropriate ways, trust from customers, trust in the AI. And in many ways, everything’s about trust. Because a lot of people don’t trust AI, possibly justifiably in some domains. So I’d love to dig a little bit into how it is you frame and architect that ability—this ability to have justifiable trust. Paula: Do you mean the justifiable trust from the customers, the end users? Ross: Well, I think at all those layers. I think these are all important, but that’s a critical one. Paula: Yeah, I think a lot of it is about—I actually think about our work as sort of having two different levels to it. One is the objective function of reviewing a product. We do something called adversarial testing, where we’ll take, let’s say, an AI agent that’s meant for customer service, and we’ll try all these different variations on it to see if we can get it to say things that it shouldn’t say. We’ll involve our employees in that, and we’ll take people from lots of diverse backgrounds and say, “Hey, try to break this product.” And we measure: How is it performing, and what are the improvements that we can make to get it to perform? That’s a big part of trust, right? When we think about AI, is the product doing what it says it should do? Is it doing what we’re asking it to do? And with a non-deterministic technology like this wave of AI, that’s a very important question. You want to harness the creative potential of AI—its ability to generate and communicate in human-sounding terms—but also marry it to accuracy and outcomes that are more predictable. So that’s one side of it. But the second side, the second part of the job, is really a culture job. It’s about listening—listening to our employees, our customers, our end users. It’s about participating in these multi-stakeholder groups. I was a member of the National AI Advisory Committee in the US. In many jurisdictions, we’re part of these multidisciplinary forums where people are bringing up different concerns about AI, whether that’s about how work is changing or particular questions about privacy. We integrate those questions into the work itself and integrate solutions into the work itself, but really have it be so that everyone owns it—so that the solutions are generated by everyone. That’s, I think, the cultural part of it. I’m an anthropologist by training, and I always think about it like that. If you want technology to serve people, people have to be involved in determining those goals. Ross: Which goes to the next point. This is Humans Plus AI podcast, and I’ve heard you use the term “human at the helm.” AI capabilities are pretty damn good, and they’re getting better all the time. So how do we architect that humans remain at the helm? Paula: We coined the phrase “human at the helm” a couple of years ago, as we realized there were these older frameworks about having a human in the loop for consequential decision making. Back with machine learning, you have predictions or recommendations on a consequential decision. You want a human to take responsibility for that decision and exercise oversight. We realized that with agentic AI, and with AI increasingly empowered to take tasks autonomously—not just make a recommendation, but carry out a task from start to finish—we needed a new way of conceptualizing how people work alongside AI but remain in control. Know how to get the right outcomes from AI. Know what to ask for and what not to ask for, and what tasks should remain uniquely human. I think it’s an ever-evolving framework. I know you’re deeply looking at those sets of questions. I honestly think, going back to the ethics exchange, that’s one of the most important ethical questions of our time: How do people work alongside AI? How do we implement AI in work in a way that keeps people at the center of that? So that’s what we’re doing, discipline by discipline. For example, going back to AI in customer service—AI is very good at answering questions that are routine, that have been answered a number of times before, like “Where’s my order?” or “Where’s my return?” or “Have I gotten the money for my return?” When it comes to unusual circumstances or emotionally challenging circumstances for the customer, human touch can make the world of difference between a terrible interaction, an interaction that does maybe okay, or an interaction that leaves a lasting impression and causes a customer to go talk to 10 other customers about how important your company is to them. That is the kind of marriage we’re talking about between the capabilities of AI and the capabilities of people. It’s a very simple example, but we see them across every single discipline. I think the more clear we are about how these combinations work and how people can—we have a function called “command center,” a feature that allows people to see exactly what’s going on across hundreds of agents and millions of interactions and summarize what’s going on and find anomalies—the more that people can stay in control and understand what’s going on, the more trust they’ll have, the more they’ll use AI. And it’s sort of a virtuous loop. Ross: Yes, absolutely. The phrase “human in the loop” kind of suggests all they do is press “approve, approve.” Whereas a reframing I heard recently, a little while ago, which I think is really lovely, is “AI in the lo
loading
Comments 
loading