AI Literacy for Entrepreneurs

Author: Northlight AI


Description

"AI Literacy for Entrepreneurs", with host Susan Diaz, helps you integrate artificial intelligence into your business operations. We'll help you understand and apply AI generative in a way that is accessible and actionable for entrepreneurs at all levels. With each episode, you'll gain practical insights into effective AI strategies and tools, hear from leading practitioners with deep expertise and diverse use cases, and learn from the successes and challenges of fellow business owners in their AI adoption journey. Join us for the simplified knowledge and inspiration you need to leverage AI effectively to level up your business.
270 Episodes
Host Susan Diaz sits down with sales strategist Gazzy Amin, founder of Sales Beyond Scripts, to talk about the real ways AI is changing revenue, planning, and scale. They cover AI as a thinking partner, how to use it across departments in a small business, why audits matter more than hype, and how mindset quietly determines whether you treat AI as a threat or an advantage. Episode summary This episode is part of Susan's 30-episodes-in-30-days "podcast to book" sprint for Swan Dive Backwards. Susan and Gazzy zoom in on the selling process first. Then they zoom out to the whole business. They talk about three camps of AI users (anti, curious, invested), and why the curious group has a huge edge right now. Gazzy shares how she uses AI as a co-pilot across marketing, sales, and operations. Not just for captions. For thinking, planning, campaign creation, and building repeatable systems. They also go into mindset. Gazzy's approach is clear: protect your mental real estate. Don't let recession talk, doom narratives, or fear-based chatter shape your decisions. Use AI to help you widen perspective, challenge limiting beliefs, and plan like a CEO. Key takeaways AI is more than a copywriting tool. It's a strategic brainstorming partner that reduces burnout and speeds up decision-making. If you want to scale, map your departments and ask AI how it can support each one. Marketing, sales, finance, operations, hiring, delivery, and client experience. Do a year-end audit before you set new goals. Feed AI your revenue data, launches, offers, and calendar patterns. Then let it ask you smart questions you wouldn't think to ask yourself. AI doesn't erase experts. It raises your baseline. You show up to expert conversations more informed, so you can go deeper faster. Documentation and playbooks become an unfair advantage. When knowledge lives only in your head, your business is fragile. AI helps you turn what's in your brain into systems other people can run. Scale is doing more with less. AI can increase output without needing to triple headcount, if you're intentional about workflows and training your team. Women have a big opportunity here. AI can reduce the invisible workload, expand access to expert-level thinking, and help women-led businesses grow faster - if women stay in the conversation and keep learning. Timestamps  00:00 — Susan introduces the 30-day podcast-to-book sprint and today's guest, Gazzy Amin 01:10 — The three types of entrepreneurs using AI (anti / curious / invested) 02:10 — AI as a thinking partner vs a task-doer 03:50 — Why most people don't yet grasp AI's full capability (and why curiosity matters) 05:00 — Using AI for personal life tasks as a low-pressure entry point 06:46 — Recession narratives, standing out, and using AI to challenge limiting beliefs 07:40 — "Create an AI per department" and train it for specific use cases 09:10 — The opportunity window: access to expertise that used to cost tens of thousands 10:10 — The 2025 audit: asking AI to interview you and pull out patterns 12:55 — Will AI devalue experts? Why the human layer still matters 16:45 — How AI changes your conversations with experts (you go deeper, faster) 18:20 — Mindset tools: music, movement, and protecting your "mental real estate" 21:00 — Documentation, playbooks, and why small teams need systems 24:00 — Training AI on your real sales process to improve onboarding + client experience 27:20 — Do AI-enabled businesses become more valuable? 
Scale, output, and leverage 30:00 — Why people default to content (and how to make AI content actually sound like you) 34:15 — Women, AI, the wage gap, and why this moment is non-negotiable 40:00 — Gazzy shares her CEO Growth Plan Intensives and how she uses AI in sessions 42:55 — Where to connect with Gazzy (Instagram + LinkedIn) Guest info Gazzy Amin, Founder, Sales Beyond Scripts Best place to connect: Instagram (behind-the-scenes and real-time business building) - https://www.instagram.com/authenticgazzy/ If you want to use AI to scale in 2026, start here: Run a 2025 audit with AI. Pick one department and ask AI how to improve that workflow. Document one process that currently lives only in your head. Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
If you're measuring AI success by "hours saved", you're playing the easiest game in the room. In this episode, host Susan Diaz explains why time saved is weak and sometimes harmful, then shares a better "AI ROI stack" with five metrics that map to real business value and help you build dashboards that actually persuade leadership. Episode summary Time saved is fine. It's also table stakes. Susan breaks down why "we saved 200 hours" is the least persuasive AI metric, and why it can backfire by punishing your early adopters with more work. She then introduces a smarter approach: a set of five metrics that connect AI usage to quality, risk, growth, decision-making, and compounding capability. If you want your AI work funded, supported, and taken seriously, you need to move the conversation from cost to investment. This episode shows you how. Key takeaways Time saved doesn't automatically convert to value. If no one reinvests the saved time, you just made busy work faster. Hours saved can punish high performers. Early adopters save time first. They often get "rewarded" with more work. Time saved misses the second-order benefits. AI's biggest wins often show up as fewer mistakes, better decisions, faster learning, and faster response to opportunity. Susan's "AI ROI stack" has five stronger metrics: Quality lift: Is the output better? Track error rate, revision cycles, internal stakeholder satisfaction, customer satisfaction, and fewer rounds of revisions (e.g., proposals going from four rounds to two). Risk reduction: AI can reduce risk, not only create it. Track compliance exceptions, security incidents tied to content/data handling, legal escalations/load, and "near misses" caught before becoming problems. Speed to opportunity: Measure time from idea → first draft → customer touch. Track sales cycle speed, launch time, time to assemble POV/brief/competitive responses, and responsiveness to RFPs (the "game-changing" kind of speed). Decision velocity: AI can reduce drag by improving clarity. Track time-to-decision in recurring meetings, stuck work/aging reports, decisions per cycle, and decision confidence. Learning velocity: This is the compounding one. Track adoption curves, playbooks/workflows created per month, time from new capability introduced → used in production, and how many documented workflows are adopted by 10+ people. Dashboards should show three layers: Leading indicators (adoption, workflow usage, learning velocity). Operational indicators (cycle time). Business outcomes (pipeline influence, time to market, cost of service). You're not investing in AI to save hours. You're building a system that produces better work, faster, with lower risk, and gets smarter every month. Timestamps 00:01 — "If you're measuring AI success by hours saved… that's table stakes."
00:51 — Why time saved doesn't translate cleanly into value 01:12 — Time saved doesn't become value unless reinvested 01:29 — Hours saved can punish high performers (they get more work) 02:10 — Time saved misses second-order benefits (mistakes, decisions, learning) 02:45 — Introducing the "AI ROI stack" (five better metrics) 02:59 — Metric 1: Quality lift (error rate, revision cycles, satisfaction) 03:31 — Example: proposal revisions drop from four rounds to two 04:14 — Metric 2: Risk reduction (compliance, incidents, legal load, near misses) 05:19 — Metric 3: Speed to opportunity (idea to customer touch, sales cycle, launches) 06:11 — Example: RFP response in 24 hours vs five days 06:34 — Metric 4: Decision velocity (time to decision, stuck work, confidence) 07:30 — Metric 5: Learning velocity (adoption curve, workflows, time to production) 08:57 — Dashboards: leading indicators vs lagging indicators 09:15 — Dashboards should include business outcomes (pipeline, time to market, cost) 09:32 — Reframe: AI as a system that improves monthly 10:08 — "Time saved is the doorway. Quality/risk/speed/decisions/learning is the house." 10:36 — Closing + review request   If your AI dashboard is only "hours saved" keep it - but don't stop there. Add one metric from the ROI stack this month. Start with quality lift or speed to opportunity. Then watch how fast the conversation shifts from cost to investment. Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
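For teams that want to move from prose to a working dashboard, here is a minimal sketch of the ROI stack as code. It assumes you already capture each metric as a simple before/after pair; the Metric class, layer names, and sample values are illustrative only (the revision-round and RFP figures echo the episode's examples), not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str                      # what you are tracking, e.g. "proposal revision rounds"
    value: float                   # current reading
    baseline: float                # pre-AI reading for comparison
    lower_is_better: bool = True   # most cycle-time metrics improve by going down

    @property
    def improvement(self) -> float:
        """Relative change versus the baseline; positive means improvement."""
        delta = self.baseline - self.value if self.lower_is_better else self.value - self.baseline
        return delta / self.baseline if self.baseline else 0.0

# The three dashboard layers from the episode: leading, operational, business outcomes.
roi_stack = {
    "leading":     [Metric("documented workflows adopted by 10+ people", 4, 1, lower_is_better=False)],
    "operational": [Metric("proposal revision rounds", 2, 4),
                    Metric("RFP response time (days)", 1, 5)],
    "business":    [Metric("sales cycle length (days)", 38, 45)],
}

for layer, metrics in roi_stack.items():
    for m in metrics:
        print(f"{layer:>12} | {m.name}: {m.improvement:+.0%} vs baseline")
```

Even a throwaway script like this forces the useful conversation: which baseline are you comparing against, and which layer does each metric belong to.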
Host Susan Diaz sits down with Jennifer Hufnagel (Hufnagel Consulting), an AI educator and AI readiness consultant who's trained 4K+ people. They break down what "AI readiness" actually means (spoiler: it's not buying Copilot), why AI doesn't fix broken processes or dirty data, and how leaders can build real capability through training programs, communities of practice, and properly resourced AI champions. Episode summary Susan Diaz and Jennifer Hufnagel met in "the most elite way possible": both were quoted in The Globe and Mail about women and AI. Jennifer shares her background as a business analyst and digital adoption / L&D consultant, and how she pivoted when clients began asking for AI workshops right after ChatGPT's release. Together, they map a simple but powerful framework: AI awareness (practice + play, foundational learning, early change management) AI readiness (software stack, data quality, workflows, current state, and - quietly - the "people audit") AI adoption (implementation, strategy, and ongoing integration) Jennifer explains why "audit" language scares people, but the work is essential - especially talking to humans about what's frustrating, what takes time, and where fear is showing up. She shares what she's seeing after training thousands: AI fluency is still low, people obsess over tools, and many assume AI will solve problems that are actually process or data issues. The second half gets practical: what "workflows" really mean (step-by-step checklists), how AI now makes documenting processes easier than ever (voice → SOPs), why prompt engineering isn't dead but "100 prompts for your bookkeeping business" is mostly snake oil, and why one-off training sessions don't create real fluency. They close with how to build sustainable AI capability: proper training programs, leadership-led culture, communities of practice, and protecting champions from becoming unpaid help desks. Key takeaways AI readiness is the middle of the journey. Jennifer frames AI maturity as: awareness → readiness → adoption. Most organisations skip readiness and wonder why adoption stalls. Readiness includes software, data, process… and people. You can call it a software/data/process audit, but you still have to talk to humans about their day-to-day work, pain points, and fears. That's where the truth lives. AI fluency is still lower than the headlines suggest. Jennifer questions rosy "90% adoption" stats because many rooms she's in still show low real-world usage beyond basic experimentation. Stop obsessing over tools. Companies are writing AI policy around tools and forcing everyone into a single platform. Jennifer argues the real goal is discernment, critical thinking, and clarity - not "pick one tool and pray". AI doesn't fix broken processes or dirty data. If your workflows aren't documented, AI will scale the chaos. If your data is messy, the analysis will be messy too. Readiness comes first. A workflow is just a checklist. Jennifer demystifies "workflow" as step-by-step instructions and ownership: who does what, when. Sticky notes on a wall is a valid start. Process documentation is easier than ever. You can dictate steps into a model (without passwords) and ask it to produce an SOP/checklist - getting knowledge out of people's heads and into a shareable format. Prompting isn't dead, but promise-all prompt packs are mostly hype. Prompting differs by model, and the best move is often to ask the model how to prompt it - and how to troubleshoot when output is wrong. 
One-off AI workshops don't create fluency. AI changes too fast. Real capability requires programs, practice, communities of practice, office hours, and change management - plus leadership modelling and culture. Don't burn out your AI champions. Champions need dedicated time, resources, and leadership sponsorship. Otherwise they become unpaid AI help desks and the entire initiative becomes fragile. Community of practice is the unlock. Jennifer shares her in-person "AI Chats & Bites" group and encourages finding online + in-person + internal communities to keep learning alive. Episode highlights 00:01 — The 30-day podcast-to-book sprint and why people are saying yes in December 00:40 — Susan + Jennifer meet via The Globe and Mail "women and AI" feature 01:21 — Jennifer's origin story: business analyst → digital adoption/L&D → AI readiness 04:09 — The three-part framework: awareness → readiness → adoption 05:03 — Readiness: software stack, data quality ("dirty data"), and mapping current state 06:13 — "People audit" without calling it that: interview humans about pain + fear 08:02 — What Jennifer sees after ~4,000 trainees: fluency still low + stats don't match reality 09:38 — AI doesn't fix broken processes; it scales whatever is there 10:55 — Workflows explained as checklists; "won the lottery" handoff test 12:18 — Dictate your process into AI → generate SOPs/checklists 14:24 — Prompting isn't dead; ask the model to help you prompt + troubleshoot 17:50 — Why one-off training doesn't work; AI fluency requires a program + practice 22:15 — Burning out champions and why AI culture must be top-down 27:49 — Communities of practice: online + local + internal 31:00 — Common mistakes: vending-machine mindset, believing output, not defining the problem 35:31 — Women and AI: opportunity, fear, resilience, and "be in the grey" 39:51 — Where to find Jennifer: hufnagelconsulting.ca + LinkedIn Guest info Jennifer Hufnagel Website: hufnagelconsulting.ca Email: hello@hufnagelconsulting.ca Best place to connect: LinkedIn - Jennifer Hufnagel If AI adoption feels stuck in your organization, don't buy another tool first. Start with readiness: Map one workflow end-to-end. Talk to the humans doing it daily. Clean up the process and data enough that AI can actually help. Then build fluency through a program - not a one-off workshop - and protect your champions with real time and resources.   Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
If your organization ran an "AI 101" lunch-and-learn… and nothing changed after, this episode is for you. Host Susan Diaz explains why one-off workshops create false confidence, how AI literacy is more like learning a language than learning software buttons, and shares a practical roadmap to build sustainable AI capability. Episode summary This episode is for two groups: teams who did a single AI training and still feel behind, and leaders realizing one workshop won't build organizational capability. The core idea is simple: AI adoption isn't a "feature learning" problem. It's a behaviour change problem. Behaviour only sticks when there's a container - cadence, guardrails, and a community of practice that turns curiosity into repeatable habits. Susan breaks down why one-off training fails, what good training looks like (a floor, not a ceiling), and gives a step-by-step plan you can use to design an internal program - even if your rollout already happened and it was messy. Key takeaways One-off AI training creates false confidence. People leave either overconfident (shipping low-quality output) or intimidated (deciding "AI isn't for me"). Neither leads to real adoption. AI literacy is a language, not a feature. Traditional software training teaches buttons and steps. AI requires reps, practice, play, and continuous learning because the tech and use cases evolve constantly. Access is not enablement. Buying licences and calling everyone "AI-enabled" skips the hard part: safe use, permissions, and real workflow practice. Handing out tools with no written guardrails is a risk, not a training plan. Cadence beats intensity. Without rituals and follow-up, people drift back to business as usual. AI adoption backslides unless you design ongoing reinforcement. Good training builds a floor, not a ceiling. A floor means everyone can participate safely, speak shared language, and contribute use cases—without AI becoming a hero-only skill. The four layers of training that sticks: 1) Safety + policy (permission, guardrails, what data is allowed); 2) Shared language (vocabulary, mental models); 3) Workflow practice (AI on real work, not toy demos); 4) Reinforcement loop (office hours, champions, consistent rituals). The 5-step "training that works" roadmap: Step 1: Define a 60-day outcome. "In 60 days, AI will help our team ____." Choose one: reduce cycle time, improve quality, reduce risk, improve customer response, improve decision-making. Then: "We'll know it worked when ____." Step 2: Set guardrails and permissions. List: data never allowed; data allowed with caution; data safe by default. Step 3: Pick 3 high-repetition workflows. Weekly tasks like proposals, client summaries, internal comms, research briefs. Circle one that's frequent + annoying + low risk. That becomes your practice lane. Step 4: Build the loop (reps > theory). Bring one real task. Prompt once for an ugly first draft. Critique like an editor. Re-prompt to improve. Share a before/after with the team. Step 5: Create a community of practice. Office hours. An internal channel for AI wins + FAQs. Two champions per team (curious catalysts, not "experts"). Only rule: bring a real use case and a real question. What "bad training" looks like: one workshop with no follow-up; generic prompt packs bought off the internet; tools handed out with no written guardrails; hype-based demos instead of workflow practice; and no time allocated for learning (so it becomes 10pm homework). Timestamps 00:00 — Why this episode: "We did AI training… and nothing changed."
01:20 — One-off training creates two bad outcomes: overconfident or intimidated 03:05 — AI literacy is a language, not a software feature 05:10 — Access isn't enablement: licences without guardrails = risk 07:00 — Cadence beats intensity: why adoption backslides 08:40 — Training should build a floor, not a ceiling 10:05 — The 4 layers: policy, shared language, workflow practice, reinforcement 12:10 — The 5-step roadmap: define a 60-day outcome 13:40 — Guardrails and permissions (what data is never allowed) 15:10 — Pick 3 workflows and choose a low-risk practice lane 16:30 — The loop: prompt → critique → re-prompt → share 18:10 — Communities of practice: office hours + champions 20:05 — What to do this week: pick one workflow and run one loop If your organization did an AI 101 and nothing changed, don't panic. Pick one workflow this week. Run the prompt → critique → re-prompt → share loop once. Then schedule an office hour to do it again. That's how you move from "we did a training" to "we're building capability". Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.  
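If it helps to see the Step 4 loop as something more concrete than a diagram, here is a minimal sketch of one pass through it, assuming a generic generate(prompt) helper that wraps whatever model your organization has approved. The helper, the prompt wording, and the return shape are placeholders for illustration, not a recommended implementation.

```python
def generate(prompt: str) -> str:
    """Placeholder: call whatever approved model or custom GPT your team uses."""
    raise NotImplementedError("Wire this up to your organization's sanctioned tool.")

def run_loop(task: str, source_material: str) -> dict:
    """One pass of the prompt -> critique -> re-prompt -> share loop on a real task."""
    # 1. Prompt once for an ugly first draft.
    first_draft = generate(f"Task: {task}\nSource material:\n{source_material}\nWrite a rough first draft.")
    # 2. Critique like an editor: ask for specific weaknesses, not praise.
    critique = generate(f"Act as a tough editor. List the three biggest weaknesses of this draft:\n{first_draft}")
    # 3. Re-prompt to improve, feeding the critique back in.
    revised = generate(f"Rewrite the draft and fix these issues:\n{critique}\n\nDraft:\n{first_draft}")
    # 4. Share a before/after with the team (here, simply return both versions).
    return {"before": first_draft, "critique": critique, "after": revised}
```

The point isn't the code; it's that the loop has four distinct steps, and none of them is "ship the first draft".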
Host Susan Diaz is joined by Chris McMartin, National Lead for the Scotiabank Women Initiative (Business Banking), for a real-world conversation about how women are approaching AI. They talk about time poverty, fear of asking "dumb" questions, the shame myth of "AI is cheating", and why the most powerful move right now is women holding the door open for each other - learning in community and sharing what works. Episode summary This episode is a candid, energetic conversation with Chris McMartin - aka "Hype Boss" online and a long-time hype woman for women entrepreneurs. They explore what's different about women's AI adoption, and it's not lack of interest. They discuss the reality of learning AI at 10pm on the couch after the "82-item checklist" of the day is finally done. And the catch-22 at play - AI can save time, but learning it takes time first - like baking cookies with kids turning a 20-minute task into a two-hour event. From there, they unpack a deeper barrier: many women hesitate to ask questions because they don't want to look silly. Chris argues women often use AI "just a little bit" because it doesn't require admitting what they don't know - meaning AI becomes a copywriting helper instead of a real growth lever. They also confront the "AI is cheating" narrative. Chris shares her no-apologies stance: if AI improved your grammar overnight, that's not shameful - it's smart. And if you're worried about being judged for questions, ask AI itself - because it won't judge you. The conversation closes with practical advice for women-led teams (especially 5-50 people): start by identifying the task everyone hates and use AI there first, and schedule learning time during business hours instead of relegating growth to late-night exhaustion. Along the way, Susan brings in a powerful metaphor: "hold the door" leadership - women who are already in the room have a responsibility to bring others in with them. (Metaphor inspired by Game of Thrones and Bozoma Saint John.  Key takeaways Women aren't unwilling. They're time-starved. Many women try to learn AI at the end of the day, when they're exhausted - because that's the only time left. AI has a "cookie problem". It has huge benefits later, but it costs time upfront to learn - just like baking with kids. That learning curve is real, and it's a major adoption barrier. Fear of questions limits adoption. Chris observes women often hesitate to ask "how do I use this better?" which keeps AI usage stuck at surface-level tasks like captions and posts. "AI is cheating" is a myth that needs to die. Chris's take: using AI to communicate more clearly isn't unethical. It's an upgrade. She also notes men rarely apologize for finding ways to do things better. Ask AI how to use AI. If you feel silly asking humans, ask your LLM: "What questions should I answer so you can help me solve this?" That's the difference between generic output and useful work. Community is a women's superpower. Women often collaborate with "competitors" with zero weirdness. That community-of-practice energy is exactly what AI learning needs. For women-led teams: start with pain. Chris's first practical move: ask your team what task they hate most, then use AI to reduce or remove that pain point to build buy-in. Schedule learning like leadership. Don't push AI learning to 10pm. Put it on the calendar during work hours. Your development is part of the job. Grants can fund AI training and tech upgrades. 
Chris reminds listeners that many grants support technology advancement and hiring expertise - even for non-tech businesses - and AI can reduce the pain of grant writing. Episode highlights [00:03] Meet Chris McMartin + the Scotiabank Women Initiative. [02:00] "10pm on the couch" and why time poverty shapes women's learning. [02:44] The cookie analogy: AI saves time later, but learning costs time now. [05:00] Women using AI 1%: safe tasks without asking questions. [06:46] Why this matters: many "at risk" roles are held by women. [09:35] "AI is cheating" + the grammar glow-up story. [11:42] "Ask AI questions - AI doesn't judge you." [13:00] Relationship mindset: don't be transactional with AI; ask better questions. [16:21] "Hold the door" leadership and building rooms where women feel welcome. [21:43] Two tactical tips: solve a pain point first + schedule learning time. [33:48] Grants as a funding path for training and tech improvements. [38:57] Say yes to conversations even if you "don't know enough." [41:18] Where to find Chris + her podcast I Am Unbreakable. If you're a woman entrepreneur (or you lead women in your organization), take one action from this episode this week: Ask your team what task they hate most. Pick one painful workflow and test AI there first. Put one hour of AI learning on the calendar during business hours. And if you're already "in the room" with AI? Hold the door. Invite someone in.   Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. Connect with Chris McMartin on LinkedIn.
AI can feel like a creativity cheat code… or like the death of originality. In this short, punchy solo episode, Susan argues the truth is simpler: AI doesn't create creativity. It creates options. Creativity still belongs to the driver—your taste, courage, and point of view. Episode summary Susan tackles a question she hears constantly: does AI expand creativity or flatten it? Her answer: it depends on how you're using it. If you use AI like a photocopier—generate a first draft and ship it unchanged—you're not becoming more creative. You're becoming more efficient at being generic. But if you use AI like a paintbrush—as a sparring partner, a possibility engine, a constraint generator, or a remix assistant—it can shorten the distance between blank page and something you can shape. The episode is a practical reset on what creativity actually requires in the AI era: taste, discernment, point of view, and the courage to be a little weird on purpose.   Key takeaways AI doesn't create creativity. It creates options. Creativity still requires intent, judgement, and discernment. Paintbrush vs Photocopier is the core distinction. Photocopier: you accept the default output and publish it. Paintbrush: you use AI to generate raw material, then you curate and shape. People can "spot AI writing" when there's no point of view. Generic writing usually means the human outsourced the messy parts: emotional clarity, lived experience, and risk. Creativity needs both taste and courage. AI doesn't do courage. Taste is what you choose—and what you reject. Better prompts aren't about asking for "the answer." They're about asking for raw material: angles, metaphors, structures, constraints, and pushback. Ways to use AI like a paintbrush Try these modes when you feel stuck: Sparring partner: "Push back on this." "Argue the opposite." "What am I not seeing?" Possibility engine: "Give me 20 angles." "Give me metaphors." "Suggest surprising structures." Constraint generator: "Make this an 8-word active headline." "Explain it for a 12-year-old." "Turn it into a story with a clear villain." Remix assistant: "Turn this into a framework." "Make it a checklist." "Turn it into a debate." The mini exercise Susan gives you Pick something you're working on (a post, pitch, talk, plan). Then ask AI: What's the most boring version of this? What's the boldest version of this? What's the truest version of this for me? Then you decide what to keep, amplify, or reject. That's the creative act.   Episode highlights   00:02.36 — The core question: does AI make you more or less creative? 00:18.46 — "It depends who's driving" (AI amplifies you) 00:55.04 — Core belief: AI creates options, not creativity 01:23.10 — Paintbrush vs photocopier framing 01:38.81 — The photocopier trap: shipping first drafts unchanged 02:16.38 — Why people can "tell it's AI" (patterned, POV-less output) 03:03.35 — Creativity requires taste + courage (AI doesn't do courage) 03:19.86 — Choose the paintbrush path 03:37.29 — Use AI as a sparring partner (push back / argue opposite / what am I missing?) 03:47.10 — Use AI as a possibility engine (20 angles, metaphors, surprising structures) 04:18.08 — Use constraints to force originality (8-word headline, etc.) 
05:16.92 — What "taste" actually is: what you choose (and reject) 05:45.66 — Ask for raw material, not "the answer" 06:05.56 — Mini exercise setup (post, pitch, talk, plan) 06:21.82 — Three-question prompt: boring vs bold vs true 07:27.10 — When AI makes you less creative: avoiding the thinking 07:38.96 — Don't quit AI—change the prompt 07:51.10 — Ask for "stranger / truer / mine" 08:03.45 — Wrap: creativity is in the driver's hands 08:19.29 — Quick ask: rating + review   If you're feeling creatively stuck, don't ask AI to be creative for you. Ask it to help you explore—then use your taste to choose. And if you're enjoying this 30-day podcast-to-book sprint, leave us a quick rating + short review to help more people find the show Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.  
Host Susan Diaz is joined by Shona Boyd, a product manager at Mitratech, a SaaS company, and a proudly AI-curious early adopter, for a grounded conversation about what AI literacy actually means now. They talk about representation, critical thinking, everyday meet-you-where-you-are workflows, shadow AI, enterprise guardrails, and why leaders must stop chasing AI features that don't solve real user problems. Episode summary Susan introduces Shona Boyd - AI-curious early adopter and SaaS product manager—whose mission is to make AI feel less scary and more accessible. Shona shares how her approachable AI philosophy started in product work: she used AI to build audience insights and feedback loops when job seekers weren't willing to do interviews, and quickly realized two things: (1) AI wasn't going away, and (2) there weren't many visible women or people who looked like her leading the conversation. So she raised her hand as an approachable reference point others could learn from. From there, the conversation expands into what AI literacy has evolved into. It's no longer just "which tool should I use?" or "how do I write prompts?" Shona argues that today literacy is about critical thinking, learning how to talk to an LLM like a conversation, and choosing workflows that benefit from AI rather than chasing hype. They also get practical: Shona gives everyday examples (Medicare PDFs, credit card points, life admin) to show how AI can meet you where you are, without requiring you to build agents or become super technical. Finally, Susan and Shona go deep on organizational adoption: why handing out logins without policies is risky, how shadow AI shows up (hello, rogue meeting note-takers), why leadership sponsorship matters, and what companies should stop doing immediately: AI for the sake of AI. Key takeaways Representation changes adoption. When people don't see anyone who looks like them using AI confidently, they're less likely to lean in. Shona chose to be a visible, approachable point of reference for others. AI literacy has shifted. It's no longer mainly about which model or prompt frameworks. It's about: learning the language (LLM, GPT, etc.) staying curious building critical media muscles to evaluate what's true, what's AI, and what needs sources. Workflows aren't just corporate. A workflow is simply: tasks + the path to get them done. Shona's examples show AI can help with day-to-day life admin (PDFs, policies, benefits, points programs), which makes AI feel more approachable fast. The first output is not the final. "I can spot AI content" usually means people are publishing raw first drafts. High-quality AI use looks like: draft → critique → refine → human judgement. What good organizational training is NOT: handing out tool logins with no policy, no guidance on acceptable use, and no understanding of enterprise vs personal security. Shadow AI is already here. People are adding unapproved AI note-takers to meetings and uploading sensitive info into personal accounts. Blanket bans don't work - they push experimentation underground. Adoption needs product thinking. Shona suggests leaders treat internal AI like a product launch: run simple feedback loops (NPS-style checks) analyse usage patterns to find sticking points apply AI where it solves real pain, not where competitors are hyping features. Leadership ownership matters for equity. When AI is run department-by-department, you create "haves and have-nots" (tools, training, access). 
Top-down support plus safe guardrails reduces inequity and increases psychological safety. Spicy take: stop doing AI for the sake of AI. If you can't explain how an AI feature improves real user life in a non-marketing way, it probably shouldn't ship. Episode highlights [00:01] The 30-day podcast-to-book sprint and why leaders are still showing up in December. [01:14] Shona's origin story: using AI to build audiences and feedback loops in a job board context. [02:17] The visibility gap: not many women / people who looked like her in early AI spaces. [05:55] What AI literacy means now: critical thinking + conversation with an LLM + workflow selection. [07:16] "Workflows" made real: Medicare PDFs and credit card points examples. [10:13] Three essentials: foundational language, curiosity, and critical media literacy. [12:23] What training is NOT: handing out logins with no policy or guardrails. [15:49] Handling fear and resistance with empathy and a human-in-the-loop mindset. [23:27] Product lens on adoption: NPS feedback loops + usage analytics to find real needs. [28:14] Shadow AI: rogue note-takers, personal accounts, and why bans backfire. [31:17] Policies at multiple levels, including interviewing and candidate use of AI. [36:49] "Stop AI for the sake of AI" and the race to ship meaningless features. [39:13] Where to find Shona: LinkedIn (under Lashona).   Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.   Connect with Lashona Boyd on LinkedIn  
Most teams are stuck in tool obsession: "Should we build agents?" "Should we buy this AI platform?" In this solo, workshop-style episode, host Susan Diaz pulls you back to reality with a simple decision guide: buy vs bolt-on vs build, four leadership filters, and a practical workflow exercise to help you choose the right approach - without falling for agentic fantasies. Episode summary Susan opens with a pattern she's seeing everywhere: 75% of AI conversations revolve around tools - agents, platforms, add-ons - and they're often framed as all-or-nothing decisions. She reframes it: AI is best understood as robotic process automation for the human mind, not a single agent replacing a person or a department. This episode is structured like a mini workshop. Susan asks you to grab paper and map a real workflow step-by-step - because the decision isn't "which AI tool is hot" it's what job are we automating. Then she defines the three choices leaders actually have: Buy: purchase an off-the-shelf solution that works as-is. Build: create something custom (apps, integrated experiences, models). Bolt-on: the underrated middle path - use tools you already have (enterprise LLMs, suites), then add custom GPTs/projects, prompt templates, and lightweight automations. She introduces a six-level "ladder" from better prompts → templates → custom GPTs/projects → workflow automation → integrated systems → custom builds, and offers a gut-check on whether your "agentic dreams" match your organizational capacity. Key takeaways Start with the job-to-be-done, not the tool. The most common mistake is choosing tech before defining the workflow. A workflow is simply a chain of small tasks with clear verbs and steps. AI is RPA for your brain. Think "Jarvis" more than "replacement." It's about removing repetitive noise while keeping human judgement, discernment, and creativity in the lead. Buy vs Build vs Bolt-on: Buy when you need reliability, guardrails, enterprise support, and the use case is common (summaries, note-taking, analytics). Build when the workflow is your differentiation, data is proprietary, outcomes are strategic, and you can support ongoing maintenance and governance. Bolt-on for most teams: fast, cheaper, easier to change. Start by layering custom GPTs/projects and lightweight automation on top of existing tools and licences. Six levels of maturity (a ladder, not a leap): Better prompts (one-off help) Templates / prompt libraries (repeatable help) Custom GPTs / projects (consistent behaviour + knowledge) Workflow automation (handoffs between steps) Integrated systems (data + permissions + governance) Custom builds (strategic + resourced) Four decision filters for leaders: A) Repeatable workflow or one-off? B) Is the value in the tech itself, or in how you apply it? C) Data sensitivity and risk level? (enterprise controls matter) D) Do you have operating maturity to run it? (monitoring, owners, governance, feedback loops) Automation ≠ autopilot. Automation is great. Autopilot is abdication. If you ship first-draft AI output without review, you'll get "garbage in, garbage out" reputational risk. A simple friction-mapping exercise: Map a 10-step workflow (open, check, find, copy, rewrite, compare, ask someone, format, send, follow up). Circle the friction steps. Label each friction point: R = repeatable J = judgement-heavy D = data-sensitive Then choose: buy / bolt-on / build based on what dominates. Reality check for "agentic dreams": Before building: Do you have a documented workflow? 
Do you have a human owner reviewing weekly? Do you have a feedback loop? If not, you're building a liability, not a system. The real bet isn't build vs buy. It's this: "What repeatable work needs a personalised tool right now?" Episode highlights [00:02] Why most AI conversations are tool-obsessed (agents, platforms, add-ons). [01:50] "RPA for the human mind" + the Jarvis analogy. [04:14] Workshop setup: buy vs bolt-on vs build + decision filters. [05:15] Step 1: define the job-to-be-done (not the department). [08:13] The 10-step workflow template (open → follow up). [10:49] Definitions: buying AI vs building AI vs bolt-on AI. [14:13] The ladder: prompts → templates → custom GPTs → automation → integrated systems → builds. [16:42] Filter A: repeatable vs one-off (and why repeatable is bolt-on territory). [18:27] Filter C: data sensitivity and enterprise-grade controls. [19:45] Filter D: operating maturity—where agentic dreams go to die. [20:08] Automation vs autopilot (autopilot = abdication). [21:24] Circle friction points + label R/J/D to decide. [25:42] Reality check: documented workflow, owner, feedback loop. [26:33] The takeaway: personalised tools for repeatable work beat agent fantasies.   Try the exercise from this episode with your team this week: Pick one recurring, annoying-but-important job. Map it in 10 simple steps. Circle friction points and label R / J / D. Decide: buy, bolt-on, or build—and write: "For this workflow, we will ___ because the biggest constraint is ___."   Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.  
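As a companion to the friction-mapping exercise, here is a minimal sketch of the R / J / D labelling in code, assuming you have already written out your ten steps and circled the friction points. The step names, the labels, and the buy / bolt-on / build mapping are one illustrative reading of the episode's filters, not a formal rubric.

```python
from collections import Counter

# Friction points from a mapped workflow, each labelled:
#   R = repeatable, J = judgement-heavy, D = data-sensitive
friction_points = {
    "copy details between two systems": "R",
    "rewrite the summary for the client": "R",
    "decide which option to recommend": "J",
    "pull figures from the finance export": "D",
}

counts = Counter(friction_points.values())
dominant_label, _ = counts.most_common(1)[0]

# Rough rule of thumb: repeatable friction suits the bolt-on ladder (templates,
# custom GPTs, light automation); data-sensitive friction pushes toward buying
# tools with enterprise controls; judgement-heavy friction stays human-led.
suggestion = {
    "R": "bolt-on (templates, custom GPTs, light automation)",
    "D": "buy (enterprise-grade controls first)",
    "J": "keep human-led; use AI as a thinking partner only",
}[dominant_label]

print(f"Dominant friction type: {dominant_label} -> starting point: {suggestion}")
```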
Host Susan Diaz sits down with her business buddy and go-to-market consultant Suzanne Huber to talk about what AI has actually changed in marketing. Together they explore AI as "robot arms" (an extension of expertise), why first-draft AI content gets a bad rap, how modern marketers use AI for research, planning, editing, and proposals, and why thought leadership and personal brand matter more than ever. Episode summary Susan and Suzanne have been talking about AI since 2022. In this episode, they make it official. Suzanne introduces a metaphor that sticks: AI as "robot arms". You're still the driver. AI can extend your reach, speed up the grunt work, and help you close expertise gaps—but it still needs human judgment, critical thinking, and craft. They compare marketing before vs after AI: headlines, research, applying feedback, simplifying complex plans into executive-friendly formats, cross-checking sources (especially Canadian vs US), and building repeatable workflows with custom GPTs. They also tackle the bigger questions: Does expertise still matter? Is personal brand becoming more important in the age of AI? What should writers do if they feel threatened? Spoiler: AI can speed up output. But insight, values, differentiation, and taste are still the human edge.   Key takeaways AI is "robot arms" not a replacement brain. It's an extension of expertise. You still need to steer, evaluate quality, and avoid publishing raw first drafts that can damage trust. First-draft AI is the content factory problem. AI-assisted content gets a bad reputation when junior-level or high-volume systems publish credible-sounding fluff with no real subject matter expertise behind it. Craftsmanship still matters. Marketing got faster because the grunt work collapsed. Headlines, rewrites, reformatting, applying feedback, outlining, and turning long documents into charts/tables can happen in minutes - not hours. You still refine, but you're starting from a better baseline. Research and fact-checking changed dramatically. Instead of trawling search results for hours (and getting US-default sources), AI tools can surface targeted sources fast - then humans choose what's credible and relevant. Custom GPTs shine for repeatable processes. Susan shares how she uses custom GPTs (including MyShowrunner.com) for guest research, interview questions, emails, and packaged deep research briefs - turning recurring work into reusable systems. Expertise always matters - especially for positioning and thought leadership. Differentiation, values, hot takes, and human intuition are what attract the right people (and repel the wrong ones). AI can assist, but it can't replace lived POV. Personal brand matters more in the age of AI. As audiences get more suspicious of generic content and AI avatars, trust increasingly attaches to real humans with visible ideas, proof, and consistency. For writers who feel threatened: use it or get outpaced. AI can accelerate production for factual formats (press releases, timely content). Writers who combine craft + AI + fast learning become the force multipliers. But journaling/introspective writing still belongs to the human-only zone. Episode highlights [01:29] Suzanne's "robot arms" metaphor: AI as an extension of expertise. [02:47] Why first-draft AI should never leave your desk. [03:56] The telltale signs of lazy AI writing (and why it gets a bad rap). [05:00] Before vs after AI: the research + writing process changes. [07:24] Simplifying complex work: plans → tables → charts for execs. 
[09:10] Deep research for Canadian sources without wasting hours. [10:25] Custom GPT workflows (MyShowrunner + research briefs). [12:29] Where expertise still matters in an AI-saturated world. [16:56] Personal brand: attracting the right people + repelling the wrong ones. [20:00] AI for proposals and even pricing guidance. [22:00] Advice for writers who feel threatened by AI. If you've been resisting AI because you're worried it will erase your craft, try this reframing: Use AI for the grunt work. Keep the human parts for the parts that build trust: taste, judgement, voice, and values. And if you want a simple starting point, ask yourself: What could use "robot arms" in your marketing workflow this week - headlines, research, rewrites, proposals, or planning? Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.   Connect with Suzanne Huber on LinkedIn.   
AI doesn't fail in organizations because the tools are bad. It fails because culture is glitchy. In this solo episode, host Susan Diaz explains why AI is just the "app" while your organizational culture is the real operating system - and she shares six culture pillars (plus practical steps) that determine whether AI adoption becomes momentum… or messy risk. Episode summary Susan reframes AI adoption with a simple metaphor: AI tools, pilots, and platforms are "apps". But apps only run well if the operating system - your culture - is healthy, because AI is used by humans, and humans have behaviour norms and value incentives, safety, and trust. She connects this to the "experiment era" where organizations see unsupervised experimentation, shadow AI, and uneven skill levels - creating an AI literacy divide if leaders don't intentionally design expectations and values. From there, Susan defines culture plainly ("how we think, talk, and behave day-to-day") and traces how it shows up in AI: what people feel safe admitting, whether experiments are shared or hidden, how mistakes are handled, and who gets invited into the conversation. She then walks through six pillars of purposeful AI culture and closes with tactical steps for leaders: naming principles, building visible rituals, supporting different AI archetypes, aligning incentives, and communicating clearly. Key takeaways Stop treating AI like a one-time "project". AI adoption doesn't have a clean start/end date like an ERP rollout of yore. Culture is ongoing, and it shapes what happens in every meeting, workflow, and decision. The "experiment era" creates shadow AI and uneven literacy. If unsupervised experimentation continues without an intentional culture, you get risk and a widening gap between power users and everyone else. Six pillars of an AI-ready culture: Experimentation + guardrails - Pro-learning and pro-safety. Define sandboxes and simple rules of the road, not 50-page legal docs. Psychological safety - People won't admit confusion, ask for help, or disclose risky behaviour without safety. Leaders modelling "I'm learning too" matters. Transparency - A trust recession + AI makes honesty essential. Encourage show-and-tell, logging where AI helped, and "we're not here to punish you" language. Quality, voice, and ethics - AI can draft, humans are accountable. Define what must be human-reviewed and what "good" looks like in your brand and deliverables. Access + inclusion - Who gets to play? Who gets training? Avoid new "haves/have-nots" dynamics across departments and demographics. AI literacy is a survival skill. Mentorship - Champions programs and pilot teams only work if mentorship is real and resourced (and doesn't become unpaid side-of-desk work). Four culture traps to avoid: Compliance-only culture (all "don't", no "here's how to do it safely"); Innovation theatre (demos and buzzwords, no workflow change); Hero culture (1-2 AI geniuses and nothing scales); Silence culture (confusion and shadow AI stay hidden and leadership thinks "we're fine"). Culture is the outer ring around your AI flywheel. Your flywheel (audit → training → personalized tools → ROI) compounds over time, but culture is what makes the wheel safe and sustainable. Episode highlights [00:01] AI is a tool. Culture is the system it runs on. [01:30] The experiment era: shadow AI and unsupervised adoption. [02:01] The AI literacy divide: some people "run apps," others can't "install them." [03:00] Culture defined: how we think, talk, and behave—now applied to AI.
[04:56] Pillar 1: experimentation + guardrails (sandboxes + simple rules). [07:23] Pillar 2: psychological safety and the shame factor. [11:37] Pillar 3: transparency in a trust recession. [13:57] Pillar 4: quality, voice, ethics—AI drafts, humans are accountable. [16:33] Pillar 5: access + inclusion—AI literacy as survival skill. [19:00] Pillar 6: mentorship and avoiding unpaid "champion" labour. [23:31] Four bad patterns: compliance-only, innovation theatre, hero culture, silence culture. [25:47] The closer: AI is the latest app. Culture is the operating system. If your organization is buying tools and running pilots but still feels stuck, ask: What "AI culture" is forming by default right now - compliance-only, hero culture, silence? Which one pillar would make the biggest difference in the next 30 days: guardrails, safety, transparency, quality, inclusion, or mentorship? What ritual can we introduce this month (show-and-tell, office hours, workflow demos) to make AI learning visible and normal? Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.  
Why are women adopting AI at lower rates than men - and what's really going on underneath the stats? In this episode, host Susan Diaz and Heather Cannings, Women Entrepreneurship Program Lead at InVenture and producer of StrikeUP Canada, dig into time poverty, "cheating" fears, late-night upskilling, and what real support for women entrepreneurs needs to look like in an AI-forward world. Episode summary Susan is joined by Heather Cannings, who leads women's entrepreneurship programs at InVenture and runs StrikeUP, a national digital conference that reaches thousands of women entrepreneurs across Canada and beyond. Heather shares what she's seeing on the ground: huge curiosity about AI, mixed with pressure, fatigue, and a sense that it's "one more thing" women are expected to learn on their own time. Many of the women she serves are juggling multiple roles - business owner, employee, caregiver - then experimenting with AI at 10-11 pm after the workday and bedtime routines are done. They unpack the emotional layer too: why AI still feels like "cheating" or being an imposter for many women; the question of whether you have to disclose using AI; and how to reconcile charging premium prices while using AI behind the scenes. Susan and Heather link lower AI adoption rates among women to a wider pattern: another version of the wage gap and systemic inequities in who has time, safety, and support to skill up. The conversation then turns to: Practical, real-world use cases women are already using AI for (grant writing, content, summarizing long docs). What good support systems actually look like for time-strapped entrepreneurs (designing for constraints, not fantasy calendars). How small, scrappy businesses and large organizations can learn from each other on speed, governance, and risk. The uncomfortable reality that many roles most at risk of AI automation - admin, entry-level comms, research - are heavily female. They close with a hopeful lens: how women can use AI to increase their value and control over time and income, why this moment is a genuine opportunity for democratization, and how Heather's StrikeUP event is trying to meet women exactly where they are. Key takeaways AI doesn't feel neutral for women - it feels like another test. Many women entrepreneurs are curious about AI but also feel judged, worried about "doing it wrong," or like they're cheating if they use it. Imposter syndrome shows up as: "Is this really my work if AI helped?" Time poverty is the real barrier, not lack of interest. Heather sees women using AI at 10-11 pm after full workdays and caregiving, trying to finish newsletters, social posts, or grant drafts. They are upskilling - just in stolen moments, not spacious strategy sessions. Support systems must be designed for real constraints. Don't assume people have: unlimited time, teams, strong internet, or quiet workspaces. Many women join digital events from cars, back rooms, or storage areas between tasks. Training and support must be consumable, flexible, and realistic. One-off AI webinars aren't enough. A single 60-minute "intro to AI" often just generates an overwhelming to-do list. What works better: smaller, workshop-style sessions; hands-on guidance on a specific task or tool; and practical, "do it in the room" support so women leave with something done, not just inspired. Women are already using AI for practical, high-impact tasks.
Common use cases include: writing and improving copy; content planning and social media; summarizing long documents; and drafting grants and pitches. The focus is on time savings, staying within tight budgets, and safely getting more done—not chasing cutting-edge AI for its own sake. Enterprise and small business can - and should - learn from each other. Big firms: bring resources, governance, and policy thinking. Small businesses: bring speed, scrappiness, and the ability to implement immediately. Ecosystem players (non-profits, funders, educators) can translate between the two and help find a healthy middle ground. There's a gendered risk in AI-driven job change. Roles often flagged as "at risk" - admin, entry-level comms, research - are heavily staffed by women. Without intentional upskilling and redeployment, AI could quietly deepen existing inequities. There's also real opportunity. AI can be a "quiet force in the background" that removes 5-10 hours of repetitive work a week - enough to change a woman's lifestyle, income, and capacity. It can help women move up the ladder, redesign roles, or reshape their businesses around higher-value work. Designing AI with women's realities in mind matters. Women shouldn't just be users; they should help shape how tools are designed, so AI reflects real constraints like caregiving, part-time work, and patchy access - rather than assuming a mythical founder with unlimited time and support. Episode highlights [00:01] Susan sets the scene: 30 episodes in 30 days and how Heather fits into the series. [00:57] Heather introduces InVenture and her role as Women Entrepreneurship Program Lead, plus the StrikeUP conference. [01:55] Why AI remains a hot topic for StrikeUP's audience of women entrepreneurs. [02:57] AI as a catch-22: it can save time, but learning it feels like "one more thing." [03:56] "Is this cheating?" – women's fears about using AI and being judged. [05:09] AI, transparency, pricing, and the complexity of "should I tell clients I used AI?" [05:39] How this ties to stats showing women adopting AI 25% less than men—and why Susan sees it as another version of the wage gap. [07:07] Draft vs final: why treating AI output as a first draft, not finished work, is crucial. [08:33] The problem with generic, AI-generated content about "women in AI" that sounds impressive but says very little. [09:20] Real-world use cases Heather sees among small business owners. [10:22] The 11 pm pattern: women learning AI in stolen, exhausted moments. [12:06] Why women are resilient and experimenting—but lack daytime access to deep learning and setup time. [13:27] Designing support systems that don't assume unlimited time, teams, or bandwidth. [14:24] Making training consumable, recorded, and accessible from phones, cars, and storage rooms. [15:34] Why one-off webinars don't work—and the case for small, workshop-style sessions. [18:09] What big firms can learn from scrappy entrepreneurs (and vice versa). [20:10] The myth that corporates "have it all figured out" on AI. [22:19] AI and job loss: the gendered impact on admin, entry-level comms, and research roles. [23:20] Reframing: how women can use AI to increase their value and move up. [25:16] Adaptation over doom: calculators, the internet, and why we'll adjust again. [27:04] Heather's vision: AI as a quiet force helping women gain more control over time and income. [28:41] StrikeUP 2025 details: date, format, giveaways, and on-demand access.
If you support or are a woman entrepreneur, use this episode as a prompt to ask: Where are women in your world already using AI - in stolen moments - and how could you meet them there with better support? How can you design AI training and tools that assume real constraints, not fantasy calendars? What's one concrete way you can help a woman in your ecosystem use AI to increase her value and control, instead of feeling like she's at risk of being automated away? Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. You can learn more about StrikeUP and register for the free digital conference at strikeup.ca and connect with Heather Cannings on LinkedIn.
Some AI projects in your organization feel weirdly easy. Others feel impossible. In this solo workshop-style episode, host Susan Diaz introduces the Four AI Cliff Archetypes - Divers, Pathfinders, Operators, and Bridge Builders - and shows how understanding your mix of people (not just your tools) explains most of your AI momentum or lack thereof. Episode summary Susan opens with a familiar problem: in the same organization, some AI projects glide and others grind to a halt. The difference, she argues, isn't the tech - it's how different types of people respond when they hit a "cliff" moment, where the familiar path disappears and AI represents a big, unknown drop. Drawing on personality and operating-style frameworks like Kolbe for inspiration, she introduces the Four AI Cliff Archetypes: Divers - jump first, learn in motion, create raw experiments and speed. Pathfinders - map risk and opportunity, research, and ask the hard questions. Operators - take a plan and run it, turning ideas into executed workflows. Bridge Builders - turn chaotic experiments into systems, documentation, and "this is how we do it here" Listeners are invited to score themselves 0-10 on each type as Susan walks through how each archetype behaves at the cliff, what sentences give them away, and how they help or hurt AI adoption if unmanaged. She then moves from personal reflection to organizational design: how to sequence work so each type shines in the right place - especially across the AI flywheel of audit, training, personalised tools, and ROI. She closes with a "cliff to bridge" sequence - Divers jump, Pathfinders map, Operators ship, Bridge Builders scale - and a practical homework exercise for mapping real people on your leadership team to each archetype so you can stop fighting human behaviour and start designing with it. Key takeaways The friction isn't just tools, it's temperament. AI feels like a cliff: the path ends, the map is unclear, the bottom is invisible. People respond to that uncertainty in patterned ways - and those patterns shape your AI projects. The Four AI Cliff Archetypes: Divers - "Let's just try it." Early experimenters who move fast, download tools before memos, and learn by motion. They create velocity and risk (shadow AI, lack of documentation, burnout). Pathfinders - "Hold on, what does this do?" Risk scanners who research, ask for evidence, and think about policy and edge cases. They prevent disasters but can get stuck in analysis. Operators - "Tell me the plan and I'll run it." Execution machines who thrive on clear outcomes, ownership, timelines, and metrics. They build powerful machines… which can be pointed at the wrong target if leadership is vague. Bridge Builders - "No one should have to jump this every time." System designers who create repeatable workflows, playbooks, and training so experiments become infrastructure. They can over-engineer too early if they don't have real-world data. No one type is "best" - you need a mix. A team full of Divers = chaos. Pathfinders-only = analysis paralysis. Operators-only = beautifully executed wrong things. Bridge Builder-only = process with no proof. Balance beats dominance. Sequence the humans, not just the tasks. Susan offers a simple sequence for AI initiatives: Divers jump - generate raw experiments and discover real use cases. Pathfinders map - assess risk, compliance, and opportunity. Operators ship - turn what works into pilots and deployed workflows. 
Bridge Builders scale - standardize, document, and build bridges so others can cross safely. Map archetypes onto your AI flywheel. In audit, Pathfinders and Bridge Builders lead with Divers exposing shadow systems. In training, Bridge Builders and Operators lead while Divers provide examples. For personalized tools and ROI tracking, all four types play different roles - from prototyping to governance to metrics. Design for behaviour, don't fight it. You can't force Divers to become Pathfinders or Operators to become Bridge Builders. You can design projects, governance, and sequencing so each type does the work they're naturally wired for - reducing friction and accelerating adoption. Episode highlights [00:02] Why some AI projects feel easy in your org—and others feel impossible. [00:26] "It's not the tools. It's the people." Setting up the archetype model. [01:16] The cliff metaphor: the path ends, the map is unclear, and AI = the drop. [01:57] Inspiration from Kolbe and operating modes for creating these archetypes. [03:11] Introducing the four types: Divers, Pathfinders, Operators, Bridge Builders. [04:14] How to play along: scoring yourself 0–10 on each archetype. [04:53] Deep dive on Divers: language, strengths, and how they accidentally create shadow AI. [06:41] The "sandbox plus guardrails" playbook for managing Divers (including burnout protection). [08:02] Pathfinders: risk scanning, research, and how to avoid permanent evaluation mode. [09:37] Two-week sprints and one-page memos as tools to keep Pathfinders moving. [11:02] Operators: "tell me the plan and I'll run it," and why goals matter more than tools. [13:04] Translating AI into workflows and metrics Operators can own. [14:22] Bridge Builders: turning chaos into infrastructure and culture ("this is how we do it here"). [15:40] Pairing Divers + Bridge Builders, and Pathfinders + Bridge Builders, to avoid over-engineering. [17:27] Why a team full of any single archetype breaks your AI efforts in predictable ways. [18:35] Mapping each archetype onto the AI flywheel: audit, training, tools, ROI. [21:28] Applying the model to your leadership team: spotting overloads and missing roles. [22:37] The "cliff to bridge" sequence: Divers jump, Pathfinders map, Operators ship, Bridge Builders scale. [23:38] Homework: map one current AI initiative against the four archetypes and adjust who does what. Use this episode as a mini workshop for your next AI initiative: Score yourself across Diver, Pathfinder, Operator, Bridge Builder. Pick one real AI project and write actual names next to each type on your team. Ask: "Where are we overloaded, where are we missing a type, and how can we re-sequence the work so each archetype shines at the right moment?" That's how you stop treating AI like a terrifying cliff - and start treating it like a crossing your whole team actually knows how to make. Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
What does it really mean to build an AI-forward company that is still deeply human-first? In this episode, host Susan Diaz and senior HR leader and mentor culture advocate Helen Patterson talk about jobs, guardrails, copyright, environmental impact, and why mentorship and connection matter more than ever in the age of AI. Episode summary Susan is joined by Helen Patterson, founder of Life Works Well, senior HR leader, and author of the upcoming book Create a Mentor Culture. They start with a Y2K flashback and draw a straight line from past tech panics to today's AI headlines. Helen shares why she sees AI as the latest evolution of technology as an enabler in HR - another way to clear the admin and grunt work so humans can focus on growth, development, and real conversations. From there, they dig into: The tension between "AI will kill jobs" and tens of thousands of new AI policy and governance roles already posted. How shadow AI shows up when organizations put in blanket "no AI" rules and people just reach for their phones anyway. The very real issues around privacy, copyright, and intellectual property when staff feed proprietary material into public models. The less-talked-about environmental impact of AI and why leaders should demand better facts and more intentional choices from tech providers. In the second half, Helen brings the conversation back to humanity: mentorship as a counterweight to disconnection, her One Million Mentor Moments initiative, and how everyday "micro-mentoring" at work can help people adapt to rapid change instead of being left behind. They close with practical examples of using AI for good in real life - from travel planning and research to late-night dog-health triage - without letting it replace judgement. Key takeaways This isn't our first tech panic. From Y2K to applicant tracking systems, HR has always framed tech as an enabler. GenAI is the newest layer, not an alien invasion. Looking back at history helps calm "sky is falling" narratives. Jobs are changing, not simply disappearing. Even as people worry about AI-driven job loss, platforms like Indeed list tens of thousands of AI policy and governance roles. The work is shifting toward AI-forward skills in every function. Blanket "no AI" rules don't work. When organizations ban external tools or insist on only one locked-down platform, people quietly use their own devices and personal stacks anyway - creating shadow AI with real privacy and IP risk. Guardrails and education beat prohibition. Copyright and confidentiality need more than vibes. Without clear guidance, staff will copy proprietary frameworks or documents into public models and re-badge them. Leaders need simple, well-communicated philosophies about what must not go into AI tools. Environmental impact is part of human-first. Training and running large models consumes energy. The real solution will be systemic (how tech is built and powered), but individuals and organizations can still use AI more efficiently, just like learning not to leave all the lights on. Mentorship is the ultimate human technology. Helen's work on Create a Mentor Culture and One Million Mentor Moments reframes mentoring as everyday, one-conversation acts that share wisdom, reduce fear, and help people reskill for an AI-forward world. Tech should support that, not replace it. Upskilling beats layoffs. 
When roles change because of AI, the most human-first response isn't to cut people loose, it's to invest in learning, mentoring, and redeployment so existing talent can grow into new, AI-augmented roles. Use AI to simplify life, not complicate it. From planning multi-country trips to triaging whether the dog really needs an emergency vet visit, smart everyday use of AI can save time, money, and anxiety - freeing up more space for the work and relationships that actually matter. Episode highlights [00:01] Susan sets the scene: 30 episodes in 30 days to build Swan Dive Backwards in public. [00:39] Helen's intro: Life Works Well, heart-centred high-performance cultures, and her focus on mentorship. [03:43] What an AI-forward and human-centred organisation looks like in practice. [04:00] Y2K memories and why today's AI panic feels familiar. [06:11] 25–35K AI policy jobs on Indeed and what that says about the future of work. [07:49] Jobs lost vs jobs created—and why continuous learning is non-negotiable. [15:19] The danger of "everyone is using AI" with no strategy or safeguards. [19:25] Shadow AI, personal stacks, and why hard bans don't stop experimentation. [21:13] A real-world IP scare: proprietary material pasted into GPT and re-labelled. [23:06] GPT refusing to summarise a book for copyright reasons—and why that's a good sign. [24:03] The case for a simple AI philosophy doc: purpose, principles, and communication. [25:24] Environmental concerns, fact-checking, and the server-room-to-laptop analogy. [30:17] New social media laws for kids and what they signal about tech accountability. [30:41] One Million Mentor Moments: why one conversation can change a career. [31:22] From elite programmes to everyday mentor cultures inside organisations. [35:01] AI for mentoring and coaching: bots, big-name gurus, and internal use cases. [36:30] Using AI for travel planning, research, and everyday life admin. [37:35] Susan's story: using AI to triage a dog-health scare instead of doom-scrolling vet sites. [38:37] Life Works Well's roots in work–life harmony and simplifying with tech. [39:35] Where to find Helen online and what's next for her book. If you're leading a team (or a whole organization), use this episode as a prompt to ask: Where are we treating AI as a tool in service of humanity - and where are we forgetting the human first? Do our people actually know what's OK and not OK to put into AI tools? How could we use mentorship - formal or informal - to help our people navigate this shift instead of fearing it? Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. You can connect with Helen Patterson on LinkedIn and follow her work on Create a Mentor Culture and One Million Mentor Moments via lifeworkswell.ca
Most leaders talk about AI in terms of pilots, projects, and one-off tools. In this solo episode, host Susan Diaz explains why that mindset stalls adoption - and introduces the idea of an AI flywheel: a simple, compounding loop of audit → training → personalized tools → ROI that quietly turns experiments into momentum across your whole organisation. Episode summary Susan opens by contrasting how most organizations approach AI - pilots, isolated chatbots, a few licences to "see what happens" - with how enduring companies build flywheels that compound over time. Borrowing from Jim Collins' Good to Great, and examples like Amazon's recommendation engine, she reframes AI from "one big launch" to a heavy wheel that's hard to move at first, but almost impossible to stop once it's spinning. She then introduces her AI flywheel for organizations, built on four moving pillars: Audit - reality-check where AI already lives in tools, workflows, risks, and guardrails. Training - raise the floor of AI literacy so more people can safely experiment. Personalised tools and workflows - move beyond generic prompts into department- and workflow-specific systems. ROI tracking - measure time saved, errors reduced, risk reduced, and adoption so the story keeps getting funded. Instead of a linear checklist, these components form a loop - each turn of the wheel making the next easier, and creating an unfair advantage for organizations that start early. Finally, Susan adds the outer ring: human-first culture and governance as the operating system around the flywheel - psychological safety, champions and mentors, and values like equity that ensure AI momentum doesn't quietly recreate hustle culture or leave people behind. She closes with practical questions any leadership team can use this week to start their own AI flywheel. Key takeaways Projects start and end. Flywheels don't. Treating AI as a string of pilots and vendor launches creates start–stop energy. Designing a flywheel turns every experiment into input for the next win. A flywheel is heavy at first - but gains unstoppable momentum. Like a giant metal train wheel, it needs a lot of initial force, but each full turn adds speed. AI works the same: early experiments feel slow, compounding learning later feels unfairly fast. The AI flywheel has four core pillars: Audit - map current tools, workflows, risks, and guardrails; discover hidden wins and power users. Training - treat AI like financial literacy: a minimum viable level for everyone so they can ask better questions and prompt more effectively. Personalised tools & workflows - stop asking "Which LLM?" and start asking "Which steps in this 37-step process should AI do?" Workflow first, tool second. ROI tracking - measure time saved, errors reduced, faster time to market, risk reduction, and % of AI-augmented workflows so leaders keep investing. Culture is the operating system around the flywheel. Without psychological safety, people hide experiments. Without support, power users burn out. Values like equity matter: who's getting trained, who has access, and who you're helping reskill. Governance should feel like guidance, not punishment. You don't build an AI flywheel in a day. You start with one audit, one workflow, one dashboard that makes things more transparent - and commit to one small centimetre of momentum at a time. Episode highlights [00:02] Why "we're piloting a chatbot" is not a strategy. [01:34] Flywheel 101: the train-wheel analogy and why momentum beats one-off effort. 
[03:19] Amazon's recommendation engine as a classic business flywheel. [05:02] Applying Jim Collins' Good to Great flywheel lens to AI initiatives. [05:30] From big bang ERP-style AI projects to small, compounding loops. [08:00] Introducing the four pillars: audit, training, personalised tools, ROI. [08:53] Audit as reality check: surfacing hidden wins and DIY power users. [11:14] Training as "raising the floor" of AI literacy. [14:08] Workflow-first thinking and the myth of the single all-powerful agent. [17:33] ROI stories: error reduction, faster time to market, and risk reduction. [20:19] Culture as outer ring: psychological safety, champions, values in action. [23:06] Starting your flywheel: three questions for your leadership team. Use this episode as a design tool, not just a definition. Grab a whiteboard with your leadership team and map: Where are we already auditing, training, personalising tools, and measuring ROI - however informally? Where is the wheel broken, or missing entirely? What's one centimetre of movement we can create this quarter - one audit, one workflow, one dashboard - to start our AI flywheel turning? Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.  
What does modern sales leadership look like when AI is in the mix? In this episode, host Susan Diaz and sales leadership coach Kirsten Schmidtke unpack how AI and humanity can peacefully coexist in sales, why scale starts with clarity and process (not tools), and how leaders can shift from output-obsessed hustle to outcome-focused, identity-level leadership in an AI-forward world. Episode summary Susan sits down with sales leadership consultant Kirsten Schmidtke to talk about AI, scale, and the "identity-level shifts" leaders need to make in modern sales. They start at the intersection of mindset and skillset - why AI is now part of the sales skill stack, but can't replace the human mindset, judgment, and presence required to sell well. Kirsten shares how sales organizations have moved from using AI as a basic copy/research tool to embedding LLMs in meetings, CRMs, and internal platforms, and even building their own AI features once they deeply understand their market and product. From there, they zoom out to the trust recession, spammy AI outreach, and the difference between being AI first and AI forward. They discuss AI as a way to free people into their zone of genius (hello, The Big Leap), the historical pattern of tech disruption and new job creation, and why AI should be seen as a massive upgrade to human potential - not a replacement. In the second half, they dig into scale and operations: why AI will only scale chaos if you don't have clear goals, processes, and SOPs. Why many sales orgs still lack documented sales and go-to-market processes. And how documenting before automating is the hidden unlock for using AI well. Kirsten closes with her identity-based leadership model (be → do → have), her outcome-over-output philosophy, and practical invitations for leaders who want to use AI to reduce burnout instead of fuelling hustle culture. Key takeaways Modern sales lives at the intersection of AI and humanity. AI is becoming part of the sales skillset, but the mindset - who you are being as a leader or seller - still drives how effectively those tools get used. Sales orgs have evolved past AI as copy tool. Early use was mostly email drafting and light research. Now teams are: choosing an LLM of choice (ChatGPT, Copilot, Perplexity, etc.) and tailoring it to their sales strategy embedding AI in meeting tools to surface questions and summaries in real time building AI into internal platforms based on deep knowledge of market, product, and GTM. We're in a trust recession - and lazy AI is making it worse. Spray-and-pray LinkedIn DMs and generic AI pitches erode trust and make buyers more sceptical and confused. Being AI forward means intentional, human-centred use of AI, not pushing AI for its own sake. AI should move you toward your zone of genius, not further into busywork. Borrowing from Gay Hendricks' The Big Leap, Kirsten and Susan talk about AI as a way to strip away tasks in your zones of incompetence/competence so you can spend more time in your zone of genius - and potentially unlock higher human experiences and contribution. Scale requires clarity and process before tools. AI isn't a magic scale button. Without a clear what and why, it can't help with the how. Leaders must: define the outcome and purpose of what they're scaling decide what not to do document the current process (SOPs) before asking AI to automate or optimise it. Otherwise AI just scales the chaos. Most salespeople are executors, not system builders. 
They're brilliant at doing the thing - calls, meetings, negotiation - but often not trained to design processes and ops. Pairing them with ops-minded people (and AI) to document and structure their best practices is where real scale lives. Identity-level leadership: be → do → have. Instead of "when I have the title, I'll be a leader", Kirsten coaches leaders to start with identity: "I am the leader of an AI-forward sales organization." That identity shapes thinking, then actions, then results. Shift from output to outcomes to avoid AI-fuelled burnout. If you treat AI as a way to cram more tasks into the same day, you just recreate hustle culture. Focusing on outcomes (what actually changes for customers, teams, and the business) allows you to use AI to create space - for thinking, rest, and higher-value work - instead of filling every spare minute. Episode highlights [00:01] Meeting Kirsten and why you can't talk about modern sales without talking about AI. [01:07] Mindset + skillset at the intersection of AI and humanity in sales. [02:35] How sales orgs first used AI as a copy / research tool—and what's changed. [04:45] Embedding AI in meetings and tools vs building AI features in-house. [06:11] The "spray and pray" LinkedIn problem and AI's role in the trust recession. [08:53] Being "AI forward" instead of "AI first." [10:39] Why humans remain safe: discernment, judgment, spidey senses, and taste. [11:39] Arianna Huffington, Thrive, and using AI to free time for human development. [13:19] The Big Leap and using AI to move into your zone of genius. [17:01] Tech history, job loss, and why we're in the messy middle of another big shift. [19:34] What scale really means: more impact with less time and effort. [20:33] Why AI can't fix a lack of clarity—and how it can accidentally add work. [23:32] "AI will scale the chaos" if you skip documentation and SOPs. [25:08] Salespeople as executors, not ops designers, and the power of pairing them with systems people. [27:47] Branding, buyer clarity, and why AI can't replace the hard work of positioning. [31:00] Identity-level shifts for leaders: adopting "I am…" statements. [35:21] AI and burnout: from productivity for productivity's sake to outcome-focused leadership. [37:25] Newtonian vs Einstein time and rethinking how we use the time AI frees. [39:59] "Outcome over output" as a leadership mantra in the age of AI. [40:38] Kirsten's invitation: a Sales Leader Power Hour to work on your mindset and identity. If you're leading a sales team - or are the sales team - and you're feeling the tension between AI, scale, and leadership start here: Pick one sales process and document it end-to-end. Identify one step where AI could genuinely reduce effort or time. Ask, "Who do I need to be as a leader of an AI-forward sales org?" and let that identity shape your next move. Connect with Susan Diaz on LinkedIn to get a conversation started.   Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. To go deeper on mindset and identity shifts, connect with Kirsten Schmidtke on LinkedIn and book a Sales Leader Power Hour here: https://www.kirstenschmidtke.com/sales-leader-power-hour   
Midway through a 30-episodes-in-30-days podcast-to-book sprint, host Susan Diaz gets honest about what's working, what's hard, and how she's actually using AI as a thinking partner, draft machine, pattern spotter, and quiet project manager - plus what leaders can learn from this for their own AI experiments.   Episode summary This solo episode is a behind-the-scenes check-in from Susan's "completely unhinged" (her words) experiment to record 30 episodes in 30 days as the raw material for her next book. Nine episodes into twelve days, she talks candidly about fatigue, capacity, and why she refused to skip this recording even though she could have. She pulls back the curtain on the very practical ways she's using AI to structure ideas, draft assets, spot patterns across episodes, and manage the subtle project/energy load of a sprint like this. Then she zooms out to translate those lessons for founders and teams: why consistency beats intensity, why experiments are allowed to be small and honest, and why capacity has to be part of your AI strategy instead of an afterthought. Key takeaways This sprint is a live experiment in sustainability, not heroics. The goal isn't to "win" 30 episodes perfectly, it's to see what pace, support, and structure actually make ambitious AI-powered work sustainable for a real human. AI is a thinking partner first. Susan uses voice input in her LLM to dump messy thoughts, then asks it to shape them into outlines, angles, and talking points so she's never facing a blank page. (Pro tip: the built-in mic usually cuts off around five minutes - annoying but survivable.) Drafting support is where AI shines next. From show notes to extra research points to contextualising guest insights, custom GPTs help expand and refine ideas so she can focus on judgement and voice instead of first drafts. Pattern spotting turns episodes into chapters. By feeding multiple conversations into AI and asking for common threads or how ideas map to her core pillars, she can see where book chapters naturally want to live - and build something far more cohesive than her first, fully manual book. AI also helps with energy management. It quietly supports the admin around the sprint: drafting guest emails, summarizing notes, organizing ideas, and helping her see where there's too much on the go so she can re-plan. For organizations, three big lessons emerge: Consistency beats intensity - small, steady steps with AI are better than unsustainable bursts. Experiments can be small and honest - you don't need a centre of excellence to start. A one-hour training or a tiny workflow tweak counts. Capacity is strategy - pretending people have unlimited time and energy guarantees failure. Designing AI work around real capacity gives it a chance to stick. Good AI literacy lowers the cost of entry and raises the quality of thinking. Used well, AI doesn't replace your brain, it gives your best ideas a better chance of making it out of your head and into the world. Episode highlights [00:02] Setting the scene: a 30-episode sprint at the end of 2025 to get the book out of her head. [01:43] Nine episodes in twelve days, fatigue, and choosing to show up anyway. [03:21] Why the sprint mirrors how leaders feel about AI: "We know it matters… but keeping the pace is hard." [05:02] Using AI as a structure-building thinking partner via voice dumps and outlines. [05:30] The five-minute mic limit, word-vomit sessions, and how AI turns fuzz into flows. 
[07:02] Drafting support: research, context around guests, and custom GPTs for show assets. [07:44] Pattern spotting across episodes to find the book's real chapters and through-lines. [09:18] Why this AI-supported book will be "twice, thrice, ten times" better than the first one. [10:24] Energy and project management: emails, reflections, and organising all the moving pieces. [11:46] Lesson 1 – consistency over intensity for teams experimenting with AI. [13:29] Lesson 2 – small, honest experiments beat grand, delayed programs. [13:59] Lesson 3 – capacity as a core part of AI strategy, not a footnote. [15:01] Gentle prompts for listeners: where you're already experimenting, where AI can remove friction, and who your inside champions are. Use this episode as a mirror, not a mandate. Ask yourself and your team: Where are we already experimenting with AI, even in tiny ways? How could AI remove friction from that work instead of adding pressure? Who are our quiet inside champions - and what support or validation could we offer them this week? Answer even one of those honestly, and you're already moving from vague AI interest to real AI literacy. Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
In every organization there's at least one person quietly doing wild things with AI - working faster, thinking bigger, and building their own personal AI stack. In this episode, host Susan Diaz explains how to find those secret power users, support them properly, and turn their experiments into an organizational advantage (without burning them out or making them your unpaid AI help desk). Episode summary This solo episode is a field guide to the people already living in your organization's AI future. Susan starts by painting the familiar picture of the "suspiciously fast" teammate whose work is cleaner, more strategic, and clearly powered by AI - even if no one has formally asked how. She names them for what they are: AI power users who have built quiet personal stacks and are effectively acting as your R&D lab for AI. From there, she walks through: How to spot these people through audits, language, manager input, and usage data. Why most organizations ignore or accidentally exploit them. A practical three-part framework - Recognition, Resourcing, Routing - to turn power users into supported AI champions instead of secret heroes headed for burnout. The equity implications of who gets seen as a "champion" and how to ensure your AI leaders reflect the diversity of your workforce. She closes with a simple champion blueprint and a piece of homework that any founder, leader, or manager can act on this week. Key takeaways You already have AI power users. The question isn't "Do they exist?" It's "Where are they, and what are we doing with them?" Power users are your unofficial R&D lab. They're not theorising about AI. They're testing it inside real workflows, finding what breaks, and figuring out how to prompt effectively in your specific context. They are rarely the most technical people. Your best champions are often people closest to the work - sales, customer-facing roles, operations - who are simply determined to figure it out. Not just IT. If you ignore them, three things happen: They get tired of doing extra work with no support. Their workflows stay trapped in their heads and personal accounts. Your organization misses the chance to scale what's working. Use the 3 Rs to turn power users into champions: Recognition - Name the role (AI Champions/Guides), make their contribution visible, and invite them into strategy and training conversations. Resourcing - Give them real time (10-20% of their week), adjust workload and goals, and reward them properly - ideally with money, training, and access. Routing - Turn personal hacks into shared assets: playbooks, Looms, internal training, and workflows embedded in L&D or ops. Connect - don't overload - your champions. Give them a direct line to IT, security, legal, and leadership so they can sanity-check ideas and inform strategy, without becoming the AI police. Equity matters here. If you only see loud voices and people closest to power, you'll miss quiet experimenters, women, and people of colour who may be building brilliant systems under the radar. Use multiple ways (surveys, nominations, self-identification) to surface a diverse group of champions. Champions must be guides, not gatekeepers. Their role is to make it easier and safer for others to experiment - not to punish or shut people down. A simple champion blueprint: identify → invite → define → resource → amplify. Done well, your champions become the bridge between today's experimentation and tomorrow's AI strategy. 
Episode highlights [00:02] The "suspiciously fast" colleague and what their behaviour is telling you. [02:00] Personal AI stacks and why Divers "swan dive backwards" into AI without waiting for permission. [03:37] The risk of ignoring power users: burnout, trapped knowledge, and missed scaling opportunities. [05:03] Why power users are effectively your AI research and development lab. [06:33] How to surface power users through better audit questions, open-ended prompts, and usage data. [07:25] Listening for phrases like "I built a system for that" and "I just play with this stuff because I'm a geek." [08:25] Using managers and platform data to spot a small cluster of heavy AI users. [09:37] The danger of quietly turning champions into unpaid AI help desks. [10:33] The 3 Rs: Recognition, Resourcing, and Routing. [11:18] What real recognition looks like—naming, invitations to strategy, and public acknowledgement. [12:05] Resourcing: giving champions time, adjusting workloads, and updating job descriptions. [13:14] Routing: creating playbooks, Looms, and embedding workflows into L&D and ops. [14:29] Connecting champions with IT, security, legal, and leadership. [15:45] The equity lens: who gets seen as a champion and who's missing. [17:16] The risk that women and marginalised groups get left behind and automated first. [18:30] Using surveys, nominations, and explicit invitations to diversify your champion group. [19:07] Why champions should be guides, not AI police or gatekeepers. [19:47] The 5-step "champion blueprint": identify, invite, define, resource, amplify. [22:15] Your homework: talk to one secret power user this week and ask how you can make space for their experimentation. Think of one person in your organization who's already that secret AI power user. This week, have a conversation that goes beyond "cool, can you do that for everyone?" and into "This is important. How can we make space for you to keep experimenting like this and help others learn from you?" That's the first step in building your AI champion program - whether or not you call it that yet. Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. 
Most leaders have no clear, single picture of how AI is actually being used inside their organization. In this solo episode, host Susan Diaz walks through a practical, human-first AI audit you can run in weeks (not years) to map tools, workflows, adoption patterns, and risks - so your AI strategy isn't built on vibes and vendor decks. Episode summary This episode tackles a simple but uncomfortable question: "Do you actually know what's happening with AI inside your organisation right now?" Not what vendors say. Not what a slide in a strategy deck says. What people are really doing - with official tools, embedded features, and personal accounts. Susan breaks down a four-part AI literacy audit that gives leaders a coherent baseline: Tools - Which AI-powered tools are already in play, where AI is embedded in existing platforms, and where spend and capabilities overlap. Workflows - Where AI is already changing how work is done, which tasks are automated or accelerated, and which manual processes are obvious candidates for support. Adoption patterns - Who's confident, who's dabbling, who's avoiding AI entirely, and how evenly (or unevenly) AI usage is distributed across teams and levels. Risks and blind spots - Shadow AI, unsanctioned tools, data exposure, governance gaps, and the places where "nothing's gone wrong… yet" is not a strategy. She then walks through a step-by-step approach to running an audit without turning it into a year-long consulting project, and shows how to turn your findings into training, workflow redesign, and a credible AI ROI story. Key takeaways If you skip the audit, you're flying blind. Without a baseline, every AI decision - platforms, pilots, hiring, training - is a shot in the dark based on guesswork and anecdotes. A good AI audit is four-dimensional, not just a tools list. You need to understand tools, workflows, adoption patterns, and risk/gaps together if you want a true picture of AI activity. The hidden costs of "no audit": Duplicate spend on overlapping tools in different departments. Shadow AI and data risk from personal accounts and unsanctioned apps. Wasted efficiency gains because great use cases stay trapped in individual heads and folders. No convincing story of AI ROI for your CFO, board, or leadership. Think of the audit like an MRI, not a court case. The goal is visibility, not blame. If people feel they'll be punished for experimenting, they'll simply stop telling you the truth. You can run a meaningful audit in five practical steps: Listen - Short surveys + focused interviews with department heads, AI champions, and sceptics. Map tools and spend - Inventory official tools, quiet add-ons, free/low-cost apps, and personal subscriptions used for work. Document workflows - Pick priority functions (often marketing, HR, sales, ops) and map how work gets done today, then mark where AI shows up or could. Assess risk and governance - Where does confidential data touch AI? What's policy on paper vs in practice? Where are the biggest gaps? Build an opportunity backlog - Quick wins, experiments, and longer-term projects that emerge from the audit. Your audit output should be short and usable, not a 90-slide graveyard: An executive summary with top risks, opportunities, and 3/6/12-month priorities. A tool + workflow map that shows overlaps, gaps, and shadow usage. A risk and governance section with clear start / stop / continue recommendations. An opportunity backlog that can plug into project management and resourcing. Don't make it an IT-only exercise. 
AI touches how people think and work across functions. The audit should be leadership-backed and cross-functional, not dropped on a single department. The audit is the bridge, not the endpoint. Once you can see what's happening, you can design training, governance, workflow changes, and ROI tracking that match reality instead of hopes. Episode highlights [00:02] "Do you know exactly what's happening with AI inside your organisation right now?" [01:10] Why an AI audit should come before platforms, hires, or big training programmes. [03:18] Reframing audits: from "innovation killers" to foundations for better decisions. [04:00] The four dimensions of an AI literacy audit: tools, workflows, adoption, risk/blind spots. [05:24] Questions to ask about tools: what's in play, where AI is embedded, where teams overlap. [04:52–05:24] Questions to ask about workflows: where AI is changing work, what's automated, what's still painfully manual. [05:24–06:56] Mapping adoption patterns: power users, dabblers, avoiders, and distribution across departments and levels. [06:56] Shadow AI, unsanctioned tools, and governance gaps as audit essentials. [07:23] Why a single coherent picture of AI activity becomes your baseline for everything that follows. [07:54–10:28] Four costs of skipping the audit: duplicate spend, risk, wasted gains, and weak ROI stories. [10:51–13:03] Step 1 + 2: listening through surveys and interviews, then mapping tools and spend without turning it into a witch hunt. [13:03–14:22] Step 3: documenting workflows in priority functions and spotting patterns. [14:22–15:40] Step 4 + 5: assessing risk/governance and surfacing quick wins + deeper opportunities. [16:15–17:47] What a practical audit output looks like (and why it shouldn't die in a folder). [18:16–18:57] Common traps: making it IT-only, punitive, or overcomplicated. [19:56–21:10] Turning audit insights into training, governance, workflow redesign, and credible ROI tracking. If your current AI strategy rests on vendor promises, scattered pilots, and vibes, this episode is your sign to step back. Share it with: The exec who keeps getting asked for AI ROI. The IT or ops lead worried about shadow AI but unsure where to start. The internal AI champion who's been documenting everything in a lonely Notion doc. Then ask as a leadership team: "What would it take for us to have a clear, one-page picture of AI activity across this organisation in the next 60 days?" Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
Should you build custom GPTs, agents, digital interns, Gems, and artefacts… or just learn to prompt better? In this roundtable, Susan, social media + AI power user Andrew Jenkins, and GTM + custom GPT builder Dr. Jim Kanichirayil unpack when you actually need a custom build, when a strong prompt is enough, and how to stop treating AI output like a finished product. In this episode, Susan brings back two favourite guests who sit on different ends of the AI usage spectrum: Andrew Jenkins - multi-tool explorer, author, and agency owner who "puts the chat in ChatGPT" and loves talking with his data. Dr. Jim Kanichirayil - founder of Cascading Leadership, builder of thought leadership custom GPTs for go-to-market, content, and analysis. Together they break down: How Andrew uses conversation, prompt optimizers, projects, and tools like NotebookLM and Dojo AI to "talk to" his book, podcast, and data. How Dr. Jim uses a simple Role-Task-Output framework to design custom GPTs, train them on his voice (and the voices of his clients), and keep them on track with root-cause analysis when they drift. The messy reality of limits, context windows, and why AI is still terrible at telling you what it can't do. Why using AI on autopilot (especially for outreach and content) is a brand risk, and how to use it as a drafting and analysis system instead. Key takeaways You don't have to choose only prompts or only custom GPTs. Strong prompting is the starting point. Custom GPTs make sense when you see the same task, drift, or "bleed out" happening over and over again. Start every workflow with three things: Role, Task, Output. Who is the AI supposed to be? What exact job is it doing? What should the output include and exclude? Then ask the model: "What else do you need to execute this well and in my voice?" Knowledge bases are just your best examples and instructions in one place. Transcripts, scripts, PDFs, posts, style packs, platform-specific examples - they're all training material. AI does best when you feed it gold standard samples, not vibes. Projects and talking to your data are the future of reading and research. Andrew uses his entire book in Markdown as a project, then has conversations like "find me five governance examples" instead of scrolling a PDF. NotebookLM turns bullet points into decks, mind maps, and videos, then lets you interrogate them. AI is a 60-70% draft, not a finished product. If you post straight from the model, it will sound generic, over-written, and slightly robotic. The job is to take that draft and ask: "Does this sound like me? Would I actually say this?" Automation is good. Autopilot is dangerous. Using AI to analyze content performance, structure research, or standardise parts of a workflow = smart. Letting AI write content and outreach you never review = reputation risk and audience fatigue. More content is not the goal. Better feedback loops are. Dr. Jim chains GPTs: one for drafting with his voice, one for performance analysis, one for insights. That loop makes the next round of content sharper instead of just… louder. Episode highlights [00:13] The core question: build digital interns (agents/custom GPTs) or just prompt better? [01:09] Andrew's origin story and why he "puts the chat in ChatGPT." [03:39] How Andrew uses prompt optimizers, multiple models, and Dojo AI as an agentic interface. [07:24] Dr. Jim's world: sticking to GPT, building tightly scoped custom GPTs for repetitive work. [08:37] When "bleed out" in prompts tells you it's time to build a custom GPT. 
[09:26] Using root-cause analysis inside the GPT configuration when outputs go off the rails. [10:25] Projects, books in Markdown, and "talking to your own material" via AI. [13:05] Case study: using AI to surface case examples from a 3.5-year-old book instead of scrolling PDFs. [14:27] NotebookLM for founders and students: one email of bullet points → infographic, map, slide deck, video. [19:03] The Role–Task–Output framework and the importance of explicitly designing for your voice. [22:02] Platform-specific style packs and use cases (spicy vs informational vs editorial). [26:29] The frustrating reality of token limits and why models rarely warn you before they hit a wall. [36:54] What's happening "in the wild": early-stage founders treating AI output as final product. [39:01] Why "more" isn't better, "better" is better: drafts, polish, and content analysis GPTs. [42:03] Automation vs autopilot in B2B social, and why Andrew refuses to buy from a bot. [43:29] Emerging tools: Google's Pomelli, Nano Banana for image creation, and AI browsers like Atlas, Comet, and Neo. If you've been stuck wondering whether to spend time on custom GPTs or just prompt better, this episode gives you the mental models to decide. Share it with: The teammate who keeps saying "we should build a GPT" but hasn't defined the workflow. The founder treating AI drafts as finished copy. The ops brain in your org who secretly wants to be a bridge builder. Then ask as a team: "Where do we actually need great prompts, and where do we need a repeatable GPT or project with a real knowledge base?" Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.
You may have heard host Susan Diaz say she "swan dove backwards off the cliff into AI". In this episode, she unpacks what that actually means, how it became the working title of her book, and the concrete frameworks leaders can use to move boldly into AI without being reckless. This is a personal, behind-the-scenes episode. Susan shares how she went from not being in the famous first 6 million users of ChatGPT… to becoming the person who showed up a week later and refused to leave. She explains why generative AI felt different from every underwhelming AI-ish tool she'd used before. Then she introduces two big ideas that will run through the book and the podcast series: The four cliff archetypes of AI in organizations. The five moves of a swan dive that turn bold experimentation into lasting infrastructure. It's part origin story, part field guide, and part invitation to join the Early Divers instead of waiting for the bridge to magically appear. Key takeaways Generative AI was a pattern-breaker. What hooked Susan wasn't hype. It was the combo of: credible output, ability to handle large volumes of messy information, and being free to use. That trifecta changed the game for everyday operators. "Swan dive backwards" is not recklessness. It's a personality pattern. Quick starts jump with a scan for rocks and a plan to tuck their elbows. The instinct is to move, not freeze, when the path ends. Every organization has four cliff archetypes of AI: Divers - the early experimenters pressing all the buttons. Pathfinders - the risk-mappers and governance folks asking "how do we do this safely?" Operators - the people who turn experiments into actual workflows and pilots. Bridge builders - the systems people who turn one-time wins into playbooks, platforms, and training. You are rarely just one archetype. You're more like a sound mix across all four. That mix determines how you respond when AI shows up as a cliff, not a gentle slope. The five moves of a swan dive give you a pattern: Spot the cliff - recognize this is a step-change, not another incremental tool. Check the water - test, set guardrails, understand risks and boundaries. The dive - move out of analysis into real use on real work. Surface with a map - name patterns, document what's working, share stories. Build the bridge - turn what you learned into infrastructure so others don't have to jump cold. AI is too big to leave to one personality type. Divers alone will splatter. Pathfinders alone will stall. Operators without bridge builders will create one-off wins that never stick. You need all four. This book and series are a public swan dive. Backwards! The 30-episode challenge, the naming of Swan Dive Backwards, and the frameworks are all being built where others can see and eventually walk the bridge. Episode highlights [00:00] "I swan dive backwards off the cliff into AI" - why that line sticks and what it actually means. [01:19] Naming the book Swan Dive Backwards and the meta moment for future readers. [01:47] Why Susan was not in the first 6 million ChatGPT users, and why early AI tools had underwhelmed her. [03:03] The three markers that made generative AI different: credible output, large-volume handling, and being free. [05:27] "Late to the party, then refused to leave" – how personality type shaped her AI journey. [06:28] The cliff analogy: divers, plotters, doers, bridge builders. [09:33] Why Susan is a classic "diver" and how that shows up in entrepreneurship. 
[12:08] The LinkedIn comment from Alison Garwood-Jones that locked in the book title. [14:53] The four cliff archetypes of AI inside companies, in explicit AI terms. [18:38] Move 1: spotting the cliff – realising AI is a calculator/PC-level shift, not a passing tool. [19:44] Move 2: checking the water – personal tests, failures, and organisational governance. [20:45] Move 3: the swan dive – moving from theory to workflow-level experiments. [21:50] Move 4: surfacing with a map – turning experiences into language, frameworks, audits. [23:03] Move 5: building the bridge – connecting experiments into ongoing systems and training. [23:31] Why the real courage is building so others never have to jump cold again. This episode is both an origin story and a mirror. Ask yourself and your team: Which cliff archetype do you lead with: Diver, Pathfinder, Operator, or Bridge Builder? Where are you on the five moves of the swan dive: staring at the cliff… or quietly building the bridge? Share this episode with the biggest "diver" you know and the most trusted "pathfinder" in your organization. They're going to need each other. Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.