Tech Talks Daily
Author: Neil C. Hughes
Subscribed: 1,761 · Played: 68,826
© Neil C. Hughes - Tech Talks Daily 2015
Description
If every company is now a tech company and digital transformation is a journey rather than a destination, how do you keep up with the relentless pace of technological change?
Every day, Tech Talks Daily brings you insights from the brightest minds in tech, business, and innovation, breaking down complex ideas into clear, actionable takeaways.
Hosted by Neil C. Hughes, Tech Talks Daily explores how emerging technologies such as AI, cybersecurity, cloud computing, fintech, quantum computing, Web3, and more are shaping industries and solving real-world challenges in modern businesses.
Through candid conversations with industry leaders, CEOs, Fortune 500 executives, startup founders, and even the occasional celebrity, Tech Talks Daily uncovers the trends driving digital transformation and the strategies behind successful tech adoption. But this isn't just about buzzwords.
We go beyond the hype to demystify the biggest tech trends and determine their real-world impact. From cybersecurity and blockchain to AI sovereignty, robotics, and post-quantum cryptography, we explore the measurable difference these innovations can make.
Whether improving security, enhancing customer experiences, or driving business growth, we also investigate the ROI of cutting-edge tech projects, asking the tough questions about what works, what doesn't, and how businesses can maximize their investments.
Whether you're a business leader, IT professional, or simply curious about technology's role in our lives, you'll find engaging discussions that challenge perspectives, share diverse viewpoints, and spark new ideas.
New episodes are released daily, 365 days a year, breaking down complex ideas into clear, actionable takeaways around technology and the future of business.
3455 Episodes
What does it really take to move enterprise AI from impressive demos to decisions that show up in quarterly results? One year into his role as Global Managing Partner at IBM Consulting, Neil Dhar sits at the intersection of strategy, capital allocation, and technology execution. Leading the firm's Americas business and a team of close to 100,000 consultants, he has a front-row view into how large organizations are reassessing their AI investments. From global healthcare leaders like Medtronic to luxury retail brands such as Neiman Marcus, the conversation has shifted. Early proofs of concept helped executives understand what was possible. Now the focus is firmly on proof of value and on whether AI can drive growth, competitiveness, and measurable return. In this episode, I speak with Neil Dhar about what has changed in the boardroom over the past year and why ROI has become the central question. Drawing on more than three decades in finance and private equity, including senior leadership roles at PwC, Neil explains why AI is increasingly being treated as a capital allocation decision rather than a technology experiment. Every dollar invested has to earn its place, whether through productivity gains, operational improvement, or new revenue opportunities. Vanity projects no longer survive scrutiny, especially when boards and investors expect results on a much shorter timeline. We also explore how IBM is applying these same principles internally. Neil shares how the company has identified hundreds of workflows across the business, prioritized those with the strongest economic impact, and used AI and automation to drive large-scale productivity gains. The result is a potential $4.5 billion in annual run rate savings by 2025, with those gains being reinvested into innovation, people, and future growth. It is a candid look at what happens when AI strategy, leadership accountability, and disciplined execution come together inside a global organization. 
If you are a business leader trying to separate real value from hype, or someone wrestling with how to justify AI spend beyond experimentation, this conversation offers a grounded perspective on what enterprise AI looks like when it is treated as a business decision rather than a technology trend. Are you ready to rethink how AI earns its place inside your organization, and what proof of value really means in 2026? Useful Links Connect With Neil Dhar IBM Institute for Business Value, "The Enterprise in 2030" study Learn More About IBM Consulting
How Do Marketplaces Turn AI Ambition Into Scalable, Trusted Enterprise Reality? That is the question I explore in this episode with Julie Teigland, Global Vice Chair for Alliances and Ecosystems at EY, someone who sits right at the intersection of enterprise demand, technology platforms, and the ecosystems that increasingly power modern AI adoption. As organizations race to deploy AI at scale, many are discovering that the real challenge is not a lack of tools, but the complexity of choosing, integrating, governing, and standing behind those decisions with confidence. Julie explains why marketplaces are becoming a powerful mechanism for reducing friction in this process, helping enterprises move beyond experimentation toward AI solutions that are trusted, scalable, and aligned with real business outcomes. We talk about how marketplaces can collapse complexity, curate choice, and bring much needed clarity to leaders who are overwhelmed by the sheer volume of AI options available today. Julie also shares how EY approaches this challenge through its "client zero" mindset, turning the lens inward and treating EY itself as the first marketplace customer. By doing so, EY stress tests governance, security, and integration at real enterprise scale, serving tens of thousands of clients, running hundreds of thousands of servers, and processing hundreds of millions of transactions every day. That internal experience shapes how EY helps clients navigate trust, accountability, and cross-vendor integration risks, particularly as AI becomes more embedded into workflows and decision-making. We also explore how strong alliances with cloud leaders like Microsoft and SAP are shaping how AI solutions are vetted, standardized, and deployed across industries, as well as how regulation, particularly in Europe, is influencing a shift toward responsibility by design. 
This conversation goes beyond technology to focus on orchestration, trust, and outcomes, and why marketplaces are evolving from simple app stores into something far more strategic for enterprise AI. If you are trying to understand how ecosystems, governance, and marketplaces can help turn AI from isolated projects into sustained business value, this episode offers a thoughtful and grounded perspective. I would love to know what resonated with you most. How do you see marketplaces shaping the future of AI adoption inside your organization? Useful Links Connect With Julie Teigland Learn More About EY
As someone who spends a lot of time covering AI announcements, product launches, and conference stages, it is easy to forget that most AI today is still built for desks, screens, and digital workflows. Yet the reality is that the vast majority of the global workforce operates in the physical world, on roads, construction sites, depots, and job sites where mistakes are measured in injuries, collisions, and lives lost. That gap between where AI innovation happens and where real risk exists is exactly why I wanted to sit down with Amish Babu, CTO at Motive. In this episode, I speak with Amish about what it truly means to build AI for the physical economy. We unpack why designing AI for vehicles, fleets, and safety-critical environments is fundamentally different from building AI for emails, documents, or dashboards. Amish explains why latency, trust, and reliability are non-negotiable when AI is embedded directly into vehicles, and why edge AI, multimodal sensing, and on-device compute are essential when milliseconds matter. This is a conversation about AI that has to work perfectly in messy, unpredictable, real-world conditions. We also explore how Motive approaches AI as a full system, combining hardware, software, and models into a single platform built specifically for life on the road. Amish shares how AI can help prevent collisions, support drivers in the moment, and create measurable safety and operational outcomes for fleets operating across transportation, construction, energy, and public sector environments. Along the way, we challenge common misconceptions around AI in vehicles, including the idea that it is about surveillance rather than protection, or that all AI systems are created equal when lives are on the line. 
If you are interested in how AI moves beyond productivity tools and into high-stakes environments where safety, accountability, and trust matter most, this episode offers a grounded and practical perspective from someone building these systems every day. I would love to hear your thoughts on this one. How do you see the role of AI evolving as it moves deeper into the physical world? Useful Links Connect with Amish Babu Learn More About Motive How Motive's AI works: Real-time edge intelligence, humans-in-the-loop, and continuous improvement.
In this episode of Tech Talks Daily, I sat down with Jinsook Han, Chief Agentic AI Officer at Genpact, to unpack one of the most misunderstood shifts in enterprise AI right now. Many organizations feel confident about the value AI can deliver, yet only a small fraction are able to move beyond pilots and into autonomous operations that actually scale. Genpact's Autonomy By Design research puts hard data behind that gap, and Jinsook explains why optimism often races ahead of readiness. We explore why agentic AI changes the rules entirely. When AI systems begin to act, decide, and adapt on behalf of the business, familiar operating models start to strain. Jinsook makes a compelling case that agentic AI cannot be treated like another software rollout. It demands a rethink of data, governance, roles, and even how teams define work itself. The shift from tools to teammates alters expectations for people across the organization, from frontline operators to the C-suite, and exposes just how unprepared many companies still are. Governance is a major theme throughout the conversation, but not in the way most leaders expect. Rather than slowing progress, Jinsook argues that governance must become part of how work happens every day. She shares how Genpact approaches agent certification, maturity, and oversight, using vivid analogies to explain why quality and alignment matter more than simply deploying large numbers of agents. We also dig into why many governance models fail, especially when they rely on committees instead of lived understanding. Upskilling sits at the heart of this transformation. Jinsook walks through how Genpact is training more than 130,000 employees for an agentic future, starting with executives themselves. The focus is not on abstract learning, but on proving that today's work looks different from yesterday's. 
Observability, explainability, and responsible AI are woven into this approach, with command centers designed to monitor both agent performance and health, turning early signals into opportunities rather than panic. This conversation goes well beyond hype. It is about readiness, responsibility, and the reality of building autonomous systems that still depend on human judgment. As organizations rush toward agentic AI, are they truly prepared to change how decisions are made, how people work, and how accountability is defined, or are they still treating AI as a faster hammer rather than a new kind of teammate? Useful Links Connect with Jinsook Han Learn More about Genpact
What happens when leaders are confident about AI, but the people expected to use it are not ready? In this episode of Tech Talks Daily, I sat down with Caroline Grant from Slalom Consulting to explore one of the most persistent tensions in enterprise AI adoption right now. Boards and executives are spending more, moving faster, and expecting returns sooner than ever, yet many organizations are struggling to translate that ambition into outcomes that scale. Caroline brings fresh insight from Slalom's latest research into how leadership, culture, and workforce readiness are shaping what actually happens next. We unpack a clear shift in ownership for AI transformation, with CTOs and CDOs increasingly leading organizational redesign rather than HR. That change reflects how deeply AI now cuts across technology, operations, and business models, but it also introduces new risks. Caroline explains why sidelining people teams can create blind spots around skills, incentives, and trust, especially as roles evolve and uncertainty grows inside the workforce. The result is what Slalom describes as a growing AI disconnect between executive optimism and day-to-day reality. Despite the noise around job losses, the data tells a more nuanced story. Many organizations are creating new AI-related roles at pace, yet almost all are facing skills gaps that threaten progress. We talk about why reskilling at scale is now unavoidable, how unclear career paths fuel employee distrust, and why focusing only on technical capability misses the human side of adoption. Caroline also challenges assumptions about skill priorities, warning that deprioritizing empathy, communication, and change leadership could undermine effective human-AI collaboration. We also dig into ROI expectations, with most UK executives now expecting returns within two years. Caroline shares why that ambition is achievable, where it breaks down, and why so many organizations remain stuck in pilot mode.
From governance and decision rights to culture and leadership behavior, this conversation goes beyond tools and platforms to examine what separates experimentation from fundamental transformation. As AI becomes a test of leadership as much as technology, how are you closing the gap between vision and execution within your organization, and are you building a workforce that can keep pace with change rather than resist it? Connect With Caroline Grant from Slalom Consulting The Great AI Disconnect: Slalom's Insights Survey Learn More About Slalom
Is the browser quietly becoming the most powerful and dangerous interface in modern work? In this episode of Tech Talks Daily, I sat down with Karim Toubba, CEO of LastPass, to unpack a shift that many people feel every day but rarely stop to question. The browser is no longer just a window to the internet. It has become the place where work happens, where SaaS lives, and increasingly, where humans and AI agents meet data, credentials, and decisions. From AI-native browsers to prompt-based navigation and headless agents acting on our behalf, the way we access information is changing fast, and so are the risks. Karim shares why this moment feels different from earlier waves like SaaS adoption or remote work. Today, more than ever, productivity, identity, and security collide inside the browser. Shadow AI is spreading faster than most organizations can track, personal accounts are being used to access powerful AI tools, and sensitive data is being uploaded with little visibility or control. At the same time, attackers have noticed that the browser has become the soft underbelly of the enterprise, with a growing share of malware and breaches originating there. We also explore the rise of agentic AI and what happens when software, not people, starts logging into systems. When an agent books travel, pulls data, or completes workflows on a user's behalf, traditional authentication and access models start to break down. Karim explains why identity, visibility, and control must evolve together, and why secure browser extensions are emerging as a practical foundation for this next phase of computing. The conversation goes deep into what users do not see when AI browsers ask for access to email, calendars, and internal apps, and why convenience often masks long-term exposure. Throughout the discussion, Karim brings a grounded perspective shaped by decades in cybersecurity, from risk-based vulnerability management to enterprise threat intelligence. 
Rather than pushing fear, he focuses on realistic steps organizations and individuals can take, from understanding what data is being shared, to treating security teams as partners, to using tools that bring passwords, passkeys, and authentication into one trusted place as browsing evolves. As AI reshapes how we search, work, and make decisions, the question is no longer whether the browser matters. It is whether we are ready for it to act as the front door to both our productivity and our risk. So, are you securing your browser for the future you are already using today? Connect with Karim Toubba LastPass Threat Intelligence, Mitigation, and Escalation (TIME) team page Phish Bowl Podcast
What really happens when AI helps teams write code faster, but everything else in the delivery process starts to slow down? In this episode of Tech Talks Daily, I'm joined once again by returning guest and friend of the show, Martin Reynolds, Field CTO at Harness. It has been two years since we last spoke, and a lot has changed since then. Martin has relocated from London to North Carolina, gaining back hours of his working week. Still, the bigger shift has been in how AI is reshaping software delivery inside modern enterprises. Our conversation centers on what Martin calls the AI velocity paradox. Development teams are producing more code at speed, often thanks to AI coding agents, yet testing, security, governance, and release processes are struggling to keep up. The result is a growing gap between how fast software is written and how safely it can be delivered. Martin shares research showing how this imbalance is already leading to production incidents, hidden vulnerabilities, and mounting technical debt. We also dig into why this AI-driven transition feels different from previous waves, such as cloud, mobile, or DevOps. Many of the same concerns around security, trust, and control still exist, but this time, everything is happening far faster. Martin explains why AI works best as a human amplifier, strengthening good engineering practices while exposing weak ones sooner than ever before. A significant theme in the episode is visibility. From shadow AI usage to expanding attack surfaces, Martin outlines why security teams are finding it harder to see where AI is being used and how data is flowing through systems. Rather than slowing teams down, he argues that the answer lies in embedding governance directly into delivery pipelines, making security automatic rather than an afterthought. We also explore the rise of agentic AI in testing, quality assurance, and security, where specialized agents act like virtual teammates. 
When well-designed, these agents help developers stay focused while improving reliability and resilience throughout the lifecycle. If you are responsible for engineering, platform, or security teams, this episode offers a grounded look at how to balance speed with responsibility in an AI-native world. As AI becomes part of every stage of software delivery, are your processes designed to safely absorb that change, or are they quietly becoming the bottleneck? Useful Links Learn More About Harness The State of AI in Engineering The State of AI Application Security EngineeringX Follow Harness on LinkedIn Connect With Martin Reynolds Thanks to our sponsors, Alcor, for supporting the show.
What really happens to a business when payments stop working, even for a few minutes? I recorded this episode live at Dynatrace Perform in Las Vegas, inside the Venetian, surrounded by engineers, operators, and business leaders all wrestling with the same uncomfortable reality. Payment outages are no longer rare edge cases. They are becoming a routine operational risk, and the cost is far higher than many organizations realize. To unpack that shift, I sat down with Victoria Ruffo, Software Engineering Team Lead at FreedomPay, for a grounded and practical conversation about resilience, observability, and what failure actually looks like in modern commerce. Victoria explains how FreedomPay supports merchants by orchestrating every part of the payment journey through a single platform, from terminal management to remote updates and even on-device advertising. If you have checked into a hotel and noticed a payment terminal quietly branded "Secured By FreedomPay," there is a good chance you have already interacted with her team's work. That real-world exposure gives her a clear view of what happens when systems fail and why customers are far less patient than businesses often assume. We talk about new research from FreedomPay, Dynatrace, and Retail Economics that puts a stark number on the issue. $44.4 billion in U.S. retail and hospitality revenue is at risk every year due to payment disruptions. But as Victoria points out, the most alarming insight is not the headline figure. It is the gap between how long customers are willing to wait and how long outages actually last. Most consumers abandon a purchase after seven minutes, while many disruptions stretch on for hours. In those early minutes alone, the majority of revenue is already gone. The conversation moves beyond statistics into lived experience. 
From lunch breaks cut short by declined payments to stadiums losing an entire event's worth of revenue in a single outage, Victoria shares why these failures are not abstract technical issues. They directly affect staff wages, customer loyalty, and long-term brand trust. We also explore why cash-only backups and outdated terminals no longer reflect how people actually pay, and why uneven investment in resilience leaves many merchants dangerously exposed. AI plays a central role in the discussion, but not in the way hype cycles often suggest. Victoria is clear that FreedomPay is not using AI to touch cardholder data or write payment code. Instead, tools like Dynatrace Intelligence help teams detect issues faster, identify patterns humans might miss, and move from reaction to anticipation. That shift, she argues, is where real value shows up, especially when seconds and minutes matter. If you care about payments, customer experience, or the hidden connection between technical failure and business impact, this episode offers a timely reminder that outages do not have to be catastrophic if organizations plan for them properly. As consumers grow less patient and systems grow more complex, are your payment platforms designed to absorb disruption, or are they quietly waiting to fail at the worst possible moment? Useful Links Connect With Victoria Ruffo Learn More About FreedomPay Whitepaper: Payment Resilience in an Uncertain World Learn More About Dynatrace Perform Thanks to our sponsors, Alcor, for supporting the show.
In this episode of Tech Talks Daily, I'm joined by Josh Haas, co-founder and co-CEO of Bubble, to unpack why the next phase of software creation is already taking shape. We talk about how the early excitement around AI-powered code generation delivered fast demos and instant gratification, but often fell apart when teams tried to turn those experiments into durable products that could grow with a business. Josh takes us back to Bubble's origins in 2012, long before AI hype cycles and trend-driven development. At the time, the idea was simple but ambitious: give more people the ability to build genuine software without spending months learning traditional programming. That early focus on visual development now feels timely again, especially as builders wrestle with the limits of black-box AI tools that hide logic until something breaks. We spend time on where vibe coding struggles in practice. Josh explains why speed alone is never enough once customers, payments, and sensitive data are involved. As he explains, most product requirements only surface after users arrive, and those edge cases are exactly where opaque AI-generated code can become risky. If you cannot see how your system works, you cannot truly own it, secure it, or fix it when something goes wrong. The conversation also digs into Bubble's hybrid approach, blending AI agents with visual development. Rather than asking builders to trust an AI unquestioningly, Bubble's model emphasizes clarity, auditability, and shared responsibility between humans and machines. Josh explains how visual logic makes software behavior explicit, helping teams understand rules, permissions, and workflows before they cause real-world problems. I learn how this mindset has helped Bubble-powered apps process over $1.1 billion in payments every year, a level of scale that leaves no room for guesswork.
We also explore Bubble AI Agent, where conversational AI meets visual editing, and why transparency and control matter more than flashy demos. From governance and rollback logs to builder accountability, this episode looks at what it actually takes to build software that survives beyond the first launch. If you are building with AI or thinking about how software development is changing, this episode offers a grounded perspective on what comes after the hype fades. As AI tools become more powerful, the real question is whether they help you understand your product better over time, or slowly disconnect you from it. Which path should builders choose right now? Useful Links Connect with Josh Haas Learn More About Bubble Thanks to our sponsors, Alcor, for supporting the show.
How do you turn a developer-first product into a growth engine without losing trust, clarity, or focus along the way? In this episode of Tech Talks Daily, I'm joined by Sanjay Sarathy, VP of Developer Experience and Self Service at Cloudinary, for a grounded and thoughtful conversation about product-led growth when developers sit at the center of the story. Sanjay operates at a rare intersection. He leads Cloudinary's high-volume self-service motion while also caring for the developer community that fuels adoption, advocacy, and long-term loyalty. That dual perspective, part business, part builder, shapes everything we discuss. Our conversation picks up on a theme I have been exploring across recent episodes. When technical work is explained clearly, whether that is security, performance, or reliability, it stops being background noise and starts supporting growth. Sanjay shares how Cloudinary approached this from day one, starting with founders who were developers themselves and carried a deep respect for developer trust into the company's DNA. Documentation that reflects reality, platforms that behave exactly as promised, and support that shows up early rather than as an afterthought all play a part. What stood out to me was how early Cloudinary invested in technical support, even before many traditional growth motions were in place. That decision shaped a self-service experience that still feels human at scale. With thousands of developer sign-ups every day and millions of developers using the platform, Sanjay explains how trust compounds into referrals, word of mouth, and sustained adoption. We also dig into developer advocacy and why community is rarely a single thing. Developers gather around frameworks, tools, workflows, and shared problems, and Cloudinary has learned to meet them where they already are rather than forcing them into a single branded space. 
From React and Next.js users to enterprise advisory boards, feedback loops become part of the product itself. As AI reshapes how software is built and developer tools become more crowded, Sanjay offers a clear-eyed view on what separates companies that grow steadily from those that burn bright and stall. Profitability, experimentation with intent, and the discipline to double down on what works all feature heavily in his thinking. It is a conversation rooted in experience rather than theory. If you care about product-led growth, developer trust, or building platforms that scale without losing their soul, this episode offers plenty to think about. As always, I would love to hear your perspective too. How do you see developer communities shaping the next phase of product growth, and where do you think companies still get it wrong? Useful Links Connect with Sanjay Sarathy Learn more about Cloudinary Thanks to our sponsors, Alcor, for supporting the show.
What happens when the rush toward AI collides with the messy reality of enterprise data that was never designed for it? That is exactly where this episode with Kevin Dattolico from Syntax begins. Before we even hit record, we were swapping stories about music, travel, and a certain farewell concert that set the tone for a conversation that was both grounded and unexpectedly human. But once we got going, the discussion quickly shifted to one of the biggest blind spots I keep hearing about at tech conferences around the world. AI ambition is running far ahead of data readiness. Kevin leads Syntax across the Americas, working with organizations that rely on SAP, Oracle, and complex cloud environments to run their businesses. In our conversation, he shares why many AI initiatives stall or quietly reset the moment they touch real production data. Proofs of concept can look impressive in isolation, but once AI starts interacting with live operational systems, the cracks appear. Inconsistent data, duplicated records, missing context, and governance gaps all surface at once. The result is confusion, unpredictable outputs, and a growing realization that the issue is rarely the model itself. We dig into why ERP data has traditionally been trusted, while unstructured data across emails, documents, sensors, and logs often tells a very different story. Kevin explains where the real friction shows up when companies try to bring those worlds together, and why assumptions about data quality tend to break long before the technology does. It is a refreshingly honest look at what usually goes wrong first, and why leaders are often blindsided even after years of investment. One of the strongest themes in this episode is the shift Kevin sees from AI-first thinking toward a data-first mindset. That does not mean abandoning AI spend. It means rebalancing priorities so those investments actually deliver outcomes the business can stand behind. 
We talk about what consolidation, cleansing, and transformation look like at enterprise scale, especially for organizations carrying decades of technical debt and fragmented systems. The conversation also takes a thoughtful turn around governance, trust, and leadership. Kevin shares how the role of the chief data officer is changing from gatekeeper to enabler, and why modern governance has to support speed without sacrificing accountability. Along the way, he reflects on the risks of pushing ahead with weak data foundations, particularly in regulated industries where the cost of getting it wrong can be operational, reputational, or worse. And then there is the moment that caught me completely off guard. When I asked Kevin to look back on his career and reflect on someone who made a difference, his answer led to one of the most moving stories I have heard in thousands of interviews. It is a reminder that behind every transformation story, there are people who quietly shape the path forward. If you are wrestling with AI expectations, data reality, or simply wondering whether everyone else feels just as overwhelmed by this shift, this episode will resonate. The challenges Kevin describes are far more common than most leaders admit, and the opportunities for those who get the foundations right are real. So as AI continues to dominate boardroom conversations, are you confident your data is ready to support the decisions you are asking it to make, or is it time to pause and rethink what sits underneath it all? Useful Links Connect with Kevin Dattolico Learn more about Syntax Thanks to our sponsors, Alcor, for supporting the show.
Why do today's most powerful AI systems still struggle to explain their decisions, repeat the same mistakes, and undermine trust at the very moment we are asking them to take on more responsibility? In this episode of Tech Talks Daily, I'm joined by Artur d'Avila Garcez, Professor of Computer Science at City St George's University of London, and one of the early pioneers of neurosymbolic AI. Our conversation cuts through the noise around ever-larger language models and focuses on a deeper question many leaders are now grappling with. If scale alone cannot deliver reliability, accountability, or genuine reasoning, what is missing from today's AI systems? Artur explains neurosymbolic AI in clear, practical terms as the integration of neural learning with symbolic reasoning. Deep learning excels at pattern recognition across language, images, and sensor data, but it struggles with planning, causality, and guarantees. Symbolic AI, by contrast, offers logic, rules, and explanations, yet falters when faced with messy, unstructured data. Neurosymbolic AI aims to bring these two worlds together, allowing systems to learn from data while reasoning with knowledge, producing AI that can justify decisions and avoid repeating known errors. We explore why simply adding more parameters and data has failed to solve hallucinations, brittleness, and trust issues. Artur shares how neurosymbolic approaches introduce what he describes as software assurances, ways to reduce the chance of critical errors by design rather than trial and error. From self-driving cars to finance and healthcare, he explains why combining learned behavior with explicit rules mirrors how high-stakes systems already operate in the real world. A major part of our discussion centers on explainability and accountability. Artur introduces the neurosymbolic cycle, sometimes called the NeSy cycle, which translates knowledge into neural networks and extracts knowledge back out again. 
This two-way process opens the door to inspection, validation, and responsibility, shifting AI away from opaque black boxes toward systems that can be questioned, audited, and trusted. We also discuss why scaling neurosymbolic AI looks very different from scaling deep learning, with an emphasis on knowledge reuse, efficiency, and model compression rather than ever-growing compute demands. Looking ahead, from domain-specific deployments already happening today to longer-term questions around energy use, sustainability, and regulation, Artur offers a grounded view on where this field is heading and what signals leaders should watch for as neurosymbolic AI moves from research into real systems. If you care about building AI that is reliable, explainable, and trustworthy, this conversation offers a refreshing and necessary perspective. As the race toward more capable AI continues, are we finally ready to admit that reasoning, not just scale, may decide what comes next, and what kind of AI do we actually want to live with? Useful Links Neurosymbolic AI (NeSy) Association website Artur's personal webpage on the City St George's University of London page Co-authored book titled "Neural-Symbolic Cognitive Reasoning" The article about neurosymbolic AI and the road to AGI The Accountability in AI article Reasoning in Neurosymbolic AI Neurosymbolic Deep Learning Semantics
Why does healthcare keep investing in new technology while so many clinicians feel buried under paperwork and admin work that has nothing to do with patient care? In this episode of Tech Talks Daily, I'm joined by Dr. Rihan Javid, psychiatrist, former attorney, and co-founder and president of Edge. Our conversation cuts straight into an issue that rarely gets the attention it deserves, the quiet toll that administrative overload takes on doctors, care teams, and ultimately patients. Nearly half of physicians now link burnout to paperwork rather than clinical work, and Rihan explains why this problem keeps slipping past leadership discussions, even as budgets for digital tools continue to rise. Drawing on his experience inside hospitals and clinics, Rihan shares how operational design shapes outcomes in ways many healthcare leaders underestimate. We talk about why short-term staffing fixes often create new problems down the line, and how practices that invest in stable, well-trained remote administrative teams see real improvements. That includes faster billing cycles, fewer errors, and more time back for clinicians who want to focus on care rather than forms. What stood out for me was his framing of workforce infrastructure as a performance driver rather than a compliance box to tick. We also dig into how hybrid operations are becoming the default model. Local clinicians working alongside remote admin teams, supported by AI-assisted workflows, are now common across healthcare. Rihan is clear that while automation and AI can remove friction and cost, human oversight still matters deeply in high-compliance environments. Trust, accuracy, and patient confidence depend on knowing where automation fits and where human judgment must stay firmly in place. Another part of the discussion that stuck with me was Rihan's idea that stability is emerging as a better success signal than raw cost savings. 
High turnover may look efficient on paper, but it quietly limits a clinic's ability to grow, retain knowledge, and improve patient outcomes. We unpack why consistent administrative support can influence revenue cycles, satisfaction, and long-term resilience in ways traditional metrics often miss. If you're a healthcare leader, operator, or technologist trying to understand how AI, remote teams, and smarter operations can work together without losing trust or care quality, this conversation offers plenty to reflect on. As healthcare systems rethink how work gets done behind the scenes, what would it look like if stability and clinician well-being were treated as core performance measures rather than afterthoughts, and how might that change the future of care? Useful Links Connect with Dr. Rihan Javid Edge Health Rinova AI Thanks to our sponsors, Alcor, for supporting the show.
Why do small business leaders keep buying more software yet still feel like they are drowning in logins, dashboards, and unfinished work? In this episode of Tech Talks Daily, I sit down with Jesse Lipson, founder and CEO of Levitate, to unpack a frustration I hear from business owners almost daily. After years of being pitched yet another tool, many leaders now spend hours each week troubleshooting software instead of serving customers. Jesse brings a grounded perspective shaped by decades of building SaaS companies, including bootstrapping ShareFile before its acquisition by Citrix, and what stood out to me immediately was how clearly he articulates where the current software model has broken down for small businesses. We talk about why adding more apps has not translated into better outcomes, especially for teams without dedicated specialists in marketing, finance, or sales. Jesse explains how traditional software often solves only part of the problem, leaving owners to become accidental experts in accounting, marketing strategy, or customer communications just to make the tools usable. From there, our conversation shifts toward what he believes will actually matter as AI adoption matures. Rather than chasing full automation or shiny new dashboards, Jesse argues that the real opportunity lies in blending intelligence with human guidance, allowing AI to work quietly behind the scenes while people remain the face of authentic relationships. A big part of our discussion centers on trust and connection in an AI-saturated world. Jesse shares why customers have become incredibly good at spotting automated communication and why relationship-based businesses cannot afford to lose the human element. We explore how AI can act as a second brain, helping business owners remember details, follow up at the right moments, and show up more thoughtfully, without crossing the line into impersonal automation that turns customers away. 
His examples, from marketing emails to customer support, make it clear that technology should support better relationships rather than replace them. We also look ahead to what small businesses should realistically focus on as AI evolves. Jesse offers practical guidance on getting started, from everyday use of conversational AI, to building internal documentation that allows systems to work more effectively, and eventually moving toward agent-based workflows that can take on real operational tasks. Throughout the conversation, he keeps returning to the same idea, that AI works best when it helps people become the kind of business leaders they already want to be, more present, more consistent, and more human. If you are a founder, operator, or small business leader feeling overwhelmed by tools that promise productivity but deliver friction, this episode offers a refreshing reset. As AI becomes more capable and more embedded in daily work, the real question is not how many systems you deploy, but whether they help you build stronger, more genuine relationships, so how are you choosing to use AI to support the human side of your business rather than bury it? Useful Links Connect with Jesse Lipson Connect with Jesse on X Learn more about Levitate
What happens when power, rather than compute, becomes the limiting factor for AI, robotics, and industrial automation? In this episode of Tech Talks Daily, I'm joined by Ramesh Narasimhan from Nyobolt to unpack a challenge that is quietly reshaping modern infrastructure. As AI training and inference workloads grow more dynamic, power demand is no longer predictable or steady. It can spike and drop in milliseconds, creating stress on systems that were never designed for this level of volatility. We talk about why data center operators, automation leaders, and industrial firms are being forced to rethink how energy is delivered, managed, and scaled. Our conversation moves beyond AI headlines and into the less visible constraints holding progress back. Ramesh explains how automation growth, particularly in robotics and autonomous mobile robot fleets, has exposed hidden inefficiencies. Charging downtime, thermal limits, and oversized systems are eroding productivity in warehouses and factories that aim to run around the clock. Instead of expanding physical footprints or adding redundant capacity, many operators are questioning whether the energy layer itself has become outdated. One of the themes that stood out for me is how energy has shifted from a background utility to a board-level concern. Power density, resilience, and cycle life are now discussed with the same urgency as compute performance or sensor accuracy. Ramesh shares why executives across logistics, automotive, advanced manufacturing, and AI infrastructure are starting to see energy strategy as a direct driver of uptime, cost control, and competitive advantage. We also explore the industry-wide push toward high-power, high-uptime operations. As businesses demand systems that can stay online continuously, the pressure is on energy technologies to respond faster, charge quicker, and occupy less space. 
This raises difficult questions about oversizing infrastructure for rare peak loads versus designing smarter systems that can flex in real time without waste. If you are building or operating AI clusters, robotics platforms, or industrial automation at scale, this episode offers a clear-eyed look at why energy systems may be the next major bottleneck and opportunity. As power becomes inseparable from performance, how ready is your organization to treat energy as a strategic asset rather than an afterthought?
What happens when artificial intelligence starts accelerating cyberattacks faster than most organizations can test, fix, and respond? In this episode of Tech Talks Daily, I sat down with Sonali Shah, CEO of Cobalt, to unpack what real-world penetration testing data is revealing about the current state of enterprise security. With more than two decades in cybersecurity and a background that spans finance, engineering, product, and strategy, Sonali brings a grounded, operator-level view of where security teams are keeping up and where they are quietly falling behind. Our conversation centers on what happens when AI moves from an experiment to an attack surface. Sonali explains how threat actors are already using the same AI-enabled tools as defenders to automate reconnaissance, identify vulnerabilities, and speed up exploitation. We discuss why this is no longer theoretical, referencing findings from companies like Anthropic, including examples where models such as Claude have demonstrated both power and unpredictability. The takeaway is sobering but balanced. AI can automate a large share of the work, but human expertise still plays a defining role, both for attackers and defenders. We also dig into Cobalt's latest State of Pentesting data, including why median remediation times for serious vulnerabilities have improved while overall closure rates remain stubbornly low. Sonali breaks down why large enterprises struggle more than smaller organizations, how legacy systems slow progress, and why generative AI applications currently show some of the highest risk with some of the lowest fix rates. As more companies rush to deploy AI agents into production, this gap becomes harder to ignore. One of the strongest themes in this episode is the shift from point-in-time testing to continuous, programmatic risk reduction. 
Sonali explains what effective continuous pentesting looks like in practice, why automation alone creates noise and friction, and how human-led testing helps teams move from assumptions to evidence. We also address a persistent confidence gap, where leaders believe their security posture is strong, even when testing shows otherwise. We close by tackling one of the biggest myths in cybersecurity. Security is never finished. It is a constant process of preparation, testing, learning, and improvement. The organizations that perform best accept this reality and build security into daily operations rather than treating it as a one-off task. So as AI continues to accelerate both innovation and attacks, how confident are you that your security program is keeping pace, and what would continuous testing change inside your organization? I would love to hear your thoughts. Useful Links Connect with Sonali Shah Learn more about Cobalt Check out the Cobalt Learning Center State of Pentesting Report Thanks to our sponsors, Alcor, for supporting the show.
What happens when AI stops talking and starts working, and who really owns the value it creates? In this episode of Tech Talks Daily, I'm joined by Sina Yamani, founder and CEO of Action Model, for a conversation that cuts straight to one of the biggest questions hanging over the future of artificial intelligence. As AI systems learn to see screens, click buttons, and complete tasks the way humans do, power and wealth are concentrating fast. Sina argues that this shift is happening far quicker than most people realize, and that the current ownership model leaves everyday users with little say and even less upside. Sina shares the thinking behind Action Model, a community-owned approach to autonomous AI that challenges the idea that automation must sit in the hands of a few giant firms. We unpack the concept of Large Action Models, AI systems trained to perform real online workflows rather than generate text, and why this next phase of AI demands a very different kind of training data. Instead of scraping the internet in the background, Action Model invites users to contribute actively, rewarding them for helping train systems that can navigate software, dashboards, and tools just as a human worker would. We also explore ActionFi, the platform's outcome-based reward layer, and why Sina believes attention-based incentives have quietly broken trust across Web3. Rather than paying for likes or impressions, ActionFi focuses on verifying real actions across the open web, even when no APIs or integrations exist. That raises obvious questions around security and privacy. This conversation does not shy away from the uncomfortable parts. We talk openly about job displacement, the economic reality facing businesses, and why automation is unlikely to slow down. Sina argues that resisting change is futile, but shaping who benefits from it remains possible. 
He also reflects on lessons from his earlier fintech exit and how movements grow when people feel they are pushing back against an unfair system. By the end of the episode, we look ahead to a future where much of today's computer-based work disappears and ask what success and failure might look like for a community-owned AI model operating at scale. If AI is going to run more of the internet on our behalf, should the people training it have a stake in what it becomes, and would you trust an AI ecosystem owned by its users rather than a handful of billionaires? Useful Links Connect with Sina Yamani on LinkedIn or X Learn more about the Action Model Follow on X Learn more about the Action Model browser extension Check out the whitelabel integration docs Join their Waitlist Join their Discord community Thanks to our sponsors, Alcor, for supporting the show.
What does it really take to remove decades of technical debt without breaking the systems that still keep the business running? In this episode of Tech Talks Daily, I sit down with Pegasystems leaders Dan Kasun, Head of Global Partner Ecosystem, and John Higgins, Chief of Client and Partner Success, to unpack why legacy modernization has reached a breaking point, and why AI is forcing enterprises to rethink how software is designed, sold, and delivered. Our conversation goes beyond surface-level AI promises and gets into the practical reality of transformation, partner economics, and what actually delivers measurable outcomes. We explore how Pega's AI-powered Blueprint is changing the entry point to enterprise-grade workflows, turning what used to be long, expensive discovery phases into fast, collaborative design moments that business and technology teams can engage with together. Dan and John explain why the old "wrap and renew" approach to legacy systems is quietly compounding technical debt, and why reimagining workflows from the ground up is becoming essential for organizations that want to move toward agentic automation with confidence. The discussion also dives into Pega's deep collaboration with Amazon Web Services, including how tools like AWS Transform and Blueprint work together to accelerate modernization at scale. We talk candidly about the evolving role of partners, why the idea of partners as an extension of a sales force is outdated, and how marketplaces are reshaping buying, building, and operating enterprise software. Along the way, we tackle some uncomfortable truths about AI hype, technical debt, and why adding another layer of technology rarely fixes the real problem. This is an episode for anyone grappling with legacy systems, skeptical of quick-fix AI strategies, or rethinking how partner ecosystems need to operate in a world where speed, clarity, and accountability matter more than ever. 
As enterprises move toward multi-vendor, agent-driven environments, are we finally ready to retire legacy thinking along with legacy systems, or are we still finding new ways to delay the inevitable? Useful Links Connect with Dan Kasun Connect with John Higgins Learn more about Pega Blueprint Thanks to our sponsors, Alcor, for supporting the show.
What does it really take to move AI from proof-of-concept to something that delivers value at scale? In this episode of Tech Talks Daily, I'm joined by Simon Pettit, Area Vice President for the UK and Ireland at UiPath, for a grounded conversation about what is actually happening inside enterprises as AI and automation move beyond experimentation. Simon brings a refreshingly practical perspective shaped by an unconventional career path that spans the Royal Navy, nearly two decades at NetApp, and more than seven years at UiPath. We talk about why the UK and Ireland remain a strategic region for global technology adoption, how London continues to play a central role for companies expanding into Europe, and why AI momentum in the region is very real despite the broader economic noise. A big part of our discussion focuses on why so many organizations are stuck in pilot mode. Simon explains how hype, fragmented experimentation, and poor qualification of use cases often slow progress, while successful teams take a very different approach. He shares real examples of automation already delivering measurable outcomes, from long-running public sector programs to newer agent-driven workflows that are now moving into production after clear ROI validation. We also explore where the next wave of challenges is emerging. As agentic AI becomes easier for anyone to create, Simon draws a direct parallel to the early days of cloud computing and VM sprawl. Visibility, orchestration, and cost control are becoming just as important as innovation itself. Without them, organizations risk losing control of workflows, spend, and accountability as agents multiply across the business. Looking ahead, Simon outlines why AI success will depend on ecosystems rather than single platforms. Partnerships, vertical solutions, and the ability to swap technologies as the market evolves will shape how enterprises scale responsibly. 
From automation in software testing to cross-functional demand coming from HR, finance, and operations, this conversation captures where AI is delivering today and where the real work still lies. If you're trying to separate AI momentum from AI noise, this episode offers a clear, experience-led view of what it takes to turn potential into progress. What would need to change inside your organization to move from pilots to production with confidence? Useful Links Learn more about Simon Pettit Connect with UiPath Follow on LinkedIn Thanks to our sponsors, Alcor, for supporting the show.
What happens when speed, scale, and convenience start to erode trust in the images brands rely on to tell their story? In this episode of Tech Talks Daily, I spoke with Dr. Rebecca Swift, Senior Vice President of Creative at Getty Images, about a growing problem hiding in plain sight, the rise of low-quality, generic, AI-generated visuals and the quiet damage they are doing to brand credibility. Rebecca brings a rare perspective to this conversation, leading a global creative team responsible for shaping how visual culture is produced, analyzed, and trusted at scale. We explore the idea of AI "sloppification," a term that captures what happens when generative tools are used because they are cheap, fast, and available, rather than because they serve a clear creative purpose. Rebecca explains how the flood of mass-produced AI imagery is making brands look interchangeable, stripping visuals of meaning, craft, and originality. When everything starts to look the same, audiences stop looking altogether, or worse, stop trusting what they see. A central theme in our discussion is transparency. Research shows that the majority of consumers want to know whether an image has been altered or created using AI, and Rebecca explains why this shift matters. For the first time, audiences are actively judging content based on how it was made, not just how it looks. We talk about why some brands misread this moment, mistaking AI usage for innovation, only to face backlash when consumers feel misled or talked down to. Rebecca also unpacks the legal and ethical risks many companies overlook in the rush to adopt generative tools. From copyright exposure to the use of non-consented training data, she outlines why commercially safe AI matters, especially for enterprises that trade on trust. We discuss how Getty Images approaches AI differently, with consented datasets, creator compensation, and strict controls designed to protect both brands and the creative community. 
The conversation goes beyond risk and into opportunity. Rebecca makes a strong case for why authenticity, real people, and human-made imagery are becoming more valuable, not less, in an AI-saturated world. We explore why video, photography, and behind-the-scenes storytelling are regaining importance, and why audiences are drawn to evidence of craft, effort, and intent. As generative AI becomes impossible to ignore, this episode asks a harder question. Are brands using AI as a thoughtful tool to support creativity, or are they trading long-term trust for short-term convenience, and will audiences continue to forgive that choice? Useful Links Connect with Dr. Rebecca Swift on LinkedIn VisualGPS Creative Trends Follow on Instagram and LinkedIn Thanks to our sponsors, Alcor, for supporting the show.




I enjoy listening to Tech Blog Writer while washing dishes or cooking.
Fascinating concept & episode. Thoroughly enjoyed it, thanks!