Tech Talks Daily

Author: Neil C. Hughes


Description

If every company is now a tech company and digital transformation is a journey rather than a destination, how do you keep up with the relentless pace of technological change?


Every day, Tech Talks Daily brings you insights from the brightest minds in tech, business, and innovation, breaking down complex ideas into clear, actionable takeaways.


Hosted by Neil C. Hughes, Tech Talks Daily explores how emerging technologies such as AI, cybersecurity, cloud computing, fintech, quantum computing, Web3, and more are shaping industries and solving real-world challenges in modern businesses.


Through candid conversations with industry leaders, CEOs, Fortune 500 executives, startup founders, and even the occasional celebrity, Tech Talks Daily uncovers the trends driving digital transformation and the strategies behind successful tech adoption. But this isn't just about buzzwords.


We go beyond the hype to demystify the biggest tech trends and determine their real-world impact. From cybersecurity and blockchain to AI sovereignty, robotics, and post-quantum cryptography, we explore the measurable difference these innovations can make.


Whether improving security, enhancing customer experiences, or driving business growth, we also investigate the ROI of cutting-edge tech projects, asking the tough questions about what works, what doesn't, and how businesses can maximize their investments.


Whether you're a business leader, IT professional, or simply curious about technology's role in our lives, you'll find engaging discussions that challenge perspectives, share diverse viewpoints, and spark new ideas.


New episodes are released daily, 365 days a year, exploring technology and the future of business.
3499 Episodes
How do you build trust in a business environment where security reviews, compliance demands, and vendor risk checks can slow everything down just when companies are trying to move faster? In this episode, I sit down with Adam Markowitz, CEO and co-founder of Drata, to talk about why trust has become one of the most important business conversations in tech. Adam brings a fascinating perspective to the table. Before building Drata, he worked on NASA's space shuttle program, and today he leads a company that has grown rapidly by helping organizations rethink compliance, governance, risk, and assurance through automation and AI. What stood out to me in this conversation was how clearly he framed the real issue. Compliance may have been where many companies started, but trust is the bigger story. In a world shaped by cloud services, third-party vendors, and constant security scrutiny, old point-in-time audits and reactive processes are starting to look painfully outdated. We also talked about Drata's acquisition of SafeBase and what that says about the direction of the market. Adam explained how security and GRC teams have too often been treated as back-office functions, expected to stay quiet and keep the company out of trouble. But he sees things very differently. He argues that these teams can actively help close deals, accelerate revenue, and remove friction from the buying process. That shift matters because trust now plays a direct role in business growth. If customers can quickly get answers to security questions and understand how a company manages risk, sales cycles move faster and security teams stop being bottlenecks at the final stage of a deal. Another part of the conversation that really stayed with me was Adam's view on AI. He sees it as both a tailwind and a test. AI is helping automate highly manual GRC workflows, improve continuous compliance monitoring, and support newer frameworks tied to AI risk itself.
At the same time, he is realistic about the pressure this puts on businesses. AI may introduce fresh concerns, but it also shines a harsher light on issues that have been around for years, things like access creep, weak controls, and data integrity problems. That honesty gave this discussion a lot of weight because it moved beyond hype and focused on what companies actually need to do. We also touched on Drata's momentum as a business, from opening a new San Francisco headquarters to expanding globally and moving further into the enterprise market. But even there, Adam kept coming back to culture, discipline, and a deep understanding of the customer problem. For me, that was the thread running through the whole episode. Trust is not a side issue. It is part of how modern companies grow, compete, and prove they can be relied on. If your business still sees compliance as a checkbox exercise or a cost center, this conversation will give you plenty to think about. Where do you see the relationship between trust, security, and growth heading next, and what did this episode make you question about the way your own organization handles compliance? Share your thoughts with me.
What happens when the most frustrating part of customer service, waiting on hold, repeating yourself, and fighting your way through endless phone menus, finally starts to disappear? In this episode, I sit down with Neil Hammerton, CEO and co-founder of Natterbox, to talk about how AI is reshaping customer experience in ways that feel practical rather than theatrical. We begin with a conversation about the gap between what customers have tolerated for years and what they expect now. Whether it is a bank that still puts you through layers of outdated IVR menus or a service team that answers straight away and solves the issue, those experiences stay with us. Neil makes the case that voice is far from dead. In fact, he believes voice is becoming one of the most exciting places to apply AI, especially when businesses want faster, more human interactions at scale. What I found especially interesting was Neil's view that AI should be treated like a new employee. That means training matters. Tone matters. Context matters. If businesses want AI assistants and agents to succeed, they have to teach them how the organization works, how conversations should sound, and when a human needs to step in. We talk about the difference between using AI for simple triage and using it to complete tasks end to end, from handling password resets to helping callers outside office hours or during spikes in demand. Neil also shares why the smartest path is rarely a giant leap. It is usually a series of smaller, lower-risk steps that build confidence and real results over time. We also get into one of the biggest concerns hanging over every AI conversation right now, whether these tools are replacing people or helping them do better work. Neil's answer is refreshingly balanced. In many cases, AI is taking care of the repetitive jobs that frustrate staff and slow down service, while freeing human agents to handle the conversations where empathy, judgment, and experience still matter most. 
That shift can improve customer experience while also making work more rewarding for the people on the front line. There is also a strong message here for business leaders who are still stuck in pilot mode, testing AI without ever quite moving forward. Neil explains why smart pilots need clear goals, good training data, and realistic expectations. He also shares how Natterbox is using AI internally, including producing board packs in a fraction of the time, while still keeping people involved to check, challenge, and refine the output. This episode is a thoughtful look at where customer experience is heading next, and why the future probably belongs to businesses that know when to let AI lead, when to keep humans in the loop, and how to blend both into something customers actually value. What are your thoughts on the balance between AI efficiency and human connection in customer service, and where do you think businesses are still getting it wrong?
How do you turn trillions of user interactions into meaningful decisions without drowning in data? In this episode of Tech Talks Daily, I sit down with Todd Olson, co-founder and CEO of Pendo, to talk about the future of product-led organizations and why AI is reshaping how software companies grow, build, and compete. Pendo tracks trillions of product usage events to help organizations understand how customers actually interact with their software. That level of data sounds powerful, but it also raises a challenge many teams face today. How do you turn massive data sets into clear signals that teams can act on without falling into analysis paralysis? Todd explains how Pendo approaches this problem by organizing product data around real user journeys, feature adoption, and areas where people drop off. Instead of leaving teams buried in dashboards, the goal is to surface insights that matter. Increasingly, AI is helping by acting as a kind of embedded analyst that highlights the patterns product teams should focus on. Our conversation also revisits the idea behind Todd's book, The Product-Led Organization. When it was published around the time of the pandemic, it argued that great products should do much of the heavy lifting traditionally done by sales or support teams. Looking back now, Todd believes the core idea remains intact. AI simply accelerates the model by allowing companies to experiment faster and scale product-driven experiences with far fewer people. But that shift is also creating tension in the software industry. We talk about the so-called reckoning in SaaS economics and the growing debate around whether AI will make traditional software companies obsolete. Todd offers a more measured perspective. While AI allows anyone to prototype software quickly, the companies that survive will still be the ones solving difficult problems, navigating compliance requirements, and building products that customers trust. 
Another theme we explore is geography and innovation. Pendo is headquartered in Raleigh, North Carolina, far from the usual coastal tech hubs. Todd shares how building outside Silicon Valley has shaped the company's culture, talent strategy, and mindset. There are advantages to being close to the center of the AI boom, but there is also value in building away from the echo chamber. We also spend time unpacking the rise of AI-assisted development and the trend many people call "vibe coding." Todd believes AI will dramatically reshape product teams, but he also pushes back against the idea that humans will disappear from the development process. Engineers will still need to review code, teach AI systems best practices, and ensure security and reliability. One of the most interesting moments in our conversation comes near the end when Todd shares a belief that originality will become one of the most valuable assets in the age of AI. As automated content and automated code become easier to generate, he believes people will increasingly value craft, taste, and original thinking. So in a world where AI can generate almost anything with a prompt, the real question becomes far more human. What problems are actually worth solving? If you care about the future of software, product strategy, and how AI is reshaping the economics of building companies, this is a conversation that offers plenty to think about. And after listening, I would love to hear your perspective. As AI becomes embedded in every product and workflow, do you believe originality and craft will become the true differentiators in the software industry?
Have you ever contacted customer support with a simple request, only to find yourself trapped in a loop of scripted chatbot responses that never actually solve the problem? It's an experience many of us know all too well.  AI has made customer service more conversational over the last few years, yet there is still a gap between answering a question and actually resolving an issue. That gap is exactly where today's conversation begins. In this episode of Tech Talks Daily, I spoke with Mike Szilagyi, SVP and General Manager of Product Management at Genesys Cloud, about a new chapter in AI-powered customer experience. Genesys has announced what it describes as the industry's first agentic virtual agent built on Large Action Models, or LAMs. While Large Language Models have dominated the conversation around AI for the past few years, they have largely focused on generating responses, retrieving knowledge, or answering questions. What they have struggled with is execution. Mike explained how Large Action Models take the next step. Rather than simply telling a customer how to solve a problem, these systems can plan and execute the steps needed to complete a task. Imagine contacting an airline after a sudden flight cancellation.  Instead of navigating multiple menus or repeating information to a human agent, an agentic virtual assistant could understand your situation, check alternative flights, apply airline policies, and complete the rebooking process across several systems. In other words, the AI moves from conversation to action. We also explored how Genesys approached the design of this technology with enterprise governance in mind. From explainable decision paths and audit logs to guardrails that ensure every automated action can be traced and understood, the goal is to make autonomous AI trustworthy inside complex organizations. 
Mike also shared insights into Genesys' partnership with Scaled Cognition and how integrating specialized models helps deliver reliable execution in real-world customer service environments. Perhaps most interesting was our discussion about the human role in this evolving contact center landscape. As automation begins to handle routine and multi-step workflows, human agents are free to focus on situations that require empathy, judgment, and expertise. That shift raises interesting questions about how organizations design customer experiences in the years ahead. So how will customers respond when virtual agents move beyond answering questions and begin resolving problems on their behalf? And once one brand delivers that experience, will it quickly become the expectation?
Useful Links
Connect with Mike Szilagyi
Learn more about Genesys
Genesys Agentic Virtual Agent Powered by LAMs for Enterprise CX
Follow on LinkedIn
How do global companies make confident decisions when supply chains are constantly disrupted by tariffs, geopolitical tension, shifting consumer demand, and unpredictable global events? In this episode of Tech Talks Daily, I sat down with Dr. Ashwin Rao, EVP of AI and R&D at o9 Solutions, to talk about how artificial intelligence is changing the way organizations plan, forecast, and respond to uncertainty. Ashwin brings a fascinating mix of experience to the conversation. After earning a PhD in mathematics and computer science, he spent fifteen years on Wall Street working on derivatives trading strategies at Goldman Sachs and Morgan Stanley before moving into the world of enterprise technology. Today, he operates at the meeting point between business and academia as both a senior AI leader and an adjunct professor at Stanford University. Our conversation begins with Ashwin's unusual career path and how those early experiences in finance shaped the way he thinks about risk, decision making, and real-world AI deployment. The journey from theoretical mathematics to trading floors and eventually into Silicon Valley offers an interesting lens on how analytical thinking can travel across industries and still remain highly relevant. We then move into the work happening at o9 Solutions, where AI is helping organizations make smarter decisions across supply chain planning, demand forecasting, and inventory management. In a world that Ashwin describes using the acronym VUCA (volatility, uncertainty, complexity, and ambiguity), businesses are under pressure to react faster and make better-informed decisions. He explains how enterprise AI platforms can connect fragmented data across departments and create a more complete view of the business. One example he shares brings the concept down to earth.
Even predicting how many bananas a grocery store should stock on any given day requires analyzing internal sales trends alongside external signals such as weather, social media trends, and economic conditions. Machine learning systems can now process those signals in real time and continuously update forecasts so businesses can respond quickly to changes. We also explore the rise of neuro-symbolic AI, a concept Ashwin believes represents the next stage in enterprise decision-making. Rather than relying only on large language models, this approach blends the structured reasoning of symbolic systems with the pattern recognition of neural networks. The result, he suggests, feels less like a chatbot and more like having an expert coach embedded inside the decision-making process. Along the way, we also discuss why many organizations still struggle to embed AI successfully. Technology is only one piece of the puzzle. Ashwin believes the toughest obstacle is organizational change management, bringing teams together, connecting data across silos, and helping leaders guide their organizations through transformation. If you have ever wondered how AI moves beyond chatbots and into the systems that quietly power global supply chains, this conversation offers a thoughtful and practical perspective. So, how prepared is your organization to make decisions in a world defined by volatility and uncertainty, and could AI become the trusted partner that helps guide those choices?
Useful Links
Ashwin's blog
Ashwin's LinkedIn
o9 Solutions Website
o9 LinkedIn
What does it take to design a data center for a world where the technology inside it may change several times before the building even opens? In this episode of Tech Talks Daily, I sit down with Jackson Metcalf, Principal at Gensler, to talk about how AI is forcing a complete rethink of data center design. Jackson has spent nearly two decades working on critical facilities, and in our conversation he explains how the shift from traditional cloud workloads to dense AI environments is changing everything from building form and cooling strategy to long-term infrastructure planning. What struck me most in this conversation is the sheer mismatch in timescales. Data centers can take two and a half to three years to design and build, while chip and GPU roadmaps are evolving in cycles of months. Jackson explains why that means designing for a fixed end state no longer makes sense. Instead, the future may belong to facilities built with flexibility at their core, spaces that can be reconfigured, upgraded, and even conceptually rebuilt over time rather than treated as static assets. We also talk about what hyper-flexibility actually means in practice. This is not just a buzzword. It is about designing buildings with enough structural and engineering headroom to support very different cooling and power models over their lifespan. As AI workloads push cabinet densities to levels that would have sounded impossible only a few years ago, the need for plug-and-play mechanical and electrical infrastructure becomes far more than a design preference. It becomes essential. Another fascinating part of the conversation centers on sustainability. Jackson shares why durable, well-built structures can create long-term environmental value, even in an industry often criticized for its energy demands. We discuss embodied carbon, adaptive reuse, and why a high-quality building may have a much better second life than something built purely for short-term speed. 
That leads into a wider conversation about repositioning underused real estate, from former industrial facilities to vacant office buildings, as potential digital infrastructure. We also get into the growing energy challenge behind AI. With demand for power rising fast, and the US grid under increasing pressure, many operators are now weighing options such as on-site natural gas generation while waiting for cleaner long-term alternatives to mature. Jackson offers a thoughtful perspective on the tension between urgent infrastructure needs and environmental responsibility, as well as the uncertainty surrounding future energy roadmaps. Looking further ahead, I ask Jackson what will define a successful data center campus in the years to come. Will it be raw megawatts, adaptability, carbon intensity, location strategy, or something else entirely? His answer opens up a much bigger conversation about whether these buildings can become more connected to the communities around them, and what role they may play in a future where digital infrastructure is no longer hidden in the background, but central to how society functions. So if AI is pushing data center design to extremes, how do we build facilities that are ready for what comes next without becoming obsolete almost as soon as they open? And what does sustainable, adaptable digital infrastructure really look like in practice?
How close are we to the moment when quantum computing moves from scientific curiosity to real-world infrastructure? In today's episode of Tech Talks Daily, I speak with Christian Weedbrook, Founder and CEO of Xanadu, a company pushing the boundaries of what quantum computers might soon achieve. Xanadu has taken an unconventional route in the race to build practical quantum systems. Instead of relying on electronic approaches used by many others in the field, the company builds quantum computers using photonics, effectively computing with particles of light. Christian explains why this matters and how working with photons could unlock advantages in energy efficiency, scalability, and networking as quantum machines grow into large data center–scale systems. The conversation also arrives at a fascinating moment for the company. Xanadu has announced plans to go public through a SPAC deal that values the company at around $3.1 billion. Christian shares what that milestone means, not only for Xanadu but for the broader quantum ecosystem. According to him, the excitement surrounding quantum computing is no longer limited to research labs. Governments, enterprise partners, and investors are increasingly paying attention as the technology edges closer to commercial relevance. One of the most engaging parts of our conversation is Christian's own journey into the world of quantum physics. Before earning a PhD in photonic quantum computing, he began as a film student who admits he once dreamed of becoming a filmmaker. That winding path eventually led him into physics and entrepreneurship, where he founded Xanadu in 2016 with a mission to make quantum computers useful and accessible to everyone. We also discuss PennyLane, the open-source quantum programming framework developed by Xanadu that has quietly become one of the most widely used tools in the quantum developer community. 
Now taught in universities across more than 30 countries, PennyLane plays an important role in building the next generation of quantum talent. Christian also shares a realistic timeline for where the industry stands today. Quantum computers already exist, but they remain smaller than what is needed for commercial breakthroughs. Xanadu's roadmap points toward large-scale quantum data centers by the end of the decade, systems capable of tackling problems in drug discovery, materials science, logistics, and finance that traditional computers struggle to simulate. For enterprise leaders listening today, the message is clear. The quantum future is closer than many people assume, and organizations that begin exploring use cases now will be far better prepared when these systems mature. So how should businesses prepare for a computing paradigm based on the mathematics of quantum physics rather than traditional software logic? And what lessons can founders learn from a journey that began with filmmaking ambitions and led to building one of the most ambitious quantum companies in the world? Let's find out together.
How can companies invest heavily in AI and still struggle to see meaningful returns? In this episode of Tech Talks Daily, I sit down with Thomas Scott, CEO of Wrike, to unpack a growing tension many organizations are facing right now. Artificial intelligence adoption is accelerating rapidly across the workplace, yet the structures needed to support it are struggling to keep pace. Wrike's latest research into the "Age of Connected Intelligence" reveals that more than 80 percent of employees are already using AI at work. Yet fewer than half have received any formal training, guidance, or governance around how these tools should be used. That gap between enthusiasm and enablement is creating a new workplace phenomenon that many leaders are only just beginning to notice: shadow AI. When employees cannot find approved tools that solve their problems quickly, they often turn to unapproved applications or personal accounts instead. Wrike's data shows that 42 percent of workers admit they have already done this. For organizations handling sensitive data, intellectual property, or regulated information, that trend raises serious questions about security, compliance, and trust. Thomas explains why this pattern is not surprising. Whenever a new technology emerges, the builders and experimenters move first. They explore possibilities, test new tools, and discover productivity gains long before formal policies or training frameworks arrive. The challenge for leadership teams is learning how to harness that momentum without letting experimentation turn into fragmentation. We also explore one of the most overlooked barriers to AI return on investment: integration. Many employees are now juggling multiple AI tools every week, yet those systems rarely communicate with one another or connect deeply into the core business platforms where real work happens.
As a result, context gets lost, workflows become fragmented, and organizations end up running expensive pilots that never scale into meaningful transformation. Thomas introduces the idea of connected intelligence as a possible solution. Instead of deploying AI tools in isolation, companies need systems that understand context across projects, teams, and workflows. When AI can access structured data, shared history, and operational context, it becomes far more capable of supporting real decision making rather than simply generating isolated outputs. Our conversation also explores how leaders can move beyond scattered experimentation and start building structured AI adoption across their organizations.  Thomas argues that the most successful companies start with highly specific problems, empower small groups of motivated builders, and maintain strong executive involvement throughout the process. AI transformation is rarely driven by technology alone. It requires people, process, and leadership alignment working together. So if your organization has already deployed AI tools but still struggles to see real impact, perhaps the question is not whether you are using AI. The real question might be whether those tools are truly connected to the work your teams are trying to do every day.
How should businesses rethink infrastructure when applications, data, and users are increasingly spread across thousands of locations? In this episode of Tech Talks Daily, I sit down with Mark Cree, President and Chief Operating Officer at Scale Computing, to talk about why the future of enterprise infrastructure is moving closer to where data is actually created. This conversation was recorded following the 66th edition of The IT Press Tour, where some of the most interesting conversations in enterprise infrastructure centered on what happens when businesses move away from oversized, monolithic stacks and start focusing on practical, distributed solutions. From retail stores and airports to remote industrial sites, the edge is becoming a critical part of modern IT strategy. Mark shares how Scale Computing has spent years building an edge-first platform designed to run critical workloads reliably across everything from a single location to tens of thousands of distributed sites. Mark also reflects on his own journey through the technology industry, which includes founding companies acquired by Cisco and NetApp, working as a venture capitalist, and leading major storage initiatives at AWS. That experience gives him a unique perspective on how enterprise infrastructure has evolved, particularly as organizations reconsider the balance between centralized cloud environments and local processing closer to users and devices. During our conversation, we explore why edge computing is becoming increasingly important for AI workloads, especially when large volumes of data are generated outside traditional data centers. Mark explains how processing information locally can reduce costs, improve performance, and enable entirely new use cases, from monitoring customer behavior in retail environments to running intelligent systems in remote locations. 
We also talk about the ongoing reassessment happening across enterprise IT teams following major industry shifts, including changes in the virtualization market and growing concerns around vendor lock-in. Mark explains how Scale Computing is positioning itself as a flexible alternative by combining virtualization, containerization, networking, and security into a platform designed specifically for distributed environments. Looking ahead, Mark shares his perspective on where enterprise infrastructure is heading over the next five years. As smaller AI models become more capable and organizations seek greater control over their data and systems, the role of edge platforms may become even more important.  Instead of relying solely on massive centralized environments, companies may find new value in distributing intelligence closer to the places where real-world activity happens. So as organizations rethink how they deploy applications, manage data, and control infrastructure, is the next big shift in enterprise IT happening right at the edge? And how prepared is your organization for that change?
How can organizations use AI to transform hiring while still protecting the human element at the heart of work? In this episode of Tech Talks Daily, I sit down with Mahe Bayireddi, co-founder and CEO of Phenom, to explore how artificial intelligence is reshaping the way companies attract, hire, and develop talent.  Our conversation comes at an interesting moment for the company, following the announcement that Phenom has acquired Be Applied, an AI-driven cognitive assessment platform designed to validate candidate and employee capabilities at scale. The move follows an earlier acquisition of Included, an AI-native people analytics platform focused on delivering deeper workforce insights and faster decision making. Mahe shares how Phenom's long-term mission to help a billion people find the right job is evolving as AI becomes embedded throughout the HR lifecycle. From candidate discovery to onboarding and internal mobility, organizations are now experimenting with automation, personalization, and intelligent workflows that aim to improve both productivity and employee experience. One theme that runs throughout our discussion is how AI adoption in HR varies dramatically depending on geography, regulation, and industry. In Europe, regulatory frameworks are shaping how companies deploy automation. In the United States, state-level policies introduce additional complexity. Meanwhile, organizations across Asia are often approaching AI with entirely different priorities. As a result, many global companies are experimenting carefully, introducing AI into specific business units or regions before rolling it out more broadly. We also talk about a challenge that has caught many HR teams by surprise: the growing issue of fraudulent candidates and identity manipulation in the hiring process. As job applications become easier to submit and remote work expands global talent pools, organizations must rethink how they validate candidate identity and credentials. 
Mahe explains how AI-driven fraud detection tools can help highlight suspicious patterns while still keeping humans in the loop for final decisions. Another important point raised in the conversation is the need to preserve humanity in the workplace while introducing intelligent automation. While AI can dramatically improve efficiency across recruiting and workforce planning, Mahe believes HR leaders must be careful to ensure technology strengthens human potential rather than reducing people to data points in a system. Looking ahead, we discuss how organizations can begin adopting AI responsibly by starting small, focusing on high-impact areas, and building guardrails that reflect regional regulations and company culture. For many companies, the most successful path forward will involve testing AI within specific workflows, measuring outcomes quickly, and scaling what works. So as artificial intelligence becomes a central part of hiring, workforce planning, and employee development, the big question for leaders is this. Can organizations use AI to create faster, smarter talent decisions while still keeping people at the center of the workplace experience?
How does a CISO turn cybersecurity from a technical conversation into a business conversation that boards actually care about? In this episode of Tech Talks Daily, I sit down with Thom Langford, EMEA CTO at Rapid7 and a former CISO, to explore what he calls the second phase of cybersecurity leadership. For years, the industry worked hard to secure a seat at the boardroom table. In many organizations, that mission has largely succeeded. But as Thom explains, gaining access was only the first step. The real challenge now is communicating security in a way that drives meaningful business decisions. Thom shares why many CISOs still approach board conversations in the same way they did a decade ago, even though boardroom awareness of cybersecurity has changed dramatically. Today, many boards include members with cybersecurity knowledge or direct security experience. That means security leaders can no longer rely on technical jargon, complex frameworks, or compliance language to make their case. One of the most interesting insights from our conversation is the disconnect between how CISOs frame risk and what boards are actually focused on. While security teams often lead with risk reduction, boards tend to think in terms of revenue growth and operational costs. Thom argues that security leaders must learn to translate cybersecurity into the language of profit and loss if they want their message to resonate at the executive level. We also explore how traditional security tools such as risk frameworks, audits, and compliance standards can sometimes create distance rather than clarity in board discussions. Instead of helping executives understand security priorities, these models can obscure the real question boards are trying to answer. How secure are we, and what does that mean for the business? Another area we discuss is the growing role of tabletop exercises. 
Thom explains why these simulations are becoming one of the most effective ways for CISOs to demonstrate the real-world impact of security decisions. By walking executives through a realistic incident scenario, leaders can see how security, operations, legal teams, and business priorities intersect during a crisis. Looking ahead, Thom believes the most successful CISOs will increasingly need to think like business leaders rather than purely technical specialists. Communication skills, relationship building, and understanding the organization's financial priorities may prove just as important as deep technical expertise. So if cybersecurity leaders have already earned their place in the boardroom, the next question becomes much more interesting. Are they speaking the language the board actually understands, or are they still trying to solve business problems using only security vocabulary?
What if the next big shift in personal audio is not about blocking the world out, but staying connected to it? In this episode of Tech Talks Daily, I sit down with Nicole from Shokz to talk about why open-ear headphones are suddenly everywhere, and why this category is moving from niche curiosity to everyday essential. For years, the audio market was obsessed with sealing users off from the outside world. Now the conversation is changing. More people want to hear their music, podcasts, and calls without losing awareness of traffic, fellow commuters, colleagues, or the world happening around them. Nicole helps unpack what open-ear audio actually means in simple terms, and why it is resonating with runners, commuters, parents, office workers, and anyone trying to balance comfort, safety, and sound quality. We talk about the cultural shift behind this rise, from growing health and fitness habits to the way hybrid work and always-on lifestyles have changed how people use earbuds throughout the day. We also get into why Shokz has become one of the defining brands in this space. Long before open-ear audio became a trend, Shokz was investing in bone conduction, open-ear design, and the kind of product research needed to make this category work in real life. Nicole shares how years of persistence, technical innovation, and consumer education helped the company move from specialist player to category leader. During our conversation, we explore how real-world behavior shapes product design. That means thinking beyond audio specs and focusing on how headphones actually fit into daily life. Whether someone is running in the rain, commuting to work, wearing glasses, sitting in an office, or trying to stay aware while walking the dog, those everyday moments are shaping the next generation of audio devices. Nicole also talks me through some of Shokz's latest product thinking, including the OpenDots One and the OpenFit Pro. 
From compact clip-on designs that feel almost like wearable accessories to new approaches around noise reduction in open-ear listening, this episode looks at how the category is becoming more sophisticated and more versatile without losing the awareness that made it appealing in the first place. Looking ahead, we discuss whether open-ear audio will live alongside sealed earbuds as part of a two-device lifestyle, or whether it could eventually become the default choice for more people. We also touch on what comes next, from smarter audio experiences to the role AI and even connected glasses could play in the future of listening. So if you have been seeing the phrase open-ear audio more often and wondering what all the fuss is about, this conversation will bring it to life. Are open-ear headphones simply having a moment, or are we watching a bigger shift in how people want to hear the world around them?  
What happens when the real bottleneck in artificial intelligence is no longer training models, but actually running them at scale? In this episode of Tech Talks Daily, I sit down with Satyam Srivastava from d-Matrix to explore a shift that is quietly reshaping the entire AI infrastructure landscape. While much of the early AI race focused on training ever larger models, the next phase of AI adoption is increasingly defined by inference. That is the moment when trained models are deployed and used to generate real-world results millions of times a day. Satyam brings a unique perspective shaped by years of experience in signal processing, machine learning, and hardware architecture, including time spent at NVIDIA and Intel working on graphics, media technologies, and AI systems. Now at d-Matrix, he is helping design next-generation computing architectures focused on one of the biggest challenges facing the AI industry today: efficiently running large language models without overwhelming data centers with unsustainable power and infrastructure demands. During our conversation, we explored why the industry underestimated the infrastructure implications of inference at scale. While training large models grabs headlines, the real operational pressure often comes later when those models must serve millions of queries in real time. That shift places enormous strain on memory bandwidth, energy consumption, and data movement inside modern data centers. Satyam explains how d-Matrix identified this challenge years before generative AI exploded into the mainstream. Instead of focusing on training hardware like many AI startups at the time, the company concentrated on inference efficiency. That decision is becoming increasingly relevant as organizations begin to realize that simply adding more GPUs to data centers is not a sustainable long-term strategy. 
We also discuss the growing power constraints surrounding AI infrastructure, and why efficiency-driven design may be the only realistic path forward. With electricity supply, cooling capacity, and semiconductor availability all becoming limiting factors, the industry is being forced to rethink how AI systems are architected. Custom silicon, purpose-built accelerators, and heterogeneous computing environments are now emerging as key pieces of the puzzle. The conversation also touches on the geopolitical and economic importance of AI semiconductor leadership, and why the relationship between frontier AI labs, infrastructure providers, and chip designers is becoming increasingly strategic. As governments and companies compete to maintain technological leadership, the question of who controls the hardware powering AI may prove just as important as the models themselves. Looking ahead, Satyam shares his perspective on how the role of engineers will evolve as AI infrastructure becomes more specialized and energy-aware. Foundational engineering skills remain essential, but the next generation of engineers will also need to think in terms of entire systems, combining software, hardware, and AI tools to build more efficient computing environments. As AI continues to move from research labs into everyday products and services, are organizations prepared for the infrastructure shift that comes with an inference-driven future? And could efficiency, rather than raw computing power, become the defining metric of the next phase of the AI race?
How confident are you that your business could recover from a cyberattack, cloud outage, or infrastructure failure in minutes rather than hours or even days? In this episode of Tech Talks Daily, I explore the changing nature of enterprise resilience with Joseph D'Angelo and Cassie Stanek from InfoScale, now part of Cloud Software Group. Our conversation looks at why many organizations still rely on backup and replication strategies that were designed for a very different era of IT. In a world of hybrid infrastructure, multi-cloud deployments, and increasingly complex application stacks, those traditional tools often protect the data but fail to restore the business services that depend on it. My guests share how InfoScale approaches resilience from the application layer outward. Instead of focusing on individual components such as storage or infrastructure, the platform looks at the relationships between applications, services, and data so entire systems can be orchestrated and recovered as a coordinated unit. That distinction becomes especially important during a ransomware attack or cloud outage, where restoring a single database rarely brings a digital business back online. We also discuss how growing regulatory pressure is changing the conversation. Enterprises are no longer expected to simply claim they have disaster recovery processes in place. Increasingly they must demonstrate, test, and prove that recovery capabilities actually work. Cassie explains how controlled "fire drill" rehearsals allow organizations to validate recovery plans without disrupting production systems, creating defensible proof that systems can be restored when it matters most. We also look ahead to the next phase of resilience, where environments will increasingly diagnose, adapt, and respond to disruptions in real time. 
Instead of reacting after an outage occurs, operational resilience will rely on predictive analytics, anomaly detection, and automated response capabilities that allow systems to self-correct before users ever notice a problem. Throughout our discussion, one theme becomes clear. IT resilience is no longer just an infrastructure conversation. It has become a business continuity strategy that directly affects revenue, customer trust, and competitive advantage. As organizations depend more heavily on digital services, the ability to recover quickly from disruption is becoming one of the defining capabilities of modern enterprise technology. So after listening, I'm curious about your perspective. Do you think most organizations are truly prepared for operational resilience in a multi-cloud world, or are many still relying on backup strategies that were built for a much simpler IT environment?
Have you ever bought a ticket to a show and wondered why the experience still feels strangely disconnected, with one app for ticketing, another for marketing, another for refunds, and a dozen spreadsheets held together by late nights and good intentions? In this episode of Tech Talks Daily, I'm joined by Ritesh Patel, co-founder of Ticket Fairy, to talk about the technology behind live events and why it has lagged behind other industries in some surprisingly familiar ways. Ritesh makes the case that most organizers are operating more like creative founders than corporate operators, building "mini cities" for a weekend with tiny teams, tight budgets, and very little margin for error. That reality shapes every technology decision, and it explains why fragmented tools and siloed data can become a hidden tax on the business. Ritesh walks me through Ticket Fairy's full stack approach, bringing ticketing, marketing, CRM, logistics, and payments into a single system, and why unifying data changes the economics of running an event. We dig into practical examples that go beyond vague AI talk, including how small workflow fixes can speed up entry, improve the on-site experience, and even translate into real revenue uplift once you multiply time savings across thousands of attendees. We also get into where AI agents and large language models are already finding a foothold in events, particularly around unstructured documents like artist specs, supplier agreements, and operational paperwork that can swallow hundreds of hours. Ritesh shares why "AI-native" should mean more than a writing assistant in a text box, and what it looks like when AI becomes an extension of a lean events team, including a prototype voice agent designed to handle common ticket-holder questions without creating new support bottlenecks. 
If you're interested in the real business mechanics of events, and how SaaS, payments, data, and AI can quietly shape everything from entry lines to repeat attendance, this conversation offers a fresh way to think about an industry that touches all of us, even when we don't think of it as a tech story. And as a bonus, Ritesh leaves a music recommendation that sent me back to an album I had not played in years, Burial's Untrue, with "Archangel" as the track to start with. After listening, tell me this, where do you think unified data and practical AI will make the biggest difference in live experiences over the next couple of years, on the promoter side or the fan side, and why?
Have you ever looked at a global hiring plan and wondered whether you are building a team, or accidentally buying a bundle of hidden fees, legal risk, and avoidable stress? In this episode, I'm joined by Oksana Petrus from Alcor, where she leads customer success and operations, helping tech companies build and scale engineering teams across Eastern Europe and Latin America. If you have ever tried to expand beyond your home market, you know the promise is real, access to great talent, broader coverage across time zones, and the chance to build faster. But the reality can get messy quickly once contracts, compliance, culture, and cost assumptions collide. Oksana brings a sharp perspective because she has seen both sides. Earlier in her career she worked as a lawyer with outsourcing providers, so she understands how pricing structures and contracts can create surprises once a team is already in motion. We talk about why so many leaders start out thinking outsourcing will be simple, then discover they cannot clearly see what they are paying for, who is actually doing the work, or how much of the spend is going to overhead. We also discuss the growing challenge of trust in recruiting, especially as AI tools make it easier to fake profiles, inflate experience, and even perform better in interviews than the person behind the screen can deliver on the job. Oksana shares how teams are responding with stronger verification, background checks, and a more transparent operating model so hiring managers can feel confident about who they are bringing in. We also dig into the real cost of global scaling, and why "salary charts" are only the starting point. Oksana explains how benefits, taxes, local customs like a 13th salary, currency controls, and even language realities can derail budgets and slow hiring if teams do not have local insight. The result is often frustration on both sides, candidates lose momentum, managers lose time, and projects drift. 
Culture comes through as a theme too, and not in a vague, feel good way. We talk about how different regions communicate, how expectations need to be set early, and why "challenge culture" can be a strength when leaders welcome it. Oksana shares an example of a CTO who came to value Eastern European teams precisely because they questioned decisions and offered alternatives that improved outcomes. If you are a founder, CTO, or business leader thinking about scaling an engineering team this year, this episode is a practical look at what tends to go wrong, why it gets expensive, and how to build a smarter path forward without overcommitting too early.  Where do you think the line is between smart global expansion and taking on complexity before your business is ready for it, and what has your own experience taught you?
How can a world that produces more than enough food still leave millions of people struggling to put a healthy meal on the table? In this episode of Tech Talks Daily, I speak with Jordan Schenck, CEO of Flashfood, about the growing paradox at the heart of our global food system. Grocery prices are climbing, families everywhere are making harder choices at the checkout, and food banks are seeing rising demand. Yet at the same time, vast quantities of perfectly edible food never make it onto a plate. Jordan shares the startling scale of the problem. In North America alone, billions of pounds of edible food are thrown away every year, including huge volumes from grocery stores themselves. Fresh produce, meat, and dairy often end up discarded even though they remain safe and nutritious to eat. The result is a system where food waste and food insecurity grow side by side, despite a supply chain that already produces far more calories than the world needs. Flashfood is attempting to change that equation with a simple but powerful idea. Through its marketplace app, the company partners with grocery retailers to sell surplus food at steep discounts before it reaches the landfill. Shoppers gain access to fresh groceries at far lower prices, while retailers recover value from inventory that might otherwise be lost. What emerges is a rare triple win for shoppers, grocers, and the environment. During our conversation, Jordan explains how consumer behavior, retail expectations, and supply chain logistics have shaped today's food waste problem. She also shares how technology and data are beginning to shift the system in a different direction. Flashfood is now working with more than two thousand grocery partners across North America and serving over a million users, using data and AI to help retailers price surplus inventory more effectively and move products before they are discarded. But the story behind Flashfood is also personal. 
Jordan reflects on her earlier experiences at Impossible Foods and as founder of the beverage brand Sunwink, and how those roles helped her see both the strengths and weaknesses inside modern food production. Over time, she began to question whether the industry truly needed more products on shelves, or whether the bigger opportunity lay in fixing the inefficiencies that already existed. Our discussion touches on the psychology of grocery shopping, the economics of surplus inventory, and the cultural expectations that lead retailers to overstock shelves in the first place. We also explore why many consumers are more open to buying discounted food than retailers once believed, particularly as the cost of living continues to rise. Perhaps most encouraging of all is the idea that solving food waste does not require entirely new supply chains or radical lifestyle changes. Sometimes it simply requires connecting the dots between food that already exists and the people who need it most.
Is 2026 the year AI finally has to prove it is worth the investment? In this episode, I'm joined by Chris Riche-Webber, VP of Business Intelligence and Analytics at SmartRecruiters, to explore why so many AI and agentic AI initiatives stall after the pilot phase and what separates the projects that scale from the ones that quietly disappear. With Gartner predicting that more than 40 percent of agentic AI programs could be cancelled by 2027, Chris brings a pragmatic, data-led perspective on what is really happening inside organizations as the hype meets operational reality. We talk about the fundamentals that have not changed despite the new technology. Influence, clearly defined problems, measurable impact, and adoption still determine success, yet they are often overlooked in the rush to deploy the latest tools. Chris explains why "good vibes" are no longer enough in front of a CFO, how to baseline outcomes properly, and why ownership of results is one of the most common missing pieces in enterprise AI programs. A big part of the conversation focuses on what Chris calls the "agent washing" problem. Just as products are sometimes marketed with fashionable labels that do not reflect their real value, many solutions are being positioned as agentic without delivering true autonomy or business outcomes. We discuss how leaders can cut through the noise by asking better questions, aligning technology to specific use cases, and recognizing when simple automation is the right answer. Trust, adoption, and measurable ROI emerge as the three signals that determine whether an AI initiative survives. Chris shares a clear framework for defining these signals in a way that is consistent, comparable over time, and meaningful to the executive team. 
We also explore how connecting talent decisions to revenue, productivity, and retention changes the conversation, especially in the context of SmartRecruiters' broader SAP ecosystem and the opportunity to link people data directly to business performance. This is a conversation about moving from experimentation to accountability, from buying narratives to solving real problems, and from technology-first thinking to outcome-first leadership. So as the window for easy wins closes and the demand for proof of value grows, will your AI strategy be remembered as a pilot that generated excitement or as an initiative that delivered measurable business impact?
What if the real AI race in 2026 isn't about building bigger models, but about where decisions are made, how fast they happen, and whether they deliver measurable value? In this episode, I'm joined by John Bradshaw, Director of Cloud Computing Technology and Strategy at Akamai, to unpack his predictions for the next phase of cloud, AI inference, and the economics that will shape enterprise technology over the next 12 months. As organizations move beyond experimentation, John explains why the boardroom conversation has shifted from capability to return on investment, and how spiraling compute demands are forcing leaders to rethink the balance between performance, cost, and innovation. We explore why this new financial scrutiny is not slowing AI adoption, but refining it. John shares how inefficient GPU workflows, centralized inference, and poorly aligned architectures are being challenged by a more disciplined approach that pushes intelligence closer to the edge. This shift is not only about latency and performance. It is about building scalable, value-driven platforms that can support real-time decision-making, agentic workloads, and global user experiences without breaking traditional IT budgets. Trust is another major theme throughout our conversation. From the rise of everyday AI agents that quietly handle routine tasks to the growing importance of secure, resilient inference pipelines, John outlines how low-latency edge infrastructure, local processing, and hybrid cloud models will redefine reliability for both enterprises and consumers. We also discuss the smart home backlash following recent outages, and why the next generation of connected products will be designed to work even when the network does not. The episode also looks at the future of streaming, where consolidation, intelligent content delivery, and AI-driven personalization are reshaping both the user experience and the economics behind the platforms. 
Behind the scenes, orchestration is emerging as a defining capability, with multiple models and services working together to validate outputs, reduce hallucinations, and create more dependable AI systems. This is a conversation about moving from possibility to production, from experimentation to accountability, and from centralized architectures to distributed intelligence. So as AI becomes embedded in every workflow and every customer interaction, will the winners be the companies with the biggest models, or the ones that know exactly where their AI should live, how it should be orchestrated, and how it proves its value every single day?
What happens when AI moves from a standalone tool to a teammate that works inside the flow of your organization? In this episode, I'm joined by Mick Hodgins, General Manager for EMEA at Notion, to explore how the idea of a connected AI workspace is reshaping the way teams collaborate, make decisions, and measure productivity. With a career that includes more than a decade at Google scaling growth across multiple countries, Mick brings a unique perspective on what it takes to build technology businesses across diverse markets and why this moment in AI feels fundamentally different from previous waves of innovation. We talk about Notion's journey from a flexible, block-based collaboration platform to an AI-native workspace where context is the real differentiator. Mick explains why AI performs better when it understands how work actually happens, and how embedding agents directly into shared workflows allows teams to move from prompting tools to orchestrating outcomes. From automated reporting and knowledge management to self-improving agent loops that learn from their own performance, the conversation brings to life how organizations are already using AI to remove the "work around the work" and focus on higher-value thinking. A major theme throughout the discussion is return on investment. In a world where many companies are still stuck in pilot mode, Mick shares how leaders can reframe ROI around productivity, speed, and the elimination of repetitive tasks rather than treating AI as a single project with a fixed payback period. We also explore how roles, org structures, and hiring priorities are beginning to shift as agents become extensions of team capability rather than experimental add-ons. Because Mick leads the EMEA region, we also dive into the differences in adoption between the US and Europe, from regulatory considerations and cultural attitudes to the growing strength of the European startup ecosystem. 
It's a balanced view that recognizes both the caution and the creativity emerging across the region. This is ultimately a conversation about friction. What happens to an organization when coordination overhead disappears, when reporting builds itself, and when knowledge stays current without human intervention? So as AI agents move from novelty to infrastructure, are businesses ready to redesign how work gets done, and what becomes possible when teams stop managing tasks and start compounding impact?