IT Infrastructure as a Conversation

Author: Neil C. Hughes


Description

What does it really take to power the digital-first world we now live in? IT Infrastructure as a Conversation explores this question with purpose and insight.

As part of the Tech Talks Network, this podcast focuses on the core systems that make digital transformation possible. From cloud and networking to data management, storage, and analytics, we speak with the leaders responsible for building and maintaining the foundations of enterprise technology.

Each episode features thoughtful conversations with public sector innovators, enterprise architects, business technologists, startup founders, and strategic thinkers. We examine how infrastructure decisions influence business outcomes, how to balance reliability with innovation, and why rethinking legacy systems does not have to mean massive cost or disruption.

We also look at the cultural side of infrastructure. What happens when strategy meets operational reality? How do leaders inspire change in complex environments? And where should businesses start if they want to future-proof without overcomplicating?

This is a podcast for those who understand that infrastructure is more than technology. It is the foundation on which everything else depends.

If you're ready to rethink how infrastructure is discussed, delivered, and developed, this is your conversation.

15 Episodes
In this episode, I’m joined by Michael Wu of Phison Electronics, recorded shortly after our meeting on the IT Press Tour in Silicon Valley. Michael takes us inside Phison’s latest breakthrough: the aiDAPTIV+ platform and its integration with StorONE’s ONEai solution. Together, they’re reshaping how enterprises think about AI training, inference, and data sovereignty.

Michael explains how aiDAPTIV+ acts as expansion memory for GPUs, reducing power consumption and cutting hardware costs by up to 10x. We also dig into the partnership with StorONE, which has produced a plug-and-play, storage-based AI solution that makes large language model training accessible to organizations of all sizes, including smaller businesses and universities that traditionally struggle with GPU access.

From the launch of the E28 Gen5 AI-enabled SSD controller to the endurance-driven Pascari X200Z SSDs, Michael shares the technical innovations under the hood and what they mean for performance and reliability. He also looks ahead to future workloads, where models with trillions of parameters will demand smarter, more scalable storage architectures.

If you’re an IT leader weighing the trade-offs between cloud-based AI and secure, on-premises solutions, or you’re simply curious about how storage is becoming central to AI acceleration, this conversation will give you a fresh perspective.

*********

Visit the sponsor of Tech Talks Network: land your first job in tech in six months with the Software QA Engineering Bootcamp from Careerist.
https://crst.co/OGCLA
How can organizations protect their most valuable asset, data, while unlocking its full potential through AI-driven insights? That is the question I explored on the IT Press Tour in Silicon Valley during a face-to-face conversation with Sanjay Poonen, President and CEO of Cohesity.

In this episode, Sanjay shares how Cohesity evolved from reinventing backup and recovery to leading the market in data security and cyber resilience. We discuss the game-changing acquisition of Veritas’s NetBackup business, the company’s growing footprint across healthcare, finance, government, and retail, and why uniting cultures and customers is key to their next chapter.

We dive into how Cohesity is using AI to transform backup data into a source of real-time intelligence, including its patented Retrieval Augmented Generation (RAG) capabilities developed in partnership with NVIDIA. Sanjay also explains how this innovation is enabling businesses to meet strict data sovereignty requirements while benefiting from cloud agility.

From cyberattack recovery and on-premises innovation to working with some of the biggest names in AI and cloud, Sanjay offers a candid look at leadership in a time of rapid growth. If you want to understand how AI and cybersecurity are converging to shape the future of enterprise data, this conversation delivers practical insight, strategic thinking, and a clear vision for what is ahead.
In this episode of IT Infrastructure as a Conversation, I’m joined by Tommi Kannisto, founder of Storadera, a cloud storage company based in Estonia that’s quietly building a smarter, simpler alternative to hyperscalers like AWS.

We dive into how Storadera has engineered its own storage software from scratch to deliver secure, S3-compatible cloud storage with a unique hyper-converged architecture. It’s all about cutting unnecessary hardware, avoiding bottlenecks, and delivering transparent pricing that makes sense to growing businesses.

Tommi explains:
- How Storadera’s hyper-converged design replaces gateways and load balancers with lean software
- Why performance with small files became a key differentiator
- What makes their multi-tenant system attractive to retail partners
- The impact of data sovereignty concerns on customer growth across Europe
- Why Canada is now looking eastward, not southward, for storage partners
- The Estonian tech culture that helped birth 10 unicorns from a country of just 1.3 million

We also talk about the cultural mindset that powers Estonia’s startup scene, from engineers cold-messaging CEOs for advice to a national infrastructure designed for digital innovation. Tommi shares Storadera’s future roadmap, including plans to use AI to optimize disk read and delete operations without raising prices.

If you’re curious about what comes after hyperscale, why storage software still matters, or what makes Estonia such a hotbed for digital infrastructure innovation, this is an episode you’ll want to hear.

🔗 Learn more at storadera.com
📍 Want Storadera in your country? Join their regional waitlist on the website.
What if running databases in Kubernetes could be as simple as spinning up a container, without cloud lock-in or complexity?

In this episode of IT Infrastructure as a Conversation, I’m joined by Tamal Saha, founder of AppsCode, a company rethinking how we manage data on Kubernetes. We met during the IT Press Tour in London, and this conversation dives deep into how AppsCode is tackling one of the most stubborn challenges in enterprise IT: stateful workloads.

From KubeDB to Stash, Voyager, and KubeVault, Tamal walks us through the full story, from his early days at Google and the emergence of Kubernetes to bootstrapping a company through open source tools and evolving it into a full-fledged enterprise platform.

We explore:
- The challenges of running databases in Kubernetes and why traditional VM-based infrastructure falls short
- Why database provisioning, backups, secret management, and ingress need Kubernetes-native solutions
- How AppsCode pivoted from open source tools to a sustainable business model
- Real-world enterprise use cases, including a major European telco’s cloud-native transformation
- The road ahead: vector databases, OpenTelemetry, and AI-driven automation

If you’re a platform engineer, DevOps leader, or just curious about where Kubernetes is headed next, this conversation offers rare insights into building data platforms from the ground up, with a practical, product-led mindset.

So tune in to hear how one founder turned a container-native vision into a global business that’s helping companies modernize data operations without losing control.
In this episode of IT Infrastructure as a Conversation, I sit down with Thomas Bak, CEO of Auwau, fresh from our meeting at the IT Press Tour. We explore how his journey from running a managed service provider to founding Auwau led to the creation of CloudTility, a self-service, multi-tenant portal built for modern data protection and storage needs.

Thomas unpacks how CloudTility helps MSPs and enterprises unify control across vendors like IBM, Rubrik, Cohesity, and more. He explains why customers are leaning toward on-premise deployment to maintain control and meet compliance demands, and how the platform supports complex organizational structures with flexible role-based hierarchies and automated billing.

We also cover how CloudTility empowers IT teams to act more like internal MSPs, reduces reliance on spreadsheets, and enables partners to white-label services with minimal overhead. Whether you’re managing backups across regions or looking to scale your services, Thomas shares practical insights into solving real infrastructure challenges with clarity and control.
On this episode of IT Infrastructure as a Conversation, I explore a fresh approach to one of the oldest headaches in enterprise IT: migrating legacy databases without breaking everything.

My guest is Jacek Migdał, co-founder and CEO of Quesma, a startup tackling the messy reality of old data stacks, rigid licensing, and costly, high-risk migration projects. Jacek shares how Quesma’s database gateway acts as a smart proxy, allowing companies to switch data stacks gradually, test changes safely, and avoid the dreaded “big bang” migration that so often fails.

We unpack how Quesma blends pragmatic engineering with AI-driven automation, from SQL extensions that enrich data inside the database to “smart charts” that generate meaningful visualizations without complex BI tools. Jacek also explains why even modern industries like telecom and travel still wrestle with legacy systems and how a flexible, proxy-based approach keeps critical operations online while modernizing behind the scenes.

If your team is wrestling with outdated data infrastructure but cannot afford downtime, you will want to hear how Quesma turns risky transitions into manageable, incremental improvements.

This is a candid look at the reality behind today’s data stack promises and a reminder that when it comes to enterprise infrastructure, practical steps often beat grand plans.
In this episode of IT Infrastructure as a Conversation, recorded live at the IT Press Tour, I caught up with Raju Datla, CEO of Fabrix.ai, to talk about a shift that could redefine how IT operations are managed. Formerly known as CloudFabrix, the company has evolved with a sharper focus on what it calls agentic AI: technology that works alongside humans to make smart, controlled decisions at scale.

Raju’s story begins at the Indian Institute of Technology and winds through Silicon Valley, where he has founded several ventures grounded in solving real-world tech problems.

Reducing Noise, Increasing Value

One of the standout achievements we discussed is Fabrix.ai’s ability to reduce alert noise by up to 95 percent. In large environments with millions of daily notifications, that kind of reduction changes how teams work. Instead of chasing false alarms, IT professionals can focus on what matters: stability, uptime, and real outcomes for the business.

The platform does this through a layered architecture Raju describes as the three fabrics: data, AI, and automation. Each plays a role in bringing clarity and action to complex infrastructure environments. Data is unified from dozens of sources. AI makes decisions based on context. Automation executes those decisions while keeping humans involved in key steps.

Strategic Moves and Trusted Partners

Fabrix.ai has not gone it alone. Through close relationships with Cisco, IBM, and Splunk, the company has stayed connected to both market demand and enterprise pain points. These partnerships are not just logos on a slide. They are part of how the platform has been built to handle real-world complexity.

And the results are tangible. Whether it is automating resolution, tracking full alert lifecycles, or offering visual storyboards for better decision-making, Fabrix.ai is helping enterprise teams keep up with a pace of change that is not slowing down.

Agentic AI in Practice

The concept of agentic AI comes up often in this conversation, and for good reason. Unlike systems that simply follow rules or surface alerts, this approach blends autonomy with awareness. It does not just generate insights; it acts on them. And it does so in ways that respect the role of human judgment.

Raju explains that this is not about removing people from the loop. It is about giving them systems that can scale, adapt, and support smart decisions. In that sense, Fabrix.ai is not replacing IT teams. It is extending what they can do.

For leaders wrestling with fragmented tools, alert fatigue, and growing complexity, this episode offers a fresh perspective and a reminder that practical, scalable AI is already here.

Raju’s parting advice to entrepreneurs and IT leaders alike? Solve the problems you care about. Passion always carries more weight than a quick exit plan.

Listen in to learn how Fabrix.ai is helping enterprises bring order to operational chaos, one intelligent decision at a time.
Is AI Infrastructure Broken? A Candid Conversation with Volumez

AI adoption is accelerating, but most enterprises are still stuck in the pilot phase. Cloud costs keep climbing, GPUs go underutilized, and data pipelines struggle to keep pace. If AI is the future, why is the infrastructure built to support it so often stuck in the past?

In this episode, recorded live in Silicon Valley during the IT Press Tour, I sit down with John Blumenthal, Chief Product Officer at Volumez, and Diane Gonzalez, Senior Director of Business Development and Product. Together, we unpack what is really holding AI back and explore how Data Infrastructure as a Service (DIaaS) could change the equation.

We explore:
- Why traditional AI infrastructure models are inefficient and unsustainable
- How DIaaS enables just-in-time, automated infrastructure tuned to each workload
- The role of GPU and data scientist efficiency in determining AI ROI
- How Volumez achieved industry-leading results in the MLCommons benchmark
- Why hybrid and multicloud strategies demand a fundamentally different infrastructure approach

John and Diane share firsthand insights from working at the intersection of data, cloud, and AI infrastructure. They argue that achieving meaningful return on AI investment requires more than hardware upgrades or clever provisioning. It means embracing automation, profiling cloud capabilities in real time, and architecting pipelines that adapt to the specific demands of each phase in AI and ML workflows.

Whether you’re building AI platforms, running data science teams, or managing cloud infrastructure, this conversation offers a grounded look at how to make AI actually scalable.

Are you wasting your most valuable resources, or are you ready to run AI workloads at full speed with none of the bloat?
Behind every seamless digital experience is an infrastructure team working hard to keep systems scalable, responsive, and resilient. In this episode of IT Infrastructure as a Conversation, I’m joined by Jesmar Cannaò, COO of ProxySQL, on the IT Press Tour. We explore the story behind one of the most trusted open source tools in database management today.

What began as a side project born from the frustrations of a single DBA has evolved into a critical component for teams managing MySQL and PostgreSQL environments around the world. Jesmar walks us through the origins of ProxySQL and explains how it empowers DBAs by placing intelligent query routing, load balancing, and failover handling directly in their hands.

We discuss the architectural advantages of ProxySQL in both cloud-native and on-premise setups, its ability to operate with minimal friction inside Kubernetes, and why open source remains at the heart of its mission. Jesmar also offers a candid look at what it takes to build a distributed team, maintain performance across time zones, and foster a global community of contributors and users.

As database architectures grow more complex and DBA roles continue to shift, ProxySQL is evolving to meet those changes head-on. With the recent alpha release of its PostgreSQL protocol support and plans to expand further in 2025, Jesmar outlines how the team is staying ahead of industry demand.

Whether you’re a database engineer, a cloud architect, or simply someone trying to future-proof your infrastructure, this conversation is full of practical insight into what open source can offer in a fast-changing world.

Explore more at proxysql.com and join the conversation around high-performance infrastructure that does not compromise on transparency, flexibility, or control.
In a world where business users expect instant insights from tools like Power BI and Tableau, few stop to consider the heavy lifting happening behind the scenes. Dashboards may look simple, but the infrastructure powering them often involves layers of complex data engineering, expensive queries, and frustrating delays. This mismatch is what Nicolas Korchia, Co-founder and CEO of Indexima, calls the BI paradox.

In this episode of IT Infrastructure as a Conversation, Nicolas joins Neil to explore how Indexima is solving this issue by automating the most painful parts of business intelligence. Rather than forcing engineers to build yet another manual data pipeline, Indexima uses AI to identify query patterns and automatically generate dynamic tables directly within Snowflake. The outcome is faster dashboards, lower compute costs, and a smoother experience for analysts and engineers alike.

We unpack how Indexima’s engine monitors live dashboard usage, detects inefficiencies, and rewrites queries on the fly to target optimized aggregation layers. This not only improves performance but also contributes to sustainability by reducing the volume of data scanned and processed. For organizations under pressure to balance data speed with environmental impact, it is a practical and forward-looking approach.

Nicolas also shares real-world use cases from retail and finance, where customers have slashed dashboard load times from minutes to milliseconds and eliminated the need for nightly data extracts. The conversation touches on broader trends too, including the role of large language models in BI workflows and how tools like ChatGPT might soon assist in building semantic layers.

For anyone responsible for scaling data infrastructure, this episode provides a grounded look at how automation is reshaping BI from the ground up. If your teams are still wrestling with slow dashboards and spiraling query costs, this is a conversation worth listening to.

What if the future of analytics was not about working harder, but about letting your infrastructure work smarter?
As AI agents begin to influence how businesses operate, there’s growing urgency around building infrastructure that supports their complexity without adding new risks. In this episode of IT Infrastructure as a Conversation, I speak with Alexander Alten, Co-Founder and CEO of Scalytics, about the architecture powering the next generation of AI and machine learning systems.

Alexander’s journey includes leadership roles at Cloudera, Allianz, and Healthgrades, and a deep commitment to building scalable, privacy-respecting technologies. At Scalytics, he’s helping organizations avoid the limitations of centralizing data by building distributed systems that support federated learning. Rather than extracting and duplicating data across systems, Scalytics enables analysis directly at the source, making it easier for businesses in regulated industries to innovate with confidence.

Recorded live at the IT Press Tour in Malta, our conversation dives into the origins of Scalytics Connect, the company’s AI agent infrastructure that leverages open-source frameworks like Apache Wayang. We explore why ETL pipelines often create fragility instead of flexibility, how decentralization supports both compliance and collaboration, and why open-source technologies continue to outperform closed systems over the long term.

For any CIO, CTO, or data architect looking to align AI capabilities with real-world constraints, Alexander’s perspective offers a refreshingly pragmatic path forward. His framework simplifies the complexity of federated machine learning while preserving data sovereignty, auditability, and future-proof flexibility.

If your organization is struggling with data silos, regulatory friction, or the scaling of AI models, this episode offers insight into a model that avoids duplication, improves trust, and accelerates results by treating infrastructure as the foundation for intelligent systems.
Could sustainable IT hold the answers to rising infrastructure costs and environmental pressure? In this episode, recorded live during the IT Press Tour in Malta, we speak with François Machacek, an IT veteran with nearly thirty years of experience and a strong focus on digital sustainability. Now part of the team at EasyVirt, François is helping organizations measure and reduce the environmental impact of their digital operations.

The conversation begins with François’s career journey, from managing enterprise data centers across Europe to becoming a leader in responsible IT. He introduces EasyVirt’s technology, which includes DCscope, DCnetscope, and CO2scope, and explains how these tools provide real-time, high-frequency resource usage measurements across virtualized and hybrid environments. This level of visibility helps organizations make informed decisions that improve efficiency and reduce carbon emissions without compromising performance or security.

We examine how the environmental costs of digital technology are growing fast, especially with the expansion of AI workloads and data center demand. François discusses how EasyVirt addresses this challenge by offering software that works inside client environments, avoids reliance on average estimates, and instead delivers precise, continuous monitoring. He outlines the financial and environmental benefits clients are seeing in both the short and long term, including reduced compute waste, time savings in planning, and improved reporting against ESG targets.

The episode also highlights how FinOps and GreenOps can work together to guide smarter infrastructure use. François explains how these principles allow IT teams to balance financial planning with environmental goals, resulting in better resource control, improved compliance readiness, and more credible emissions reporting.

Looking ahead, François shares what’s next for EasyVirt, including new tools for measuring AI energy use, upcoming multi-impact assessments (carbon, water, materials), and a free new comparison platform called ECLIO. This tool gives IT teams insights into pricing and emissions data across major cloud providers, supporting better planning for cloud migrations.

If your infrastructure strategy needs to align with both cost efficiency and sustainability expectations, this episode provides a grounded look at how to get there using accurate data and practical solutions.
What does long-term data preservation really require in a digital-first world where technology changes faster than it can be archived? In this episode of IT Infrastructure as a Conversation, recorded during the IT Press Tour in Malta, we explore a fresh perspective with Antoine Simkine, co-founder of DigiFilm Corporation.

Antoine’s background is nothing short of extraordinary. Having produced digital visual effects for iconic films like Amelie, Alien Resurrection, and The Ninth Gate, and serving as VFX producer for 20th Century Fox’s I, Robot, Antoine understands the challenges of preserving digital assets in an industry where formats evolve and decay at a relentless pace.

Bringing this cinematic experience into the world of infrastructure, Antoine shares how the fragility of digital media inspired him to rethink data preservation. We examine his journey from pioneering digital VFX to founding DigiFilm Corporation and developing Archifix, a solution that combines the permanence of film with the precision of digital encoding.

As organizations generate more data than ever before, and as regulatory demands for data integrity and security intensify, this conversation shines a light on why traditional storage methods may not be enough. Antoine explains how DigiFilm’s approach addresses the risks of obsolescence, media degradation, and escalating costs associated with perpetual migration.

Beyond cinema, Antoine reveals how sectors such as defense, nuclear energy, and architecture are beginning to recognize the need for offline, future-proof data storage strategies. Could an idea rooted in the oldest form of recording still hold the answer to our most modern infrastructure challenges?

What steps should enterprises take today if they want their critical digital assets to survive for centuries? And how can organizations balance innovation with the responsibility of long-term stewardship?

This is your conversation.

Learn more at https://digifilm-corp.com/home
What if the future of enterprise storage wasn’t locked behind expensive licenses and proprietary ecosystems?

In this IT Infrastructure as a Conversation episode, recorded during the IT Press Tour in Silicon Valley, we explore how TrueNAS is reshaping expectations in enterprise storage. Brett Davis, Executive Vice President, shares the TrueNAS story, from its roots in Berkeley Unix and FreeBSD to becoming the world’s largest open-source storage platform.

With over 500,000 annual downloads and adoption by more than 60 percent of Fortune 500 companies, TrueNAS is proving that open source and enterprise-grade performance aren’t mutually exclusive. Brett takes us inside the company’s philosophy, its decision to remain bootstrapped, and how it built a sustainable business by serving both community users and enterprise customers.

We dig into what differentiates TrueNAS from established players like NetApp, Dell, HPE, and Pure Storage, and discuss how it addresses growing concerns over vendor lock-in, inflated storage costs, and budget constraints. Brett also highlights how the platform is being used by organizations like NASA, CERN, Skywalker Sound, and the JFK Library, bringing real-world credibility to the open enterprise storage conversation.

As we look ahead, Brett previews TrueNAS’ upcoming software release, codenamed Fangtooth, which will deliver enhanced deduplication, support for RDMA, and enterprise-ready container support. He also unpacks trends such as the steady growth of both cloud and on-prem storage, the squeeze on IT budgets due to AI and VMware licensing hikes, and how businesses are rethinking their infrastructure choices.

If you’re exploring more cost-effective, transparent, and flexible storage solutions or questioning whether your current setup is built to last, this conversation offers timely insight and practical perspective.
What happens when enterprise storage finally catches up with the needs of AI-driven workloads and hybrid cloud architectures? In this premiere episode of IT Infrastructure as a Conversation, recorded during the IT Press Tour in Silicon Valley, I sat down with David Flynn, CEO of Hammerspace, to unpack how his team is rethinking the foundations of enterprise data storage.

As businesses grapple with ever-expanding datasets, scattered infrastructures, and the pressure to enable real-time AI, Hammerspace is stepping in with a distinctive vision. David shares how their Global Data Platform removes long-standing storage bottlenecks by enabling unstructured data to be orchestrated across edge, cloud, and data center environments, without the friction of traditional silos.

We explore how the Parallel Network File System (pNFS) is playing a central role in this transformation. David demystifies why it’s suddenly gaining traction and how it supports the high-performance demands of modern workloads. But more importantly, he explains why orchestration, not just accessibility, is the real differentiator.

The conversation also challenges assumptions about global namespaces. While they’ve become a buzzword in storage circles, David argues that without true orchestration, they fall short. He outlines how Hammerspace combines both to make data truly fluid: instantly accessible where and when it’s needed.

As AI continues to reshape enterprise demands, this episode offers a window into what’s next for storage architecture, data management, and the infrastructure decisions that support them. We also touch on Hammerspace’s rapid rise, tenfold revenue growth, and its growing leadership team, all signs of a company hitting its stride at just the right moment.

Are we entering a new era where storage finally adapts to meet the needs of AI and hybrid cloud? Let me know your take. I’d love to hear what you think.