The IT/OT Insider Podcast - Pioneers & Pathfinders

Author: David Ariens and Willem van Lammeren
© Willem van Lammeren / David Ariens
Description
How can we really digitalize our Industry? Join us as we navigate through the innovations and challenges shaping the future of manufacturing and critical infrastructure. From insightful interviews with industry leaders to deep dives into transformative technologies, this podcast is your guide to understanding the digital revolution at the heart of the physical world. We talk about IT/OT Convergence and focus on People & Culture, not on the Buzzwords. To support the transformation, we discover which Technologies (AI! Cloud! IIoT!) can enable this transition.
itotinsider.substack.com
38 Episodes
In our earlier articles, we laid the groundwork for Industrial AI — breaking down the difference between classic AI, generative AI, and agentic AI. But frameworks alone don't tell the full story. How do these ideas play out when you're inside a real industrial company, tasked with building teams, getting budget, and making data actually deliver value?

For that perspective, we sat down with Nathalie Rigouts, who until recently headed data and analytics at Borealis and is now Head of Business Applications Data and AI at Umicore. Nathalie brings a refreshing, pragmatic voice — someone who moved from finance into IT, and who knows first-hand the reality of building data capabilities in industry.

From Finance to Data & AI

Nathalie didn't start in IT. Her background is in finance, where every month she wrestled with massive spreadsheets just to get accurate actuals. That pain, she recalls, was the start of her data journey:

"Every month again, I was struggling with getting the correct actuals. And then of course, you have to make your forecast."

From implementing a financial planning tool, to establishing BI at Borealis, to eventually leading data and analytics, her path shows how close the link is between business need and IT capability. And she's clear about the lesson: it's not about technology for its own sake.

"It's not about implementing Microsoft Copilot. You're not going to gain any sustainable advantage there. But if you can have a deep understanding of the processes in your company, and where data-driven solutions can help, that's when you start to create value."

Start Small, Sell the Success

One of the recurring themes in Nathalie's story is pragmatism. At Borealis, the team started in 2016 with literally one data scientist and a laptop. "Python notebooks on a laptop, and we started."

The key, she says, is to find enthusiastic allies and solve problems that matter. And once you do, don't stay modest: market the success internally.

"We often forget to sell our success. I would go everywhere and talk about small things we did. And that's how you gain support for the next steps."

From that first laptop, the team grew, but only because each step came with visible, tangible wins that created pull from the business.

Use Cases That Matter

So what are typical use cases in manufacturing? Nathalie sees three common ones:

* Predictive maintenance: "If equipment fails often, anomaly detection and predictive maintenance are obvious first steps. But it's not an easy nut to crack. Often, you don't have enough failures to feed a model."
* Quality control with computer vision: mainstream, but effective. With enough annotated pictures, good vs bad quality can be classified quickly. The catch? Data quality.
* Logistics optimization: untangling shipping routes and optimizing delivery to customers with AI-based optimization models.

These are concrete, valuable problems — and they also highlight the role of data governance. As she recalls with a smile:

"We had beautifully annotated data — but all in Finnish. That's when you realize governance is not optional."
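Her point that you often "don't have enough failures to feed a model" is exactly why many teams start predictive maintenance with unsupervised anomaly detection trained on normal operation only. Below is a minimal, hedged sketch of that idea using scikit-learn's IsolationForest; the sensor values and parameters are invented for illustration, not taken from the episode.

```python
# Minimal sketch: anomaly detection without failure labels, a common first
# step in predictive maintenance. Assumes numpy and scikit-learn; data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend history: vibration (mm/s) and bearing temperature (degC) during normal runs.
normal = np.column_stack([
    rng.normal(2.0, 0.3, 5000),   # vibration
    rng.normal(65.0, 2.0, 5000),  # temperature
])

# Train on (mostly) healthy data only -- no labeled failures required.
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score fresh readings: predict() returns 1 for normal, -1 for anomaly.
fresh = np.array([[2.1, 66.0], [4.8, 81.0]])
for reading, label in zip(fresh, model.predict(fresh)):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"vibration={reading[0]:.1f} mm/s, temp={reading[1]:.1f} C -> {status}")
```

The second reading, well outside the trained envelope, is the kind of early signal that gets escalated to maintenance before it becomes a logged failure.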
GenAI: Efficiency or Attractiveness?

When it comes to Generative AI, Nathalie is cautious. The business case is not always straightforward:

"I tried to make the case for Microsoft Copilot. At €30 per user, that's not small. Does it reduce workforce? No. At best, people spend more time on value-added activities. But what does that bring to the bottom line? Hard to say."

Yet she also sees why companies can't ignore it.

"Companies have to invest in it because it will determine their attractiveness as an employer. New graduates take these tools for granted. If you don't offer them, you won't attract talent."

She distinguishes between two levels: workplace efficiency (nice, but hard to quantify) and domain-specific models trained on your own IP. The latter, she believes, is where the real value lies: in pharma, for example, LLMs trained on internal knowledge can speed up R&D. "That's when AI becomes a true digital co-worker."

Governance, Change, and Legislation

On governance, Nathalie doesn't mince words:

"It's always the people, the processes, and the tools. The main component around which all of them center is the value case."

Her advice: don't let your solutions depend on a single enthusiast, and don't leave an escape hatch back to the old way of working. Change management is part of the job. And on legislation, she takes a positive view:

"It's an opportunity. It forces us to think about awareness, ethics, governance, documentation, monitoring. All things that make sense. Yes, it's work, but it helps you get budget and build maturity."

Closing Thoughts

What we loved about Nathalie's perspective is how grounded it is. No buzzwords, no silver bullets: just the reality of building teams, solving problems, and learning along the way, whether it's predictive maintenance, quality monitoring, or navigating the GenAI hype.

Her closing reminder:

"Keep it simple, be pragmatic. We built beautiful solutions with just scripting business rules. The business was happy, and nobody needed a fancy machine learning model."

Stay Tuned for More!

🚀 Join the ITOT.Academy →

Subscribe to our podcast and blog to stay updated on the latest trends in Industrial Data, AI, and IT/OT convergence. 🚀 See you in the next episode!

YouTube: https://www.youtube.com/@TheITOTInsider · Apple Podcasts · Spotify Podcasts

Disclaimer: The views and opinions expressed in this interview are those of the interviewee and do not necessarily reflect the official policy or position of The IT/OT Insider. This content is provided for informational purposes only and should not be seen as an endorsement by The IT/OT Insider of any products, services, or strategies discussed. We encourage our readers and listeners to consider the information presented and make their own informed decisions.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit itotinsider.substack.com
When we talk about industrial connectivity, two names always come up: OPC UA and MQTT. They're often mentioned in the same breath, as if they're competitors. But as Kudzai Manditereza reminded us in our conversation, that's a bit of a misconception. These protocols solve different problems, and understanding their history helps explain why they're both so important today.

OPC UA: From Printer Drivers to Industrial Standards

The story of OPC goes back to the 1990s. At the time, every automation vendor shipped their own drivers, making integration a nightmare. The OPC Foundation stepped in to create a standardized interface — inspired, of all things, by Microsoft's printer driver model. Just as Windows could talk to any printer through a standard interface, OPC offered a way for SCADA systems and historians to talk to PLCs without custom drivers.

The first generation, known as OPC Classic (DA/HDA), was Windows-only and limited in scope. It solved the immediate problem but couldn't handle the growing complexity of industrial data. Enter OPC UA (Unified Architecture): cross-platform, internet-capable, and built with powerful information modeling.

This is where OPC UA really shines. As Kudzai put it:

"The shop floor is full of objects — pumps, compressors, machines. OPC UA lets you model those objects, not just pass around raw tags."

That means a machine builder can ship a unit with a pre-built OPC UA information model, ready for plug-and-play integration. The OPC Foundation even created companion specifications for different industries, so a compressor in Germany "speaks" the same OPC language as a compressor in the US. No more reinventing interfaces for every project.

MQTT: Born in the Oil Fields, Adopted by the Internet

If OPC UA came from printer drivers, MQTT came from oil pipelines (well… actually from the even older pub-sub newsgroups, back when the internet was still something really special). In 1999, IBM engineers developed MQTT to monitor pipelines over unreliable, low-bandwidth satellite links. The key innovation was the publish/subscribe model: instead of clients constantly polling servers for updates, devices publish data to a central broker, and anyone interested can subscribe.

This lightweight, bandwidth-efficient design made MQTT perfect for remote monitoring. But it didn't stay confined to industry. In fact, one of its biggest early adopters was Facebook, which used MQTT in its Messenger platform. By the 2010s, MQTT had made its way back to industry, now riding the wave of IIoT and event-driven architectures.

As Kudzai explained:

"MQTT doesn't tell you how to model your data. It's a transport protocol. But its hierarchical topic structure maps naturally to concepts like the Unified Namespace (UNS)."

Think of it like a Google Drive folder structure: data is organized into topics, and anyone can subscribe to the parts they care about.
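To make that topic-hierarchy idea concrete, here is a hedged sketch of MQTT publish/subscribe using UNS-style topics. It assumes the Python paho-mqtt client (version 2.x) and a broker reachable on localhost; the topic path and payload are invented for illustration.

```python
# Sketch: publish/subscribe over MQTT using a UNS-style topic hierarchy.
# Assumes paho-mqtt >= 2.0 and a local broker (e.g. Mosquitto or HiveMQ).
import json
import time
import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

BROKER = "localhost"
# enterprise/site/area/line/cell/metric -- an illustrative UNS-style path
TOPIC = "acme/antwerp/packaging/line4/filler/state"

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

# Subscriber: anyone interested in the packaging area just subscribes to it.
sub = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
sub.on_message = on_message
sub.connect(BROKER, 1883)
sub.subscribe("acme/antwerp/packaging/#")
sub.loop_start()

# Publisher: a device reports its state as an event; nobody has to poll it.
publish.single(TOPIC, json.dumps({"running": True, "speed_bpm": 412}),
               qos=1, hostname=BROKER)

time.sleep(1)   # give the broker a moment to deliver
sub.loop_stop()
```

Note how the subscriber never names the publishing device: it only names the part of the hierarchy it cares about, which is exactly the decoupling the UNS idea builds on.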
OPC UA vs MQTT: Different Tools, Different Jobs

So should you pick OPC UA or MQTT? The answer is: both, but for different layers.

* OPC UA excels close to the machines (Levels 0–2 in the Purdue Model). It provides a rich, standardized way to model and expose machine data. Perfect for SCADA, DCS, and local control.
* MQTT shines at higher levels (L3/DMZ and above). It's ideal for integrating thousands of devices into enterprise systems, feeding data lakes, or enabling event-driven architectures. And of course also for IIoT devices spread around the world!

As Kudzai put it:

"You'll never control a pump with MQTT. But if you want to share events across your enterprise (machine status, recipes, quality data, …), MQTT is a great fit."

And that's an important distinction. OPC UA is about structured access to objects. MQTT is about lightweight distribution of events. They don't replace each other — they complement each other.

Closing Thoughts

Industrial connectivity isn't about choosing one protocol over the other. It's about using the right tool for the job. OPC UA and MQTT are part of the same toolbox — and when used together, they unlock scalable, reusable, event-driven architectures that finally let IT and OT speak the same language.

As Kudzai summed it up:

"The ability to reuse data is a huge factor. Once you stop thinking point-to-point and start thinking platform, that's when scale happens."

… And we couldn't agree more! Also, take a look at what HiveMQ has to offer.
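What does "used together" look like in practice? A common pattern is a small edge bridge that reads structured values from a machine's OPC UA server and republishes them as lightweight MQTT events. The hedged sketch below assumes the community python-opcua and paho-mqtt packages; the endpoint, node id, and topic are invented for illustration.

```python
# Hedged sketch of an OPC UA -> MQTT edge bridge (all names illustrative).
import json
import time
import paho.mqtt.client as mqtt
from opcua import Client  # community python-opcua package

opc = Client("opc.tcp://192.168.1.10:4840")     # machine-level OPC UA server
opc.connect()
flow_node = opc.get_node("ns=2;s=Pump42.Flow")  # node id from the server's model

broker = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
broker.connect("localhost", 1883)
broker.loop_start()

try:
    for _ in range(10):                         # a few cycles for the sketch
        flow = flow_node.get_value()            # structured access near the machine
        broker.publish(                         # lightweight event distribution
            "acme/site1/utilities/pump42/flow",
            json.dumps({"value": flow, "ts": time.time()}),
            qos=1,
        )
        time.sleep(5)
finally:
    opc.disconnect()
    broker.loop_stop()
    broker.disconnect()
```

The division of labor mirrors the episode's point: OPC UA does the structured, machine-side reading; MQTT does the enterprise-wide event distribution.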
When Zev Arnold joined us on the podcast, he brought with him the kind of energy and clarity you rarely get from someone working at the intersection of industry, data, and transformation. As a principal director at Accenture's Industry X, Zev has spent years working with oil & gas, utilities, mining, and life sciences companies—not just helping them digitize, but helping them make that digitization mean something operationally. "I help engineers and operators use data to improve the way they work," he said, and that theme stayed with us throughout the conversation.

Context is King (But Only if the Right Person Owns It)

One of the most powerful insights from Zev is his perspective on data contextualization. He tells the story of a compressor engineer who wanted to track starts instead of doing maintenance on a fixed schedule. "Some compressors had start-stop tags, some had rotational speed. The structure of the data needed to support the engineer's thinking, not the other way around." That's when Zev realized: contextualization only works when it's driven by the user, not imposed by someone else. "Give that hierarchy to the compressor engineer and say, this is yours. Own it."

In Zev's model, self-service is the enabler. If engineers and operators can build their own analytics without writing Python or waiting for a dev team, that's when transformation becomes real.

Platforms that Empower, Not Obstruct

Zev is quick to point out where industrial transformation often stumbles: platforms that weren't built to scale use cases easily. "We had a platform that worked great for one use case. But every new use case required us to rebuild everything again."

"You want to catch that $50K event before it becomes an environmental incident. The person who understands the problem best is the engineer. We need to give them the tools to act."

The bigger picture? Zev sees a future where operators train and maintain AI systems—even simple expert systems that alert you when a tank overflows. That's where AI becomes more than a buzzword and actually enters the DNA of industrial work.

People, AI, and the Future of Work

Zev introduces a compelling framing: people-to-people, people-to-AI, and AI-to-AI. That's the triangle of future industrial collaboration, a model he borrowed from Paul Daugherty's book Human + Machine.

In this framing, AI isn't replacing people. Instead, AI becomes part of their toolbox. "Even simple AI—like monitoring sump tank levels—needs someone to train and maintain it," he says. That job doesn't belong in a remote digital transformation office. It belongs on the floor, with the engineer who knows the equipment and the impact.
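Those "simple expert systems" are often nothing more than rules that operators can own and tune themselves. As a hedged, toy illustration of the sump-tank example, here is a tiny rule evaluator; the tag name, limits, and alert path are all invented.

```python
# Toy "simple expert system" for the tank-overflow example: a rule an
# operator could own, tune, and maintain. All names and limits illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LevelRule:
    tag: str
    high_limit_pct: float          # absolute level alarm
    max_rise_pct_per_min: float    # rate-of-change alarm

def evaluate(rule: LevelRule, level_pct: float, rise_pct_per_min: float) -> Optional[str]:
    if level_pct >= rule.high_limit_pct:
        return f"{rule.tag}: HIGH level {level_pct:.1f}% (limit {rule.high_limit_pct}%)"
    if rise_pct_per_min >= rule.max_rise_pct_per_min:
        return f"{rule.tag}: level rising fast ({rise_pct_per_min:.1f} %/min)"
    return None

rule = LevelRule(tag="SUMP-TK-101.Level", high_limit_pct=85.0, max_rise_pct_per_min=5.0)
alert = evaluate(rule, level_pct=78.0, rise_pct_per_min=6.2)
if alert:
    print("ALERT:", alert)   # in practice: notify the on-shift engineer
```

Nothing here needs a data science team; "training and maintaining" this kind of AI means an engineer adjusting two numbers they understand.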
We just need to catch the real value

Is all this really worth it? Zev answers emphatically: yes. And he points to a hard number: EFORd, the forced outage rate in U.S. power generation. The metric assesses the reliability of thermal generating units: the probability that a unit is unavailable due to forced outages or deratings when there is demand for power; in short, how often a generator cannot produce when it's needed. The EFORd rate in the US averages 7.5% (a theoretical value of 0 would mean no unplanned outages at all). If we could close that gap with better decisions, the industry could unlock over $100 billion in value.

"And that's just one industry," Zev adds. "The ripple effects could be societal: better data centers, climate impact, lower energy bills, even job growth."

Final Thoughts

From the subtle distinctions between manufacturing types to the very real, tangible impact of good data and AI done right, we touched it all in this podcast. Whether it's process or discrete, the message is clear: stop treating transformation like a side project. Get the right tools into the right hands—and let people do what they do best.

As Zev put it, "On my worst days, I wonder, is this data really valuable? But 15 years in, I know it is. We just have to use it right."
📣 Have you already considered joining our ITOT.Academy? We tell stories. We focus on concepts, not tools. On frameworks, not features. 👉 Check out our podcast to learn more!

If you're still thinking of OT cybersecurity as "just" another IT checklist item, it's time to rethink the whole game. In this episode of the IT/OT Insider podcast, David is joined by Danielle Jablanski — cybersecurity strategist, OT advocate, and all-around force in the industrial cyber world — for a grounded conversation on what cybersecurity in industrial systems really means, why it's not a product or checklist, and how to approach it without getting lost in the buzzwords. Danielle brings not only deep knowledge but also practical field insight from her time at CISA, Nozomi Networks, and now STV.

What is OT Cybersecurity Anyway?

OT (Operational Technology) isn't just ICS (Industrial Control Systems) anymore. "OT now represents a broad set of technologies that covers process automation, instrumentation and field devices, cyber-physical operations, and industrial control systems," Danielle explains.

From water utilities and power grids to baggage claim systems and digital parking meters, these systems form the backbone of our critical infrastructure. And unlike IT systems, the primary concern isn't just data breaches—it's real-world, physical consequences.

"Segmentation is King"

Danielle is clear: "For the last five or six years, I've always said segmentation is king. I still think it's paramount." But that doesn't mean it's easy or one-size-fits-all.

The problem? Too many organizations buy visibility tools but neglect the basics like firewall rules or sound architecture. As Danielle notes, "You can't do any type of root cause analysis if you're not incorporating your entire operation into your purview."

Her takeaway: start with effects-based thinking. "Focus on the effect of something rather than the means."

By the way, did you know our very first post on this blog was about the Purdue model? Check it out here.

No More Choose-Your-Own-Adventure Security

Danielle challenges a common trap: jumping into cybersecurity with no strategy. "There's this leap to: I want a pen test, I want incident response, I want this, this, this. But are people even ready for a 150-page pen test that tells you everything you might want to fix over the next 10 years?"

Instead, she advocates for needs assessments, crown jewel analysis, and understanding fault tolerance. She says, "You need to understand what is impossible, what is not plausible… you can't do that without really getting to root cause analysis."

The Good, the Bad, and the Pointless Deliverables

When asked about good versus bad deliverables, Danielle doesn't hold back: "A red flag? People rush to procure tools." In contrast, green flags are often simple: "What forensic capacity do you have? What logs are you keeping? What's your retention policy?"

And watch out for this one: "Our integrator is responsible for cybersecurity." That's a red flag unless you've built a mechanism to test and verify that assumption.

Starting a Career in OT Security

For anyone curious about stepping into the field, Danielle's advice is encouraging and honest. "You can take any interested person and train them based on their interest and their aptitude." She recommends free online resources like learn.automationcommunity.com and Grady Hillhouse's Engineering in Plain Sight. Her bottom line?
"Do whatever you're interested in and do it as much as your resources allow for."

Why It Matters

Throughout the conversation, Danielle keeps it grounded: OT cybersecurity isn't about buying the latest tool or chasing the latest threat report. It's about resilience, design, human capacity, and real-world impact. "All the tools in the world are not going to help you if you haven't built the scaffolding."

Or, to put it more bluntly: this isn't a choose-your-own-adventure. It's about picking a strategy and sticking to it.

Let us know what you thought of this episode, and if you want more cyber content, get in touch. Like we promised during the episode, this topic is too important and we haven't touched OT cybersecurity enough… so we'll be launching a full cybersecurity series later this year.

Extra Resources

* Find Danielle on LinkedIn: https://www.linkedin.com/in/daniellejjablanski/
* Free learning: learn.automationcommunity.com
* Grady Hillhouse's book: Engineering in Plain Sight
* Copenhagen Industrial Cybersecurity Event: https://insightevents.dk/isc-cph/
* Danielle's talk at SANS, with a stunning visual summary: https://www.sans.org/blog/a-visual-summary-of-sans-ics-summit-2023/
In this episode of the IT/OT Insider Podcast, we welcome someone who doesn't come from cloud platforms, data infrastructure, or connectivity layers. Instead, he brings something equally vital: operational wisdom.

Raf Swinnen has spent his career inside factories. From Procter & Gamble to Kellogg's, and later Danone, Raf worked at the intersection of operations and transformation, guiding teams through continuous improvement and, later, digital initiatives.

What makes his perspective especially valuable? It's grounded in Lean thinking. Not as a buzzword, but as a real discipline. One that requires a sharp understanding of processes, a respect for people on the floor, and a strong filter for what actually adds value.

From Line Leader to Digital Change Agent

Raf didn't start in digital. He started on the floor: managing lines, people, safety, and performance. That experience shaped how he sees digital transformation today: as something that should support operations, not get in the way of them.

At Danone, he led digital initiatives at the Rotselaar site (Belgium). The job wasn't to implement more dashboards. It was to help teams use data to drive better decisions, without losing sight of the fundamentals.

"Tech to the Back" — What Digital Should Learn from Lean

One of the most powerful takeaways from this episode is Raf's principle of "Tech to the back."

"Digital solutions should not be front and center. People and processes should be. Tech should follow."

This is a strong antidote to the over-designed, solution-first approaches that often flood the industrial space. According to Raf, the biggest risk in digital projects isn't the technology — it's losing the problem along the way.

Three C's: Clarity, Consistency, and Coherence

As part of his work with leadership teams, Raf often introduces what he calls the 3 C's:

* Clarity – Where are we going, and why?
* Consistency – Are we reinforcing the same messages and systems?
* Coherence – Do our tools, apps, and data work together?

These are not slogans; they are essential behaviors for any transformation to stick. They also align closely with how we designed the ITOT.Academy, where cross-role learning and shared frameworks are front and center.

One of Raf's biggest contributions came through how he structured teams. In a newly created role as Digital Program Manager, he pulled in both IT and OT voices and even shifted reporting lines to foster true collaboration. He didn't look for tech wizards. He looked for people with enthusiasm. People who wanted to make a difference. These became his digital ambassadors: key voices from every shift, every team.

"When the night shift speaks up, you listen. They see the edge cases nobody else does."

Case Examples: Real Change Starts Small

Raf shared stories from his time at Danone, Kellogg's, and P&G, where transformation didn't come from big declarations — but from small, disciplined steps.

At one plant, it was about helping teams make better use of their shift handovers. At another, it meant cleaning up data before launching another round of training. At Danone, the challenge was scaling good ideas without flattening local ownership.

"Digital without context is noise. The real challenge is creating relevance at the point of use."

Digital with Discipline

Raf's story is a reminder that digital transformation doesn't start with technology; it starts with understanding the process, listening to the people who run it, and designing with clarity and purpose.
Whether it's Lean principles, cultural alignment, or simply asking better questions, his approach keeps the focus where it matters: on solving real problems in practical ways.

In a time when industrial tech is advancing fast and buzzwords multiply by the day, it's refreshing to hear someone say: let's not forget why we're doing this in the first place. If you're working in digital, operations, or somewhere in between, this episode is a pause-and-reflect moment. And maybe also a nudge: to push tech to the back, and put people and purpose out front.
📣 Quick note before we dive into all things open source: in our last episode, we announced the launch of the ITOT.Academy, a live online learning experience for professionals navigating the complex world of IT/OT collaboration. Our early bird seats are filling up fast. If you're serious about gaining practical skills (not just theory), now's the time to secure your spot. Don't wait too long: the first cohorts start on August 29 and September 5 (each cohort consists of six 2-hour sessions, and you receive all recordings). 👉 Full training program and registration via ITOT.Academy

In this episode of the IT/OT Insider Podcast, we sit down with Alexander Krüger, co-founder and CEO of United Manufacturing Hub (UMH), to talk about something that's both old and revolutionary in the industrial world: open source software.

This isn't about hobby projects or side experiments. It's about why open source is playing an increasingly important role in how factories move data, scale operations, and reduce vendor lock-in. Alexander brings both a technical and business perspective and shares what happens when a mechanical engineer dives deep into the world of cloud-native data infrastructure.

Not all Open Source is created equal

Most industrial companies still equate reliability with paying a vendor and signing a service-level agreement. But Alexander challenges that mindset. His team originally built UMH because they were frustrated with how hard it was to try, test, and scale traditional industrial software.

"We just wanted to get data from A to B in a factory, but realized that problem isn't really solved yet. So we made it open source."

Alexander is quick to point out that choosing open source doesn't automatically mean less risk, but it does mean different trade-offs. Key factors include:

* Licensing clarity
* Community health (Is it maintained? Is it active? The sketch below shows one quick way to check.)
* Governance (Who controls the roadmap? What happens if they change direction?)
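Community health is one factor you can actually measure. As a hedged illustration, this sketch pulls a few activity signals from GitHub's public REST API; the repository name is illustrative, and the fields used (pushed_at, archived, license, open_issues_count, stargazers_count) come from GitHub's documented repos endpoint.

```python
# Hedged sketch: rough "community health" signals for an open-source project,
# using GitHub's public REST API. Repository name is illustrative.
from datetime import datetime, timezone
import requests

def repo_health(owner: str, repo: str) -> dict:
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    pushed = datetime.fromisoformat(data["pushed_at"].replace("Z", "+00:00"))
    return {
        "archived": data["archived"],                           # governance red flag if True
        "license": (data.get("license") or {}).get("spdx_id"),  # licensing clarity
        "open_issues": data["open_issues_count"],               # backlog / activity signal
        "days_since_last_push": (datetime.now(timezone.utc) - pushed).days,
        "stars": data["stargazers_count"],
    }

print(repo_health("united-manufacturing-hub", "united-manufacturing-hub"))
```

None of these numbers settles the question on its own, but a project that is archived, unlicensed, or untouched for a year answers "Is it maintained?" very quickly.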
He even brings up the infamous example of vendors repackaging tools like Node-RED under different names, then charging for them without giving proper credit (or worse, shipping outdated versions).

"If you're already bundling open source into your software, why not be honest about it?"

What about reliability?

If you're an OT leader, you might still worry: who do I call at 2 a.m. when something breaks? Alexander's answer: you should be asking that question about any software, open or proprietary. Because often, what fails isn't the software itself; it's the integrations someone built in a rush, or the one engineer who knew how things worked and then left the company. With open source, there's at least transparency, control, and the ability to maintain continuity. You're not locked out of your own systems.

The Human Side: The rise of the hybrid engineer

One of the most interesting parts of the conversation was about who will make this all work. Alexander sees a new kind of engineer emerging: someone with a background in OT, but who enjoys learning IT concepts, tinkering with Docker, and embracing DevOps practices.

"We're looking for people who used to live in TIA Portal but now run state-of-the-art home automation in their free time."

This isn't about turning everyone into a software developer. But it is about building a culture where people are open to learning from both sides, and using modern ways of working and new tools to solve old problems.
Discover the program and claim your seat here: https://itot.academy

🎙️ In this special episode of the IT/OT Insider Podcast, David and Willem officially announce the launch of the ITOT.Academy!

After years of conversations with IT/OT professionals, consultants, and technology vendors, one thing became clear: there's a huge need for practical, vendor-neutral education to help people work together across IT and OT boundaries. The ITOT.Academy is designed to fill that gap.

What you'll learn in this episode:

* Why we created the Academy
* Who it's for: OT teams, IT teams, consultants, vendors
* The structure of the program: short, live, interactive sessions
* Why it's not about convergence but collaboration
* When the first groups will start
* How to sign up and join the first cohorts

🚀 Learn more and sign up at https://itot.academy
🎧 Subscribe for more honest conversations on bridging IT and OT.

Chapters

00:00 Introduction to ITOT Academy
01:38 Feedback from Subscribers
03:47 Target Audience for Training
07:24 Training Format and Structure
11:19 Core Concepts of the Training
13:32 Interactive Sessions and Wrap-Up
14:37 Launch Details and Closing Remarks
It is episode 31, and we're finally tackling a topic that somehow hadn't made the spotlight yet: IoT. And we couldn't have asked for two better guests to help us dive into it: Olivier Bloch and Ryan Kershaw.

This is not your usual shiny, buzzword-heavy conversation about the Internet of Things. Olivier and Ryan bring decades of hands-on experience from both sides of the IT/OT divide: Olivier from embedded systems, developer tooling, and cloud platforms; Ryan from the shop floor, instrumentation, and operational systems. Together, they're building bridges where others see walls.

IoT 101

Olivier kicks things off with a useful reset:

"IoT is anything that has compute and isn't a traditional computer. But more importantly, it's the layer that lets these devices contribute to a bigger system: by sharing data, receiving commands, and acting in context."

Olivier has seen IoT evolve from standalone embedded devices to edge-connected machines, then cloud-managed fleets, and now towards context-aware, autonomous systems that require real-time decision-making.

Ryan, meanwhile, brings us back to basics:

"When I started, a pH sensor gave you one number. Now, it gives you twelve: pH, temperature, calibration life, glass resistance... The challenge isn't getting the data. It's knowing what to do with it."

Infrastructure Convergence: The Myth of the One-Size-Fits-All Platform

We asked the obvious question: after all these years, why hasn't "one platform to rule them all" emerged for IoT? Olivier's take is straightforward:

"All the LEGO bricks are out there. The hard part is assembling them for your specific need. Most platforms try to do too much or don't understand the OT context."

You can connect anything these days. The real question is: should you? Start small, solve a problem, and build trust from there.

Why Firewalls are no longer enough

Another highlight: their views on security and zero trust in industrial environments. Olivier and Ryan both agree: the old-school "big fat firewall" between IT and OT isn't enough.

"You're not just defending a perimeter anymore. You need to assume compromise and secure each device, user, and transaction individually."

So what is Zero Trust, exactly? It's a cybersecurity model that assumes no device, user, or system should be automatically trusted, whether it's inside or outside the network perimeter. Instead of relying on a single barrier like a firewall, Zero Trust requires continuous verification of every request, with fine-grained access control, identity validation, and least-privilege permissions. It's a mindset shift: never trust, always verify.

They also emphasize that zero trust doesn't mean "connect everything." Sometimes the best security strategy is to not connect at all, or to use non-intrusive sensors instead of modifying legacy equipment.

Brownfield vs. Greenfield: Two different journeys

When it comes to industrial IoT, where you start has everything to do with what you can do.

Greenfield projects, like new plants or production lines, offer a clean slate. You can design the network architecture from the ground up, choose modern protocols like MQTT, and enforce consistent naming and data modeling across all assets. This kind of environment makes it much easier to build a scalable, reliable IoT system with fewer compromises.

Brownfield environments are more common and significantly more complex. These sites are full of legacy PLCs, outdated SCADA systems, and equipment that was never meant to connect to the internet.
The challenge is not just technical. It's also cultural, operational, and deeply embedded in the way people work.

"In brownfield, you can't rip and replace. You have to layer on carefully, respecting what works while slowly introducing what's new," said Ryan.

Olivier added that in either case, the mistake is the same: moving too fast without thinking ahead.

"The mistake people make in brownfield is to start too scrappy. It's tempting to just hack something together. But you'll regret it later when you need to scale or secure it."

Their advice is simple: even if you're solving one problem, design like you will solve five. That means using structured data models, modular components, and interfaces that can evolve.

Final Thoughts

This episode was a first deep dive into real-world IoT—not just the buzzwords, but the architecture, trade-offs, and decision-making behind building modern industrial systems. From embedded beginnings to UNS ambitions, Thing-Zero is showing that the future of IoT isn't about more tech. It's about making better choices, backed by cross-disciplinary teams who understand both shop floor realities and enterprise demands.

To learn more, visit thing-zero.com and check out Olivier's YouTube channel "The IoT Show" for insightful and developer-focused content.
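To make the "design like you will solve five" advice and Ryan's twelve-value pH sensor concrete, here is a hedged sketch of a structured, evolvable reading model; the field names are invented for illustration, not an industry standard.

```python
# Hedged sketch: a structured data model for the pH-sensor example, with room
# to evolve. Field names are illustrative, not an industry standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class PhReading:
    sensor_id: str
    ph: float
    temperature_c: float
    calibration_life_pct: float    # diagnostics modern sensors expose...
    glass_resistance_mohm: float   # ...alongside the primary value
    ts: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    extras: dict = field(default_factory=dict)  # space for fields you don't know yet

reading = PhReading("PH-101", ph=6.9, temperature_c=24.5,
                    calibration_life_pct=62.0, glass_resistance_mohm=310.0)
print(asdict(reading))  # serializes cleanly toward MQTT topics, historians, or a UNS
```

The point is not the dataclass itself but the habit: one explicit schema per reading, with an escape hatch for the use cases you haven't met yet.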
Today, we have the pleasure of speaking with Nikki Gonzales, Director of Business Development at Weintek USA, co-founder of the Automation Ladies podcast, and co-organizer of OT SCADA CON—a conference focused on the gritty, real-world challenges of industrial automation.

Unlike many of our guests, who often come from cloud-first, data-driven digitalization backgrounds, Nikki brings a refreshing and much-needed OT floor-level perspective. Her world is HMI screens, SCADA systems, manufacturers, machine builders, and the hard truths about where industry transformation actually stands today.

What's an HMI and Why Does It Matter?

In Nikki's words, an HMI is:

"The bridge between the operator, the machine, and the greater plant network."

It's often misunderstood as just a touchscreen replacement for buttons, but Nikki highlights that a modern HMI can do much more:

* Act as a gateway between isolated machines and plant-level networks.
* Enable remote access, alarm management, and contextual data sharing.
* Help standardize connectivity in mixed-vendor environments.

The HMI is often the first step in connecting legacy equipment to broader digital initiatives.

Industry 3.0 vs. Industry 4.0: Ground Reality Check

While the industry buzzes with Industry 4.0 (and 5.0 🙃) concepts, Nikki's view from the field is sobering:

"Most small manufacturers are still living in Industry 3.0—or earlier. They have mixed equipment, proprietary protocols, and minimal digitalization."

For the small manufacturers Nikki works with, transformation isn't about launching huge digital projects. It's about taking incremental steps:

* Upgrading a handful of sensors.
* Introducing remote monitoring.
* Standardizing alarm management (see the sketch below).
* Gradually building operational visibility.

"Transformation for small companies isn't about fancy AI. It's about survival—staying competitive, keeping workers, and staying in business."

With labor shortages, supply chain pressures, and rising cybersecurity threats, smaller manufacturers must adapt, but they have to do it in a way that is affordable, modular, and low-risk.

UNS, SCADA, and the State of Connectivity

Nikki also touched on how concepts like UNS (Unified Namespace) are being discussed:

"Everyone talks about UNS and cloud-first strategies. But in reality, most plants still have islands of automation. They have to bridge old PLCs, proprietary protocols, and aging SCADA systems first."

While UNS represents a desirable goal—a real-time, unified data model accessible across the enterprise—many manufacturers are years (or even decades) away from making that a reality without significant groundwork first. In this world, HMI upgrades, standardized communication protocols (like MQTT), and targeted SCADA modernization become the critical building blocks.
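What does "standardizing alarm management" look like at the smallest scale? Often just an agreed common shape for alarms coming from mixed vendors. Here is a hedged sketch of that idea; the source names, severity codes, and mappings are all invented for illustration.

```python
# Hedged sketch: normalize vendor-specific alarms into one common shape.
# Source names, severity codes, and mappings are illustrative.
from dataclasses import dataclass

@dataclass
class Alarm:
    source: str
    tag: str
    severity: str    # normalized: "low" | "high" | "critical"
    message: str

SEVERITY_MAP = {
    ("plc_a", 1): "low",        ("plc_a", 2): "high",       ("plc_a", 3): "critical",
    ("scada_b", "WARN"): "low", ("scada_b", "ALM"): "high", ("scada_b", "TRIP"): "critical",
}

def normalize(source: str, raw: dict) -> Alarm:
    return Alarm(
        source=source,
        tag=raw["tag"],
        severity=SEVERITY_MAP.get((source, raw["level"]), "high"),  # safe default
        message=raw.get("text", ""),
    )

print(normalize("plc_a", {"tag": "MIX-101.HiTemp", "level": 3, "text": "High temperature"}))
print(normalize("scada_b", {"tag": "PUMP-7", "level": "WARN", "text": "Seal wear"}))
```

A mapping table like this is exactly the kind of affordable, low-risk step a small manufacturer can take long before any UNS or cloud project.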
The Human Challenge: Culture and Workforce

Beyond the technology, Nikki highlighted the human side of transformation:

* Younger generations aren't attracted to repetitive, low-tech manufacturing jobs.
* Manual, isolated processes make hiring and retention even harder.
* Manufacturers must rethink how technology supports not just efficiency, but employee satisfaction.

The future of manufacturing depends not just on smarter machines, but on designing operations that attract and empower the next generation of workers.

Organizing a Conference from Scratch: OT SCADA CON

Before wrapping up, we asked Nikki about organizing OT SCADA CON.

"You need a little naivety, a lot of persistence, and the right partners. We jumped first, then figured out how to build the plane on the way down."

OT SCADA CON is designed by practitioners for practitioners: short technical sessions, no vendor pitches, no buzzword bingo. Just real, practical advice for the engineers, integrators, and plant technicians who make industrial operations work.

Final Thoughts

In a world obsessed with the future, Nikki reminds us: you can't build Industry 4.0 without first fixing Industry 3.0. And fixing it starts with respecting the complexity, valuing the small steps, and supporting the people on the ground who keep manufacturing running.

If you want to learn more about Nikki's work, visit automationladies.io and check out OT SCADA CON, taking place July 23–25, 2025.
Welcome to another episode of the IT/OT Insider Podcast. Today, we're diving into the world of Manufacturing Execution Systems (MES) and Manufacturing Operations Management (MOM) with Matt Barber, VP & GM MES at Infor. With over 15 years of experience, Matt has helped companies worldwide implement MES solutions, and he's now on a mission to educate the world about MES through his website, MESMatters.com.

MES is a topic that sparks a lot of debate, confusion, and, in many cases, hesitation. Where does it fit in a manufacturing tech stack? How does it relate to ERP, planning systems, quality systems, or industrial data platforms? And what's the real difference between MES and MOM? These are exactly the questions we're tackling today.

MES vs. MOM: What's the Difference?

Matt opens the discussion by addressing one of the misconceptions in the industry: what actually defines an MES, and how does it differ from MOM?

"An MES is a specific type of application that focuses on production-related activities: starting and stopping production orders, tracking downtime, recording scrap, and calculating OEE. That's the core of MES."

But MOM is broader. It extends beyond production into quality management, inventory tracking, and maintenance. MOM isn't a single application but rather a framework that connects multiple operational functions. Many MES vendors include some MOM capabilities, but few solutions cover all aspects of production, quality, inventory, and maintenance in one system. That's why companies need to carefully evaluate what they need when selecting a solution.

How Do Companies Start with MES?

Not every company wakes up one day and decides, "We need MES." The journey often starts with a single pain point: a need for OEE tracking, real-time visibility, or better quality control. Matt outlines two main approaches:

* Step-by-step approach: companies start with a single use case, such as tracking downtime and production efficiency. Once they see value, they expand into areas like quality control, inventory tracking, or maintenance scheduling. This approach minimizes risk and allows for quick wins.
* Enterprise-wide standardization: larger companies often take a broader approach, aiming to standardize MES across all sites. The goal is to ensure consistent processes, better data integration, and a unified system for all operators. While it requires more planning and investment, it creates a cohesive manufacturing strategy.

Both approaches are valid, but Matt emphasizes that even if companies start small, they should have a long-term vision of how MES will fit into their broader Industry 4.0 strategy.

The Role of OEE in MES

OEE (Overall Equipment Effectiveness) is one of the most common starting points for MES discussions. It measures how much good production output a company achieves compared to its theoretical maximum. The three key factors:

* Availability – How much time machines were available for production.
* Performance – How efficiently the machines ran during that time.
* Quality – How much of the output met quality standards.
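Multiplying the three factors gives the OEE figure. A short worked example (with invented shift numbers) shows how quickly the losses compound:

```python
# Worked OEE example: OEE = Availability x Performance x Quality.
# All shift numbers are invented for illustration.
planned_time_min = 480        # one 8-hour shift
downtime_min = 47             # breakdowns + changeovers
ideal_cycle_time_s = 1.0      # nameplate rate: one unit per second
total_count = 19_271          # units produced
good_count = 18_848           # units that passed quality checks

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance = (ideal_cycle_time_s * total_count) / (run_time_min * 60)
quality = good_count / total_count

oee = availability * performance * quality
print(f"A={availability:.1%}  P={performance:.1%}  Q={quality:.1%}  OEE={oee:.1%}")
# -> A=90.2%  P=74.2%  Q=97.8%  OEE=65.4%
```

Three individually decent-looking numbers multiply down to an OEE of about 65%, which is why OEE is such a common first argument for an MES.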
"You don't necessarily need an MES to track OEE. Some companies do it in spreadsheets or standalone IoT platforms. But if you want real-time OEE tracking that integrates with production orders, material usage, and quality data, MES is the natural solution."

People and Process: The Hardest Part of MES Implementation

One of the biggest challenges in MES projects isn't the technology; it's people and process change. Matt shares a common issue:

"Operators often have their own way of doing things. They know how to work around inefficiencies. But when an MES system is introduced, it enforces a standardized way of working, and that's where resistance can come in."

To make MES adoption successful, companies must:

* Get leadership buy-in – A clear vision from the top ensures the project gets the necessary resources and support.
* Engage operators early – Including shop floor workers in the process design increases adoption and usability.
* Define clear roles – Having global MES champions and local site super-users ensures both standardization and flexibility.

"You can have the best MES system in the world, but if no one uses it, it's worthless."

How the MES Market is Changing

MES has been around for decades, but the industry is evolving rapidly. Matt highlights three major trends:

* The rise of configurable MES. Historically, MES projects required custom coding and long implementation times. Now, companies like Infor are offering out-of-the-box, configurable MES platforms that can be set up in days instead of months. Vendors with configurable OTB applications can offer quick prototyping for manufacturing processes, ensuring customers benefit from agility and quick value realisation.
* The split between cloud-based MES and on-premise solutions. Many legacy MES systems were designed to run on-premise with deep integrations to shop floor equipment. However, cloud-based MES is growing, especially in multi-site enterprises that need centralized management and analytics. Matt recognises the importance of cloud-based applications, but highlights that there will always be at least a small on-premise part of the architecture for connecting to machines and other shop floor equipment.
* MES vs. the rise of "build-it-yourself" platforms. Some smaller manufacturers opt for the do-it-yourself approach, creating their own MES-light applications by layering in various technologies and software platforms. This trend is more common in smaller manufacturers that need flexibility and are comfortable developing their own industrial applications. However, for enterprise-wide standardization, an OTB configurable MES platform provides the best scalability and consistency, and the most advanced platforms allow end users to configure it themselves through master data, reports, and dashboards.

MES and Industrial Data Platforms

A big topic in manufacturing today is the role of data platforms. Should MES be the central hub for all manufacturing data, or should it feed into an enterprise-wide data lake? Matt explains the shift:
"Historically, MES data was stored inside MES and maybe shared with ERP. But now, with the rise of AI and advanced analytics, manufacturers want all their industrial data in one place, accessible for enterprise-wide insights."

This has led to two key changes:

* MES systems are increasingly required to push data into (industrial) data platforms.
* Companies are focusing on data contextualization, ensuring that production data, quality data, and maintenance data are all aligned for deeper analysis.

"MES is still critical, but it's no longer just an execution layer; it's a key source of contextualized data for AI and machine learning."

Where to Start with MES

For companies considering MES, Matt offers some practical advice:

* Understand your industry needs – Different MES solutions are better suited for different industries (food & beverage, automotive, pharma, etc.).
* Start with a clear business case – Whether it's reducing downtime, improving quality, or optimizing material usage, have a clear goal.
* Choose between out-of-the-box and build-your-own – Large enterprises may benefit from standardized MES, while smaller companies might prefer DIY industrial platforms.
* Don't ignore change management – Successful MES projects require strong collaboration between IT, OT, and shop floor operators.

"It's hard. But it's worth it."

Final Thoughts

MES is evolving faster than ever, blending traditional execution functions with modern cloud analytics. Whether companies take a step-by-step or enterprise-wide approach, MES remains a critical piece of the smart manufacturing puzzle. For more MES insights, check out mesmatters.com or Matt's LinkedIn page, and don't forget to subscribe to The IT/OT Insider for the latest discussions on bridging IT and OT.
In this episode of the IT/OT Insider Podcast, we're taking a short detour from our usual deep dives into industrial topics to explore something broader, but equally vital: how enterprises evolve.

We're joined by Stephen Fishman and Matt McLarty, authors of the book Unbundling the Enterprise, published by IT Revolution. Stephen is North America Field CTO at Boomi, and Matt is the company's Global CTO. But more importantly for this conversation, they're long-time collaborators with a shared passion for modularity, APIs, and systems thinking.

We talk about the power of preparation over prediction, about how modular systems and composable strategies can future-proof organizations, and, most unexpectedly, how happy accidents (yes, "OOOPs") can unlock unexpected success.

From Creative Writing to Enterprise Architecture

Stephen and Matt first connected over a decade ago, when Stephen was leading app development at Cox Automotive and Matt was heading up the API Academy at CA Technologies. Their collaboration grew from a shared curiosity: why were APIs making some companies wildly successful, and why did that success often seem... unplanned?

They didn't want to write yet another how-to book on APIs. Instead, they wanted to tell the bigger story: why companies that invested in modularity were able to respond faster, seize opportunities more easily, and unlock new business models.

"We wanted to bridge the gap between architects and the business. Help tech teams articulate why they want to build things in a modular way, and help business folks understand the financial value behind those decisions." – Stephen Fishman

OOOPs: The Power of Happy Accidents

One of the big themes in their book is what the authors call OOOPs (not a typo, but an acronym).

"Google Maps is the classic story," Stephen explains. "People started scraping the APIs and using them in ways Google never planned, until they turned it into a massive business. That was a happy accident. And it happened again and again."

So they gave those happy accidents a structure: Optionality, Opportunism, and Optimization.

* Optionality: Modular systems open the door to future opportunities you can't yet predict.
* Opportunism: You need ways to identify where to unbundle or where to apply APIs first.
* Optimization: Continuously measuring and refining based on real usage and feedback.

This framework makes the case that modularity isn't just a technical preference; it's a business strategy. Read more about OOOPs in this article.

S-Curves, Options, and Becoming the House

Another concept that runs through the book is the S-curve of growth: the idea that all successful innovations follow a familiar pattern of slow start, rapid rise, plateau, and eventual decline. Most companies ride that first curve too long, betting too heavily on what worked yesterday. The challenge is recognizing when you've peaked, and investing in what comes next.

"Most people don't know where they are on the S-curve," says Stephen. "They think they're still climbing, but they're really on the plateau."

That's where optionality comes in again: the ability to explore multiple futures at low cost, hedging your bets without breaking the bank. They borrow the idea of "convex tinkering": placing lots of small, low-cost bets with the potential for high upside.
"Casinos don't gamble," Stephen says. "They set the rules. They optimize for asymmetric value. That's what this book is trying to teach organizations: how to become the house."

We also wrote about the importance of having cost-effective ways to work with data in this previous post.

Unbundling is Not Just for Big Tech

You might think this is a book for Google, Amazon, or SaaS unicorns, but the lessons apply to every enterprise. Even in manufacturing.

"The automotive world has always understood modularity," Stephen says. "Platforms existed in car design before they existed in tech. When you separate chassis from body and engine, you gain flexibility and efficiency."

And the same applies in IT and OT:

* Building platforms of reusable APIs and services
* Designing products and processes with change in mind
* Investing in capabilities close to revenue, not just internal shared services

Even internal IT teams benefit from this mindset. Once a solution is decontextualized and reusable, it can scale across departments and generate asymmetric value internally, without needing to sell to the outside world.

All Organization Designs Suck (and That's Okay)

A memorable quote in the book comes from an interview with David Rice (SVP Product and Engineering at Cox Automotive): "All organization designs suck."

It's a reminder that there's no perfect org chart, no flawless model. Instead, success comes from designing your systems, your teams, and your investments with awareness of their limits, and building flexibility around them.

"APIs aren't a silver bullet. Neither is GenAI. But if you design your systems, teams, and investments around modularity and resilience, you're better prepared for whatever future emerges."

We highly recommend the book Team Topologies as a further read on this topic.

Final Thoughts

Unbundling the Enterprise is not a technical manual. It's a mindset: a playbook for organizations that want to survive disruption, scale intelligently, and embrace change without betting everything on a single future. The ideas in this book are especially relevant for those working on digital transformation in complex industries. It's not always about moving fast; it's about moving smart, building for change, and staying ready.

You can find the book on IT Revolution or wherever great tech books are sold. And be sure to check out their companion article on OOOPs on the IT Revolution blog.

Until next time, and stay modular! 🙂
Welcome to another episode of the IT/OT Insider Podcast. Today, we're diving into visibility, traceability, and real-time analytics with Tim Butler, CEO and founder of Tego.

For the last 20 years, Tego has specialized in tracking and managing critical assets in industries like aerospace, pharmaceuticals, and energy. The company designed the world's first rugged, high-memory passive UHF RFID chip, helping companies like Airbus and Boeing digitize lifecycle maintenance on their aircraft.

It's a fascinating topic. How do you keep track of assets that move across the world every day? How do you embed intelligence directly into physical components? How does all of this connect to the broader challenge of IT and OT convergence? And how do you create a unified view that connects people, parts, and processes to business outcomes?

Let's dive in!

From Serial Entrepreneur to Asset Intelligence

Tim's journey into asset intelligence started 20 years ago, when he saw a major opportunity in industrial RFID technology.

"At the time, RFID chips had only 96 or 128 bits of storage. That was enough for a serial number, but not much else. We set out to design a chip that could hold thousands of times more memory, and that completely changed the game."

That chip became the foundation for Tego's work in aerospace:

* Boeing and Airbus needed a better way to track assets on planes.
* Maintenance logs and compliance records needed to (virtually) move with the asset itself.
* Standard RFID solutions didn't have enough memory or durability to survive extreme conditions.

By designing high-memory RFID chips, Tego helped digitize aircraft maintenance and inventory management. They co-authored the ATA Spec 2000 Chapter 9-5 standards that are now widely used in aerospace.

"The challenge was clear: planes fly all over the world, so the data needed to travel with them. We had to embed intelligence directly into the assets themselves."

A Real-World Use Case: Tracking Aircraft Components with RFID

One of the best examples of Tego's impact is in the aerospace industry.

The Challenge:

* Aircraft components need regular maintenance and compliance tracking.
* Traditional tracking methods relied on centralized databases, which weren't always accessible.
* When a plane lands, maintenance teams need instant access to accurate, up-to-date records.

The Solution:

* Every critical component (seats, life vests, oxygen generators, galley equipment, etc.) is tagged with a high-memory RFID chip (yes, the seat on your next flight probably has one 🙂).
* When a technician scans a tag, they instantly access the asset's history.

The Impact:

* Reduced maintenance delays: technicians no longer have to search for data across multiple systems.
* Improved traceability: every asset has a digital history that travels with it.
* Compliance enforcement: airlines can quickly verify whether components meet regulatory requirements.

"This isn't just about making inventory tracking easier. It's about ensuring safety, reducing downtime, and making compliance effortless."
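To make "data travels with the asset" tangible, here is a minimal sketch of packing a maintenance record into a tag's limited user memory. The record layout and the 8 KB capacity are our own illustrative assumptions, not Tego's actual chip format or the ATA Spec 2000 encoding.

```python
# Hypothetical sketch: serialize and compress a maintenance record so it fits
# in a high-memory tag's user bank. Layout and capacity are assumptions.
import json
import zlib

record = {
    "part_no": "OXY-GEN-4411",
    "serial": "SN-0092331",
    "status": "serviceable",
    "history": [
        {"date": "2024-07-02", "action": "replaced seal"},
        {"date": "2025-01-14", "action": "routine inspection"},
    ],
}

USER_MEMORY_BYTES = 8 * 1024  # assumed capacity of a high-memory tag

payload = zlib.compress(json.dumps(record, separators=(",", ":")).encode())
assert len(payload) <= USER_MEMORY_BYTES, "record no longer fits on the tag"
print(f"{len(payload)} of {USER_MEMORY_BYTES} bytes used")
```

The point of the exercise: once the record physically rides on the component, a technician with a handheld reader gets the full history even when no central database is reachable.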
The IT vs. OT Divide in Aerospace

A major theme of our podcast is the convergence of IT and OT, and in aerospace, that divide is particularly pronounced. Tim breaks it down:

* IT teams manage enterprise data: ERP systems, databases, and security.
* OT teams manage physical assets: maintenance operations, plant floors, and repair workflows.
* Both need access to the same data, but they use it differently.

"IT thinks in terms of databases and networks. OT thinks in terms of real-world processes. The goal isn't just connecting IT and OT, it's making sure they both get the data they need in a usable way."

The Future of AI and Asset Intelligence

With all the buzz around AI and Large Language Models (LLMs), we asked Tim how these technologies are impacting industrial asset intelligence. His take? AI is only as good as the data feeding it.

"If you don't have structured, reliable data, AI can't do much for you. That's why asset intelligence matters: it gives AI the high-quality data it needs to make meaningful predictions."

Some of the key trends he sees:

* AI-powered maintenance recommendations: analyzing historical asset data to predict failures before they happen.
* Automated compliance checks: using AI to validate and flag compliance issues before inspections.
* Smart inventory optimization: ensuring that spare parts are always available where they're needed most.

But the biggest challenge? Data consistency.

"AI works best when it has standardized, structured data. That's why using industry standards, like ATA Spec 2000 for aerospace, is so important."

Final Thoughts

Industrial asset intelligence is evolving rapidly, and Tego is leading the way in making assets smarter, more traceable, and more autonomous. From tracking aircraft components to ensuring regulatory compliance in pharma, Tego's technology blends the physical and digital worlds, making it easier for companies to manage assets at global scale. Together with Tego, businesses create a single source of truth for people, processes, and parts that empowers operations with the vision to move forward.

If you're interested in learning more about Tego and their approach to asset intelligence, visit www.tegoinc.com.
Welcome to the final episode of our special Industrial DataOps podcast series. And what better way to close out the series than with Dominik Obermaier, CEO and co-founder of HiveMQ, one of the most recognized names when it comes to MQTT and the Unified Namespace (UNS).

Dominik has been at the heart of the MQTT story from the very beginning: contributing to the specification, building the company from the ground up, and helping some of the world's largest manufacturers, energy providers, and logistics companies reimagine how they move and use industrial data.

Every Company is Becoming an IoT Company

Dominik opened with a striking analogy:

"Just like every company became a computer company in the '80s and an internet company in the '90s, we believe every company is becoming an IoT company."

That belief underpins HiveMQ's mission: to build the digital backbone for the Internet of Things, connecting physical assets to digital applications across the enterprise. Today, HiveMQ is used by companies like BMW, Mercedes-Benz, and Lilly to enable real-time data exchange from edge to cloud, using open standards that ensure long-term flexibility and interoperability.

What is MQTT?

For those new to MQTT, Dominik explains what it is: a lightweight, open protocol built for real-time, scalable, and decoupled communication. Originally developed in the late 1990s for oil pipeline monitoring, MQTT was designed to minimize bandwidth, maximize reliability, and function in unstable network conditions.

It uses a publish-subscribe pattern, allowing producers and consumers of data to remain decoupled and highly scalable, which is ideal for IoT and OT environments, where devices range from PLCs to cloud applications.

"HTTP works for the internet of humans. MQTT is the protocol for the internet of things."

The real breakthrough came when MQTT became an open standard. HiveMQ has been a champion of MQTT ever since, helping manufacturers escape vendor lock-in and build interoperable data ecosystems.
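The publish-subscribe pattern is easiest to see in code. Below is a minimal sketch using the paho-mqtt client (1.x API) against any MQTT broker; the broker address and topic naming are illustrative.

```python
# Minimal publish/subscribe sketch (paho-mqtt 1.x API); broker and topic are
# illustrative. Producer and consumer never know about each other.
import json
import paho.mqtt.client as mqtt

BROKER = "localhost"  # e.g. a HiveMQ broker
TOPIC = "plant1/packaging/line4/filler/temperature"

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC)
subscriber.loop_start()  # handle incoming messages in a background thread

publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.loop_start()
publisher.publish(TOPIC, json.dumps({"value": 74.2, "unit": "degC"}), qos=1)
```

Note how the topic path doubles as a namespace: that is precisely the property the Unified Namespace pattern, discussed below, builds on.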
From Broker to Backbone: Mapping HiveMQ to the Capability Model

HiveMQ is often described as an MQTT broker, but as Dominik made clear, it's far more than that. Let's map their offerings to our Industrial DataOps Capability Map:

Connectivity & Edge Ingest

* HiveMQ Edge: a free, open-source gateway that connects to OPC UA, Modbus, BACnet, and more.
* Converts proprietary protocols into MQTT, making data accessible and reusable.

Data Transport & Integration

* HiveMQ Broker: the core engine that enables highly reliable, real-time data movement across millions of devices.
* Scales from single factories to hundreds of millions of data tags.

Contextualization & Governance

* HiveMQ Data Hub and Pulse: tools for data quality, permissions, history, and contextual metadata.
* Pulse enables distributed intelligence and manages the Unified Namespace across global sites.

UNS Management & Visualization

* HiveMQ Pulse is a true UNS solution that provides structure, data models, and insights without relying on centralized historians.
* Allows tracing of process changes, root cause analysis, and real-time decision support.

Building the Foundation for Real-Time Enterprise Data

Few topics have gained as much traction recently as the Unified Namespace. But as Dominik points out, UNS is not a product, it's a pattern. And not all implementations are created equal.

"Some people claim a data lake is a UNS. Others say it's OPC UA. It's not. UNS is about having a shared, real-time data structure that's accessible across the enterprise."

HiveMQ Pulse provides a managed, governed, and contextualized UNS, allowing companies to:

* Map their assets and processes into a structured namespace.
* Apply insights and rules at the edge, without waiting for data to reach the cloud.
* Retain historical context while staying close to real-time operations.

"A good data model will solve problems before you even need AI. You don't need fancy tech. You need structured data and the ability to ask the right questions."

Fix the Org Before the Tech

One of the most important takeaways from this conversation was organizational readiness. Dominik was clear: "You can't fix an organizational problem with technology."

Successful projects often depend on having:

* A digital transformation bridge team between IT and OT.
* Clear ownership and budget, often driven by a C-level mandate.
* A shared vocabulary, so teams can align on definitions, expectations, and outcomes.

To help customers succeed, HiveMQ provides onboarding programs, certifications, and educational content to establish this common language.

Use Case

One specific use case we'd like to highlight is at Lilly, a pharmaceutical company.

Getting Started with HiveMQ & UNS

Dominik shared practical advice for companies just starting out:

* Begin with open-source HiveMQ Edge and Cloud: no license or sales team required.
* Start small: connect one PLC, stream one tag, and build from there.
* Demonstrate value quickly: show how a single insight (like predicting downtime from a temperature drift) can justify further investment.
* Then scale: build a sustainable, standards-based data architecture with the support of experienced partners.

Final Thoughts: A Fitting End to the Series

This episode was the perfect way to end our Industrial DataOps podcast series: a conversation that connected the dots between open standards, scalable data architecture, organizational design, and future-ready analytics (and don't worry, we have lots of other podcast ideas for the months to come 🙂).

HiveMQ's journey from a small startup to powering the largest industrial IoT deployments in the world is proof that open, scalable, and reliable infrastructure will be the foundation for the next generation of digital manufacturing.

If you want to learn more about MQTT, UNS, or HiveMQ Pulse, check out the excellent content at www.hivemq.com or their article on DataOps.
Welcome to Episode 11! As we get closer to Hannover Messe 2025, we're also approaching the final episodes of this podcast series. Today we have two fantastic guests from AVEVA: Roberto Serrano Hernández, Technology Evangelist for the CONNECT industrial intelligence platform, and Clemens Schönlein, Technology Evangelist for AI and Analytics.

Together, they bring a unique mix of deep technical insight, real-world project experience, and a passion for making industrial data usable, actionable, and valuable.

We cover a lot in this episode: from the evolution of AVEVA's CONNECT industrial intelligence platform, to real-world use cases, data science best practices, and the cloud vs. on-prem debate. It's a powerful conversation on how to build scalable, trusted, and operator-driven data solutions.

What is CONNECT?

Let's start with the big picture. What is the CONNECT industrial intelligence platform? As Roberto explains:

"CONNECT is an open and neutral industrial data platform. It brings together all the data from AVEVA systems, and beyond, and helps companies unlock value from their operational footprint."

This isn't just another historian or dashboard tool. CONNECT is a cloud-native platform that allows manufacturers to:

* Connect to on-prem systems.
* Store, contextualize, and analyze data.
* Visualize it with built-in tools or share it with AI platforms like Databricks.
* Enable both data scientists and domain experts to collaborate on decision-making.

It's also built to make the transition to cloud as seamless as possible, while preserving compatibility with legacy systems.

"CONNECT is for customers who want to do more: close the loop, enable AI, and future-proof their data strategy."

Where CONNECT Fits in the Industrial Data Capability Map

Roberto breaks it down neatly:

* Data Acquisition: strong roots in industrial protocols and legacy system integration.
* Data Storage and Delivery: the core strength of CONNECT, delivering clean, contextualized, and trusted data in the cloud.
* Self-Service Analytics & Visualization: tools for both data scientists and OT operators to work directly with data.
* Ecosystem Integration: CONNECT plays well with Databricks, Snowflake, and other analytics platforms.

But Clemens adds an important point:

"The point isn't just analytics. It's about getting insights back to the operator. You can't stop at a dashboard. Real value comes when change happens on the shop floor."

Use Case Spotlight: Stopping Downtime with Data Science at Amcor

One of the best examples of CONNECT in action is the case of Amcor, a global packaging manufacturer producing the plastic film used in things like chip bags and blister packs.

The Problem:

* Machines were stopping unpredictably, causing expensive downtime.
* Traditional monitoring couldn't explain why.
* Root causes were hidden upstream in the process.

The Solution:

* CONNECT was used to combine MES data and historian data in one view.
* Using built-in analytics tools, the team found that a minor drift in an upstream temperature setpoint was changing the plastic's viscosity, leading to stoppages further down the line.
* They created a correlation model, mapped it to ideal process parameters, and fed the insight back to operators.

"The cool part was the speed," said Clemens. "What used to take months of Excel wrangling and back-and-forth can now be done in minutes."
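As a flavor of that kind of analysis, here is a small pandas sketch that checks whether drift in an upstream temperature lines up with later line stops. The file, column names, and the lag are our own illustrative assumptions, not Amcor's actual data or AVEVA's tooling.

```python
# Illustrative sketch: correlate a lagged upstream temperature with line stops.
# File, columns, and the 30-sample lag are assumptions for the example.
import pandas as pd

df = pd.read_csv("line_data.csv", parse_dates=["ts"])  # ts, upstream_temp, line_stopped

# Material takes time to travel down the line, so lag the upstream signal.
df["upstream_temp_lagged"] = df["upstream_temp"].shift(30)

corr = df["upstream_temp_lagged"].corr(df["line_stopped"].astype(float))
print(f"correlation between lagged upstream temperature and stops: {corr:+.2f}")
```

A strong correlation on the lagged signal is the hint that the root cause sits upstream, exactly the pattern the Amcor team found.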
The Human Side of Industrial Data: Start with the Operator

One of the most powerful themes in this episode is the importance of human-centric design in analytics. Clemens shares from his own experience:

"I used to spend months building an advanced model, only to find out the data wasn't trusted or the operator didn't care. Now I start by involving the operator from Day 1."

This isn't just about better UX. It's about:

* Getting faster buy-in.
* Shortening time-to-value.
* Ensuring that insights are actionable and respected.

Data Management and Scaling Excellence

We also touched on the age-old challenge of data management. AVEVA's take? Don't over-architect. Start delivering value.

"Standardization is important, but don't wait five years to get it perfect. Show value early, and the standardization will follow."

And when it comes to building centers of excellence, Clemens offers a simple yet powerful principle:

"Talk to the people who press the button. If they don't trust your model, they won't use it."

Final Thoughts

As we edge closer to Hannover Messe, and to the close of this podcast series, this episode with Clemens and Roberto reminds us what Industrial DataOps is all about:

* Useful data
* Actionable insights
* Empowered people
* Scalable architecture

If you want to learn more about AVEVA's CONNECT industrial intelligence platform and their work in AI and ET/OT/IT convergence, visit www.aveva.com.
Welcome to Episode 10 of the IT/OT Insider Podcast. Today, we're pleased to feature Anupam Gupta, Co-Founder & President North Americas at Celebal Technologies, to discuss how enterprise systems, AI, and modern data architectures are converging in manufacturing.

Celebal Technologies is a key partner of SAP, Microsoft, and Databricks, specializing in bridging traditional enterprise IT systems with modern cloud data and AI innovations. Unlike many of our past guests, who come from a manufacturing-first perspective, Celebal Technologies approaches the challenge from the enterprise side, starting with ERP and extending into industrial data, AI, and automation.

Anupam's journey began as a developer at SAP, later moving into consulting and enterprise data solutions. Now, with Celebal Technologies, he is helping manufacturers combine ERP data, OT data, and AI-driven insights into scalable Lakehouse architectures that support automation, analytics, and business transformation.

ERP as the Brain of the Enterprise

One of the most interesting points in our conversation was the role of ERP (Enterprise Resource Planning) systems in manufacturing.

"ERP is the brain of the enterprise. You can replace individual body parts, but you can't transplant the brain. The same applies to ERP: it integrates finance, logistics, inventory, HR, and supply chain into a single system of record."

While ERP is critical, it doesn't cover everything. The biggest gap? Manufacturing execution and OT data.

* ERP handles business transactions: orders, invoices, inventory, financials.
* MES and OT systems handle operations: machine status, process execution, real-time sensor data.

Traditionally, these two have been kept separate, but modern manufacturers need both worlds to work together. That's where integrated data platforms come in.

Bridging Enterprise IT and Manufacturing OT

Celebal Technologies specializes in merging enterprise and industrial data, bringing IT and OT together in a structured, scalable way. Anupam explains:

"When we talk about Celebal Tech, we say we sit at the right intersection of traditional enterprise IT and modern cloud innovation. We understand ERP, but we also know how to integrate it with IoT, AI, and automation."

Key focus areas include:

* Unifying ERP, MES, and OT data into a central Lakehouse architecture.
* Applying AI to optimize operations, logistics, and supply chain decisions.
* Enabling real-time data processing at the edge while leveraging cloud for scalability.

This requires a shift from traditional data warehouses to modern Lakehouse architectures, which brings us to the next big topic.

What is a Lakehouse and Why Does It Matter?

Most people are familiar with data lakes and data warehouses; a Lakehouse combines the best of both.

Traditional approaches:

* Data warehouses: structured, governed, and optimized for business analytics, but not flexible enough for AI or IoT data.
* Data lakes: can store raw data from many sources but often become data swamps, difficult to manage and analyze.

Lakehouse benefits:

* Combines structured and unstructured data: supports ERP transactions, sensor data, IoT streams, and documents in a single system.
* High-performance analytics: real-time queries, machine learning, and AI workloads.
* Governance and security: ensures data quality, lineage, and access control.

"A Lakehouse lets you store IoT and ERP data in the same environment while enabling AI and automation on top of it. That's a game-changer for manufacturing."
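Here is a minimal PySpark sketch of that idea, assuming a Spark session with Delta Lake configured; the paths and schemas are illustrative, not a Celebal reference architecture.

```python
# Minimal Lakehouse sketch: ERP transactions and sensor readings land in the
# same Delta storage layer and can be joined in one query.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

orders = spark.createDataFrame(
    [("PO-1001", "PUMP-7", 250000.0)], ["order_id", "asset", "amount"]
)
readings = spark.createDataFrame(
    [("PUMP-7", "2025-03-01T10:00:00", 81.4)], ["asset", "ts", "vibration"]
)

# Structured (ERP) and semi-structured (IoT) data share one storage layer.
orders.write.format("delta").mode("overwrite").save("/lakehouse/erp_orders")
readings.write.format("delta").mode("overwrite").save("/lakehouse/sensor_readings")

# Business context and operational data in a single query:
spark.read.format("delta").load("/lakehouse/erp_orders") \
    .join(spark.read.format("delta").load("/lakehouse/sensor_readings"), "asset") \
    .show()
```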
That's a game-changer for manufacturing."Celebal Tech is a top partner for Databricks and Microsoft in this space, helping companies migrate from legacy ERP systems to modern AI-powered data platforms.There's More to AI Than GenAIWith all the hype around Generative AI (GenAI), it's important to remember that AI in manufacturing goes far beyond chatbots and text generation."Many companies are getting caught up in the GenAI hype, but the real value in manufacturing AI comes from structured, industrial data models and automation."Celebal Tech is seeing two major AI trends:* AI for predictive maintenance and real-time analytics → Using sensor and operational data to predict failures, optimize production, and automate decisions.* AI-driven automation with agent-based models → AI is moving from just providing recommendations to executing complex tasks in ERP and MES environments.GenAI has a role to play, but:* Many companies are converting structured data into unstructured text just to apply GenAI—which doesn't always make sense.* Enterprises need explainability and trust before AI can take over critical operations."Think of AI in manufacturing like self-driving cars—we're not fully autonomous yet, but we're moving toward AI-assisted automation."The key to success? Good data governance, well-structured industrial data, and AI models that operators can trust.Final Thoughts: Scaling DataOps and AI in ManufacturingFor manufacturers looking to modernize their data strategy, Anupam offers three key takeaways:* Unify ERP and OT data → AI and analytics only work when data is structured and connected across systems.* Invest in a Lakehouse approach → It's the best way to combine structured business data with real-time industrial data.* AI needs governance→ Without trust, transparency, and explainability, AI won't be adopted at scale."You don't have to replace your ERP or MES, but you do need a data strategy that enables AI, automation, and better decision-making."If you want to learn more about Celebal Technologies and how they're bridging AI, ERP, and manufacturing data, visit www.celebaltech.com.Stay Tuned for More!Subscribe to our podcast and blog to stay updated on the latest trends in Industrial Data, AI, and IT/OT convergence.🚀 See you in the next episode!Youtube: https://www.youtube.com/@TheITOTInsider Apple Podcasts: Spotify Podcasts: Disclaimer: The views and opinions expressed in this interview are those of the interviewee and do not necessarily reflect the official policy or position of The IT/OT Insider. This content is provided for informational purposes only and should not be seen as an endorsement by The IT/OT Insider of any products, services, or strategies discussed. We encourage our readers and listeners to consider the information presented and make their own informed decisions. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit itotinsider.substack.com
Welcome to Episode 9 in our special DataOps series. We're getting closer to Hannover Messe, and thus also to the end of this series. We still have some great episodes ahead of us, with AVEVA, HiveMQ, and Celebal Technologies joining us in the days to come (and don't worry, this is not the end of our podcasts; many other great stories are already recorded and will be aired in April!).

In this episode, we're joined by David Rogers, Senior Solutions Architect at Databricks, to explore how AI, data governance, and cloud-scale analytics are reshaping manufacturing.

David has spent years at the intersection of manufacturing, AI, and enterprise data strategy, working at companies like Boeing and SightMachine before joining Databricks. Now, he's leading the charge in helping manufacturers unlock value from their data, not just by dumping it into the cloud, but by structuring, governing, and applying AI effectively.

Databricks is one of the biggest names in the data and AI space, known for lakehouse architecture, AI workloads, and large-scale data processing. But how does that apply to the shop floor, supply chain, and industrial operations? That's exactly what we're unpacking today.

What is Databricks and How Does It Fit into Manufacturing?

Databricks is a cloud-native data platform that runs on AWS, Azure, and Google Cloud, providing an integrated set of tools for ETL, AI, and analytics. David breaks it down:

"We provide a platform for any data and AI workload, whether it's real-time streaming, predictive maintenance, or large-scale AI models."

In the manufacturing context, this means:

* Bringing factory data into the cloud to enable AI-driven decision-making.
* Unifying different data types (SCADA, MES, ERP, and even video data) to create a complete operational view.
* Applying AI models to optimize production, reduce downtime, and improve quality.

"Manufacturers deal with physical assets, which means their data comes from machines, sensors, and real-world processes. The challenge is structuring and governing that data so it's usable at scale."

Why Data Governance Matters More Than Ever

Governance is becoming a critical challenge in AI-driven manufacturing. David explains why:

"AI is only as good as the data feeding it. If you don't have structured, high-quality data, your AI models won't deliver real value."
Some key challenges manufacturers face:

* Data silos: OT data (SCADA, historians) and IT data (ERP, MES) often remain disconnected.
* Lack of lineage: companies struggle to track how data is transformed, making AI deployments unreliable.
* Access control issues: manufacturers work with multiple vendors, suppliers, and partners, making data security and sharing complex.

Databricks addresses this through Unity Catalog, an open-source data governance framework that helps manufacturers:

* Control access: manage who can see what data across the organization.
* Track data lineage: ensure transparency in how data is processed and used.
* Enforce compliance: automate data retention policies and regional data sovereignty rules.

"Data governance isn't just about security. It's about making sure the right people have access to the right data at the right time."

A Real-World Use Case: AI-Driven Quality Control in Automotive

One of the best examples of how Databricks is applied in manufacturing is in the automotive industry, where manufacturers are using AI and multimodal data to improve the yield of battery packs for EVs.

The Challenge:

* Traditional quality control relies heavily on human inspection, which is time-consuming and inconsistent.
* Sensor data alone isn't enough: video, images, and even operator notes play a role in defect detection.
* AI models need massive, well-governed datasets to detect patterns and predict failures.

The Solution:

* The company ingested data from SCADA, MES, and video inspection cameras into Databricks.
* Using machine learning, they automatically detected defects in real time.
* AI models were trained on historical quality failures, allowing the system to predict when a defect might occur.
* All of this was done at cloud scale, using governed data pipelines to ensure traceability.

"Manufacturers need AI that works across multiple data types: time-series, video, sensor logs, and operator notes. That's the future of AI in manufacturing."

Scaling AI in Manufacturing: What Works?

A big challenge for manufacturers is moving beyond proof-of-concepts and actually scaling AI deployments. David highlights some key lessons from successful projects:

* Start with the right use case: AI should be solving a high-value problem, not just running as an experiment.
* Ensure data quality from the beginning: poor data leads to poor AI models. Structure and govern your data first.
* Make AI models explainable: black-box AI models won't gain operator trust. Make sure users can understand how predictions are made.
* Balance cloud and edge: some AI workloads belong in the cloud, while others need to run at the edge for real-time decision-making.

"It's not about collecting ALL the data. It's about collecting the RIGHT data and applying AI where it actually makes a difference."

Unified Namespace (UNS) and Industrial DataOps

David also touches on the role of the Unified Namespace (UNS) in structuring manufacturing data.

"If you don't have UNS, your data will be an unstructured mess. You need context around what product was running, on what line, in what factory."

In Databricks, governance and UNS go hand in hand:

* UNS provides real-time context at the factory level.
* Databricks ensures governance and scalability at the enterprise level.

"You can't build scalable AI without structured, contextualized data. That's why UNS and governance matter."
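That kind of context can be added with an as-of join: each raw reading inherits the batch and product that were running on its line at that moment. A small pandas sketch with illustrative names and data:

```python
# Contextualization sketch: tag raw readings with the batch/product running at
# the time, via an as-of join. Names and data are illustrative.
import pandas as pd

readings = pd.DataFrame({
    "ts": pd.to_datetime(["2025-03-01 10:05", "2025-03-01 11:20"]),
    "line": ["L1", "L1"],
    "cell_voltage": [3.61, 3.42],
})
batches = pd.DataFrame({
    "ts": pd.to_datetime(["2025-03-01 10:00", "2025-03-01 11:00"]),
    "line": ["L1", "L1"],
    "batch": ["B-881", "B-882"],
    "product": ["pack-A", "pack-B"],
})

# Each reading gets the most recent batch record for its line.
contextualized = pd.merge_asof(
    readings.sort_values("ts"),
    batches.sort_values("ts"),
    on="ts", by="line", direction="backward",
)
print(contextualized)
```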
Final Thoughts: Where is Industrial AI Heading?

* More real-time AI at the edge: AI models will increasingly run on local devices, reducing cloud dependencies.
* Multimodal AI will become standard: combining sensor data, images, and operator inputs will drive more accurate predictions.
* AI-powered data governance: automating data lineage, compliance, and access control will be a major focus.
* AI copilots for manufacturing teams: expect more AI-driven assistants that help operators troubleshoot issues in real time.

"AI isn't just about automating decisions. It's about giving human operators better insights and recommendations."

AI in manufacturing is moving beyond the hype and into real-world deployments, but the key to success is structured data, proper governance, and scalable architectures. Databricks is tackling these challenges by bringing AI and data governance together in a platform designed to handle industrial-scale workloads.

If you're interested in learning more, check out www.databricks.com.
Welcome to Episode 8 of the IT/OT Insider Podcast. Today, we're diving into real-time data, edge processing, and AI-driven analytics with Evan Kaplan, CEO of InfluxData.

InfluxDB is one of the most well-known time-series databases, used by developers, industrial companies, and cloud platforms to manage high-volume data streams. With 1.3 million open-source users and partners like Siemens, Bosch, and Honeywell, it's a major player in the Industrial DataOps ecosystem.

Evan brings a unique perspective: coming from a background in networking, cybersecurity, and venture capital, he understands both the business and technical challenges of scaling industrial data infrastructure.

In this episode, we explore:

* How time-series data has become critical in manufacturing.
* The shift from on-prem to cloud-first architectures.
* The role of open source in industrial data strategies.
* How AI and automation are reshaping data-driven decision-making.

Let's dive in.

From Networking to Time-Series Data

Evan's journey into time-series databases started in venture capital, where he met Paul Dix, the founder of InfluxData.

"At the time, I wasn't a data expert, but I saw an opportunity. Everything in the world runs on time-series data. Sensors, machines, networks: they all generate metrics that change over time."

Back then, InfluxDB was a small open-source project with about 3,000 users. Today, it has grown to 1.3 million users, powering everything from IoT devices and industrial automation to financial services and network telemetry. One of the biggest drivers of this growth? Industrial IoT.

"Over the last decade, we've seen a shift. IT teams originally used InfluxDB for monitoring servers and applications. But today, over 60% of our business comes from industrial IoT and sensor data analytics."

How InfluxDB Maps to the Industrial Data Platform Capability Model

We often refer to our Industrial Data Platform Capability Map to understand where different technologies fit into the IT/OT data landscape. So where does InfluxDB fit?

* Connectivity & Ingest: one of InfluxDB's biggest strengths. It can ingest massive amounts of data from sensors, PLCs, MQTT brokers, and industrial protocols using Telegraf, their open-source agent.
* Edge & Cloud Processing: data can be stored and analyzed locally at the edge, then replicated to the cloud for long-term storage.
* Time-Series Analytics: InfluxDB specializes in storing, querying, and analyzing time-series data, making it ideal for predictive maintenance, OEE tracking, and process optimization.
* Integration with Data Lakes & AI: many manufacturers use InfluxDB as the first stage in their data pipeline before sending data to Snowflake, Databricks, or other lakehouse architectures.

"Our strength is in real-time streaming and short-term storage. Most customers eventually downsample and push long-term data into a data lake."
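That downsampling step can be expressed as a Flux query run from Python. A sketch assuming an InfluxDB 2.x instance and the influxdb-client package; the URL, token, org, and bucket names are placeholders.

```python
# Downsampling sketch: roll raw readings up to hourly means and write them to
# a long-term bucket. URL, token, org, and bucket names are placeholders.
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="http://localhost:8086", token="<token>", org="my-org")

flux = """
from(bucket: "raw_sensors")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
  |> to(bucket: "long_term")
"""
client.query_api().query(flux)
```

Run as a scheduled task, this keeps the high-resolution data short-lived at the edge while the hourly aggregates feed the long-term lake.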
A Real-World Use Case: ju:niz Energy's Smart Battery Systems

One of the most compelling use cases for InfluxDB comes from ju:niz Energy, a company specializing in off-grid energy storage.

The Challenge:

* ju:niz needed to monitor and optimize distributed battery systems used in renewable energy grids.
* Each battery had hundreds of sensors generating real-time data.
* Connectivity was unreliable, meaning data couldn't always be sent to the cloud immediately.

The Solution:

* Each battery system was equipped with InfluxDB at the edge to store and process local data.
* Data was compressed and synchronized with the cloud whenever a connection was available.
* AI models used InfluxDB data to predict battery failures and optimize energy usage.

The Results:

* Improved energy efficiency: by analyzing real-time data, ju:niz optimized battery charging and discharging across their network.
* Reduced downtime: predictive maintenance prevented unexpected failures.
* Scalability: the system could be expanded without requiring a centralized, cloud-only approach.

"This hybrid edge-cloud model is becoming more common in industrial IoT. Not all data needs to live in the cloud. Sometimes, local processing is faster, cheaper, and more reliable."

Cloud vs. On-Prem: The Future of Industrial Data Storage

A common debate in industrial digitalization is whether to store data on-premise or in the cloud. Evan sees a hybrid approach as the future:

"Pushing all data to the cloud isn't practical. Factories need real-time decision-making at the edge, but they also need centralized visibility across multiple sites."

A few key trends:

* Cloud adoption is growing, with 55-60% of InfluxDB deployments now cloud-based.
* Hybrid architectures are emerging, where real-time data stays at the edge while historical data moves to the cloud.
* Data replication is becoming the norm, ensuring that insights aren't locked into one location.

"The most successful companies are balancing edge processing with cloud-scale analytics. It's not either-or. It's about using the right tool for the right job."

AI and the Next Evolution of Industrial Automation

AI has been a major topic in every recent IT/OT discussion, but how does it apply to manufacturing and time-series data? Evan believes AI will redefine industrial operations, but only if companies structure their data properly.

"AI needs high-quality, well-governed data to work. If your data is a mess, your AI models will be a mess too."

Some key AI trends he sees:

* AI-assisted predictive maintenance: combining sensor data, historical trends, and real-time analytics to predict failures before they happen.
* Real-time anomaly detection: AI models can identify subtle changes in machine behavior and flag potential issues (see the sketch below).
* Autonomous process control: over time, AI will move from making recommendations to fully automating factory adjustments.

"Right now, AI is mostly about decision support. But in the next five years, we'll see fully autonomous manufacturing systems emerging."
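As a flavor of what such a detector can look like on time-series data, here is a simple rolling z-score check in pandas; the window size and threshold are illustrative, not a production model.

```python
# Simple anomaly flagging: mark points more than `threshold` standard
# deviations away from the recent rolling mean. Window/threshold illustrative.
import pandas as pd

def flag_anomalies(values: pd.Series, window: int = 60, threshold: float = 3.0) -> pd.Series:
    mean = values.rolling(window).mean()
    std = values.rolling(window).std()
    z = (values - mean) / std
    return z.abs() > threshold

# Example on a synthetic signal with one spike:
signal = pd.Series([20.0] * 120)
signal.iloc[100] = 35.0
print(flag_anomalies(signal).iloc[100])  # True: the spike is flagged
```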
Final Thoughts: How Should Manufacturers Approach Data Strategy?

For companies starting their Industrial DataOps journey, Evan has a few key recommendations:

* Start with a strong data model: don't just collect data, structure it properly from day one.
* Invest in developers: the best data strategies aren't IT-led or OT-led, they're developer-led.
* Think hybrid: balance edge and cloud storage to get the best of both worlds.
* Prepare for AI: even if AI isn't a priority now, organizing your data properly will make AI adoption easier in the future.

"Industrial data is evolving fast, but the companies that structure and govern their data properly today will have a huge advantage tomorrow."

Next Steps & More Resources

Industrial DataOps is no longer just a concept; it's becoming a business necessity. Companies that embrace scalable data management and AI-driven insights will outpace competitors in efficiency and innovation.

If you want to learn more about InfluxDB and time-series data strategies, visit www.influxdata.com.
Welcome back to the IT/OT Insider Podcast. In this episode, we dive deep into industrial data modeling, manufacturing execution systems (MES), and the rise of headless data platforms with Geoff Nunan, CTO and co-founder of Rhize.

Geoff has been working in industrial automation and manufacturing information systems for over 30 years. His experience spans multiple industries, from mining and pharmaceuticals to food & beverage. But what really drove him to start Rhize was a frustration many in the industry will recognize:

"MES solutions are either too rigid or too custom-built. We needed a third option: something flexible but structured, something that could scale without requiring endless software development."

Rhize is built around that idea. It's a headless manufacturing data platform that allows companies to build custom applications on top of a standardized data backbone.

In today's discussion, we explore why MES implementations often struggle, why data modeling is key to digital transformation, and how companies can avoid repeating the same mistakes when scaling industrial data solutions. Or, in the words of Geoff:

"Data modeling in manufacturing isn't optional. You're either going to end up with the model that you planned for or the one that you didn't."

Why Geoff Co-Founded Rhize: The MES Dilemma

Geoff's journey to starting Rhize began with a frustrating experience at a wine bottling plant in Australia. The company was implementing an MES solution to track downtime, manage inventory, and integrate with ERP. Sounds simple, right? But the project quickly became complex and expensive, and despite being an off-the-shelf solution, it required a lot of custom development.

"It was a simple MES use case, yet we spent 80% of our time on the 20% of requirements that didn't fit the system. That's the reality of most MES projects."

After seeing this pattern repeat across multiple industries, Geoff realized the problem wasn't just the software; it was the entire approach.

* Off-the-shelf MES systems are often too rigid: they don't adapt well to company-specific workflows.
* Custom-built solutions are too complex: they require too much development and long-term maintenance, especially in larger corporations.
* Manufacturing data needs structure, but also flexibility: there wasn't a "headless" option that let companies build custom applications on a standardized data backbone.

So, seven years ago, Geoff and his team started Rhize, focusing on providing a flexible, open manufacturing data platform that supports modern low-code front-end applications.

"We don't provide an MES. We provide the data foundation that lets you build MES-like applications the way you need them."

How Rhize Maps to the Industrial Data Platform Capability Model

One of the key themes of our podcast series is understanding where different solutions fit into the broader industrial data ecosystem. So, how does Rhize align with our Industrial Data Platform Capability Map?

* Data Modeling: the core of Rhize. It provides a structured, standardized manufacturing data model based on ISA-95.
* Connectivity: connections via open APIs and the most important industrial protocols.
* Workflow & Event Processing: supports rules-based automation and event-driven manufacturing processes.
* Scalability: built to support multi-site deployments with a common, reusable data architecture.

"Traditional MES forces you into a rigid workflow. With Rhize, you get the structure of MES but the flexibility to adapt it to your needs."
The Importance of Data Modeling in Manufacturing

A recurring theme in our conversation is data modeling, a topic that IT teams understand well but OT teams often overlook. Geoff explains why a strong data model is critical for industrial data success:

"Any IT system lives or dies by how well its data is structured. Yet in manufacturing, we often take a 'just send the data somewhere' approach without thinking about how to organize it for long-term use."

The problem? Without a structured approach:

* Data becomes siloed: every plant has a different data format and naming convention.
* Scaling becomes impossible: a solution that works in one factory won't work in another without extensive rework.
* AI and analytics won't deliver value: without consistent, contextualized data, AI models struggle to provide reliable insights.

Geoff believes companies need to adopt structured industrial data models, and the best foundation for that is ISA-95.

"ISA-95 gives us a common language to describe manufacturing. If companies start with this as their foundation, they avoid years of painful restructuring later."

A Real-World Use Case: Gold Traceability in Luxury Watchmaking

One of Rhize's projects involved a luxury Swiss watchmaker trying to solve a complex traceability problem.

The Challenge:

* The company uses different grades of gold in its watches.
* Due to fluctuating gold prices, tracking material usage accurately was critical.
* The company needed mass-balance tracking across all factories, but each plant had different processes and equipment.

The Solution:

* They implemented Rhize as a standardized data platform across all factories.
* They modeled gold usage at a granular level, ensuring every gram was accounted for.
* By unifying data across sites, they could benchmark efficiency and reduce material waste.

The Result:

* Improved material traceability, reducing financial loss from inaccurate tracking.
* More efficient use of gold, leading to millions in savings per year.
* A scalable system, enabling future expansion to other materials and components.

"They didn't just solve a traceability problem. They built a data foundation that can now be extended to other manufacturing processes."

Why MES Projects Fail, and How to Avoid It

One of the biggest takeaways from our conversation is why MES implementations struggle. Geoff has seen companies fail multiple times before getting it right, often repeating the same mistakes:

* Overcomplicating the data model: trying to design for every possible scenario upfront.
* Lack of standardization: each site implements MES differently, making it impossible to scale.
* Not considering long-term flexibility: a system that works now may not work five years from now.

His advice?

"Companies need to move away from 'big bang' MES rollouts. Start with a strong data model, implement a scalable data platform, and build applications on top of that."

The Role of UNS in Data Governance

The Unified Namespace (UNS) has been a hot topic in recent years, but how does it fit into manufacturing data management? Geoff sees UNS as a useful tool, but not a silver bullet:

* It helps with real-time data sharing, but without a structured data model it can quickly become a mess.
* Companies should see UNS as part of their data strategy, not the entire strategy.

"If you don't start with a structured data model, UNS can become an uncontrolled stream of unstructured data. Governance is key."
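Geoff's point about structured models is easy to make concrete. Here is a minimal sketch of an ISA-95-style equipment hierarchy as a typed model: the level names follow the standard's equipment hierarchy, but the concrete classes and fields are our own illustration, not Rhize's schema.

```python
# Minimal ISA-95-style equipment hierarchy (site > area > work center > work
# unit). Classes and fields are illustrative, not Rhize's actual data model.
from dataclasses import dataclass, field

@dataclass
class WorkUnit:           # e.g. a filler or capper
    id: str
    equipment_class: str

@dataclass
class WorkCenter:         # e.g. a bottling line
    id: str
    units: list = field(default_factory=list)

@dataclass
class Area:
    id: str
    work_centers: list = field(default_factory=list)

@dataclass
class Site:
    id: str
    areas: list = field(default_factory=list)

line1 = WorkCenter("LINE-1", [WorkUnit("FILL-01", "Filler"), WorkUnit("CAP-01", "Capper")])
plant = Site("PLANT-AU", [Area("BOTTLING", [line1])])
print(plant.areas[0].work_centers[0].units[0].id)  # FILL-01
```

Because every site shares the same hierarchy, a query like "all fillers across all plants" stays meaningful everywhere, which is what makes the multi-site benchmarking in the watchmaker case possible.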
Final Thoughts

Industrial data is evolving fast, but companies that don't invest in proper data modeling will struggle to scale. Rhize is tackling this problem by providing a structured but flexible data platform, allowing manufacturers to build applications the way they need, without the limitations of traditional MES.

If you want to learn more about Rhize and their approach to industrial data, visit www.rhize.com.
Welcome to Episode 6 of our Industrial DataOps podcast series. Today, we're diving into a conversation with Joel Jacob, Principal Product Manager at Splunk, about the company's growing focus on OT, its approach to industrial data analytics, and how it fits into the broader ecosystem of industrial platforms.

Splunk is a name that's well known in IT and cybersecurity circles, but its role in industrial environments is less understood. Now, as part of Cisco, Splunk is positioning itself at the intersection of IT observability, security, and industrial data analytics. This episode is all about understanding what that means in practice.

From IT and Cybersecurity to Industrial Data

Joel's journey into Splunk mirrors the company's shift into OT. Coming from a background in robotics, automotive, and smart technology, he initially saw Splunk as a security and IT analytics company. But what he found was a growing demand from industrial customers who were already using Splunk for OT use cases.

"A lot of customers had already started using Splunk for OT, and the company realized it needed people with industrial experience to support that growing demand."

Splunk has built its reputation on handling log data, security monitoring, and IT observability. But as Joel explains, industrial data has its own challenges, and Splunk has had to adapt.

How Splunk Fits into the Industrial Data Platform Capability Map

To make sense of where Splunk fits, we look at our Industrial Data Platform Capability Map, a framework that defines the core building blocks of an industrial data strategy.

Splunk's strengths:

* Data Storage and Analytics: this is where Splunk is strongest. The platform can ingest, store, and analyze massive amounts of data, whether it's sensor data, log files, or security events.
* Data Quality and Federation: Splunk allows companies to store raw data and extract value dynamically, rather than forcing them to clean and standardize everything upfront. Its federated search capabilities also mean that data doesn't have to be centralized, a key advantage for IT/OT integration.
* Visualization and Dashboards: with Dashboard Studio, Splunk provides modern, customizable visualizations that stand out from traditional industrial software.

Where Splunk is expanding:

* Connectivity and Edge Computing: historically, getting industrial data into Splunk required external middleware. But in the last 18 months, the company has introduced an edge computing device with built-in AI capabilities, making it easier to ingest and process OT data directly.
* Edge Analytics and AI: the Splunk Edge Hub enables local AI inferencing and analytics on industrial equipment, addressing the latency and connectivity challenges that arise when relying on cloud-based models.

Joel sees this as a natural evolution:

"We know that moving all industrial data to the cloud isn't always practical. By adding edge computing capabilities, we make it easier for OT teams to process data where it's generated."
A Real-World Use Case: Energy Optimization in Cement Manufacturing

One of Splunk's key industrial customers, Cementos Argos, is a major cement producer facing a common challenge: high energy costs and carbon emissions.

The Problem:

* Cement manufacturing is one of the most energy-intensive industries in the world.
* The company needed a way to optimize kiln operations while ensuring consistent product quality.
* Traditional manual adjustments were slow and lacked real-time visibility.

The Solution:

* The company ingested data from OT systems into Splunk.
* Using the Machine Learning Toolkit, they built predictive models to optimize kiln temperature and pressure settings.
* These models were then pushed back to PLCs, allowing automated process adjustments.

The Results:

* $10 million in annual energy savings across multiple sites.
* The ability to push AI models to the edge reduced response times by 20%.
* Operators could now trust AI-generated recommendations, while still overriding changes if needed.

"The combination of machine learning and real-time process control created a true closed-loop optimization system."
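A heavily simplified sketch of what such a closed loop can look like: score live kiln signals with a previously trained model and write a clamped setpoint back to the PLC over OPC UA. The node IDs, model file, and bounds are our own illustrative assumptions (using the python-opcua client), not the actual Cementos Argos implementation.

```python
# Illustrative closed-loop sketch: read kiln signals, score a trained model,
# write a bounded setpoint back over OPC UA. IDs, model, bounds are assumed.
import joblib
from opcua import Client

model = joblib.load("kiln_model.pkl")  # trained offline, e.g. in an ML toolkit

plc = Client("opc.tcp://10.0.0.5:4840")
plc.connect()
try:
    temp = plc.get_node("ns=2;s=Kiln.Temperature").get_value()
    pressure = plc.get_node("ns=2;s=Kiln.Pressure").get_value()

    setpoint = float(model.predict([[temp, pressure]])[0])
    setpoint = max(1380.0, min(setpoint, 1480.0))  # clamp so operators stay in charge

    plc.get_node("ns=2;s=Kiln.TempSetpoint").set_value(setpoint)
finally:
    plc.disconnect()
```

The clamp is the important design choice: the model only nudges the process within operator-approved bounds, which is how recommendations earn trust before automation widens.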
Federated Search: A Different Approach to Industrial Data

One of Splunk's unique contributions to industrial data management is federated search. Unlike traditional platforms that require all data to be centralized, Splunk allows companies to analyze data across multiple sources in real time. Joel explains the shift in thinking:

"Most industrial data strategies assume you need a single source of truth. But in reality, data lives in multiple places, and moving it all is expensive. With federated search, we can analyze data wherever it resides, whether it's on-prem, in the cloud, or at the edge."

This is a major departure from the "data lake" approach that many industrial companies have pursued. Instead of trying to move and harmonize all data upfront, Splunk's model is about leaving data where it makes the most sense and analyzing it dynamically.

How IT and OT Collaboration is Changing

Bridging the IT/OT divide has been a theme across this podcast series, and Splunk's approach to security and data federation provides a unique perspective on this challenge. Joel shares some key insights on what makes collaboration successful:

* Security is often the bridge. Since IT teams already use Splunk for security monitoring, they are more open to OT data integration when it's part of a broader cybersecurity strategy.
* OT needs tools that don't slow them down. Engineers don't want to wait for IT approval to test new models. That's why Splunk's edge device was designed to be easily deployable by OT teams.
* The next generation of engineers is more IT-savvy. Younger engineers entering the workforce are more comfortable with IT tools and cloud environments, making collaboration easier.

One of the most interesting points was how Splunk leverages its Cisco partnership to expand into OT environments:

"Cisco has an enormous footprint in industrial networking. By running analytics on Cisco switches and edge devices, we can make OT data integration seamless."

The Role of AI in Industrial Data

Like many companies, Splunk is exploring the role of AI and generative AI in industrial environments. One of the most promising areas is automating data analysis and dashboard creation. Joel shares how this is already happening:

* AI-generated dashboards: engineers can simply describe what they want in natural language, and Splunk's AI generates the necessary queries and visualizations.
* Low-code model deployment: instead of manually writing Python scripts, users can export machine learning models with a single click.
* Multimodal AI: by combining sensor data, image recognition, and sound analysis, AI models can detect patterns that human operators might miss.

"In the next few years, AI will make it dramatically easier to analyze and visualize industrial data, without requiring deep programming expertise."

Final Thoughts

Splunk's journey into OT is a great example of how traditional IT platforms are adapting to the realities of industrial environments. While the company's core strength remains in data analytics and security, its expansion into edge computing and OT integration is opening up new possibilities for manufacturers.

If you want to learn more about how Splunk is evolving in the OT space, check out their website: www.splunk.com.
How IT and OT Collaboration is Changing

Bridging the IT/OT divide has been a theme across this podcast series, and Splunk’s approach to security and data federation offers a distinctive perspective on that challenge.

Joel shares some key insights on what makes collaboration successful:
* Security is often the bridge. Since IT teams already use Splunk for security monitoring, they are more open to OT data integration when it is part of a broader cybersecurity strategy.
* OT needs tools that don’t slow them down. Engineers don’t want to wait for IT approval to test new models. That’s why Splunk’s edge device was designed to be easily deployable by OT teams.
* The next generation of engineers is more IT-savvy. Younger engineers entering the workforce are more comfortable with IT tools and cloud environments, which makes collaboration easier.

One of the most interesting points was how Splunk leverages its Cisco partnership to expand into OT environments:

"Cisco has an enormous footprint in industrial networking. By running analytics on Cisco switches and edge devices, we can make OT data integration seamless."

The Role of AI in Industrial Data

Like many companies, Splunk is exploring the role of AI and generative AI in industrial environments. One of the most promising areas is automating data analysis and dashboard creation.

Joel shares how this is already happening:
* AI-generated dashboards: engineers describe what they want in natural language, and Splunk’s AI generates the necessary queries and visualizations.
* Low-code model deployment: instead of manually writing Python scripts, users can export machine learning models with a single click.
* Multimodal AI: by combining sensor data, image recognition, and sound analysis, AI models can detect patterns that human operators might miss.

"In the next few years, AI will make it dramatically easier to analyze and visualize industrial data—without requiring deep programming expertise."

Final Thoughts

Splunk’s journey into OT is a great example of how traditional IT platforms are adapting to the realities of industrial environments. While the company’s core strength remains data analytics and security, its expansion into edge computing and OT integration is opening up new possibilities for manufacturers.

If you want to learn more about how Splunk is evolving in the OT space, check out their website: www.splunk.com.

Welcome to another episode of the IT/OT Insider Podcast. In this special series on Industrial DataOps, we’re diving into the world of real-time industrial data, edge computing, and scaling digital transformation. Our guest today is John Younes, Co-founder and COO of Litmus, a company that has been at the forefront of industrial data platforms for the past 10 years.

Litmus is a name that keeps popping up when we talk about bridging OT and IT, democratizing industrial data, and making edge computing scalable. But what does that actually mean in practice? And how does Litmus help manufacturers standardize and scale their industrial data initiatives across multiple sites?

That’s exactly what we’re going to explore today.

Litmus, you say?

John introduces Litmus as an Industrial DataOps platform, designed to be the industrial data foundation for manufacturers. The goal? To make industrial data usable, scalable, and accessible across the entire organization.

"We help manufacturers connect to any type of equipment, normalize and store data locally, process it at the edge, and then integrate it into enterprise systems—whether that’s cloud, AI platforms, or business applications."

At the core of Litmus’ offering is Litmus Edge, a factory-deployable edge data platform. It allows companies to:
* Connect to industrial equipment using built-in drivers.
* Normalize and store data locally, enabling real-time analytics and processing.
* Run AI models and analytics workflows at the edge for on-premise decision-making.
* Push data to cloud platforms like Snowflake, Databricks, AWS, and Azure.

For enterprises with multiple factories, Litmus Edge Manager provides a centralized way to manage and scale deployments, allowing companies to standardize use cases across multiple plants.

"We don’t just want to collect data. We want to help companies actually use it—to make better decisions and improve efficiency."
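To illustrate what "connect, normalize, integrate" means mechanically, here is a small dependency-free sketch. The driver payloads, tag mappings, and `push_to_cloud` stub are hypothetical stand-ins, not Litmus APIs.

```python
# A minimal sketch of the connect -> normalize -> publish flow.
# Tag names, drivers, and the push function are illustrative, not Litmus APIs.
from datetime import datetime, timezone

# Raw payloads as two different "drivers" might deliver them
RAW = [
    {"src": "siemens_s7", "DB10.TEMP_1": 78.4, "DB10.SPEED": 1450},
    {"src": "modbus",     "hr40001": 79.1, "hr40002": 1438},
]

# Per-driver mapping into one canonical tag model
TAG_MAP = {
    "siemens_s7": {"DB10.TEMP_1": "line1/oven/temperature_c",
                   "DB10.SPEED":  "line1/oven/speed_rpm"},
    "modbus":     {"hr40001": "line2/oven/temperature_c",
                   "hr40002": "line2/oven/speed_rpm"},
}

def normalize(raw: dict) -> list[dict]:
    """Rewrite vendor-specific registers into canonical, timestamped records."""
    ts = datetime.now(timezone.utc).isoformat()
    mapping = TAG_MAP[raw["src"]]
    return [{"tag": tag, "value": raw[reg], "ts": ts}
            for reg, tag in mapping.items()]

def push_to_cloud(record: dict) -> None:
    """Stand-in for a cloud connector (Snowflake, Databricks, AWS, Azure...)."""
    print("publish:", record)

for raw in RAW:
    for record in normalize(raw):
        push_to_cloud(record)
```

The design point is the mapping table: once every source is rewritten into one canonical tag model, everything downstream (KPIs, AI, cloud integration) can be written once instead of per vendor.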
How Litmus Maps to the Industrial Data Platform Capability Model

We always refer to our Industrial Data Platform Capability Map to understand how different technologies fit into the broader IT/OT data landscape. So where does Litmus fit in?
* Connectivity → One of Litmus’ core strengths. The platform connects to PLCs, SCADA, MES, historians, and IoT sensors out of the box.
* Edge Compute and Store → Litmus processes and optionally stores data locally before sending it to the cloud, reducing costs and improving real-time responsiveness.
* Data Normalization & Contextualization → The platform includes a data modeling layer that ensures data is structured and usable for enterprise applications.
* Analytics & AI → Companies can compute KPIs like OEE, asset utilization, and energy consumption directly at the edge.
* Scalability & Management → With Litmus Edge Manager, enterprises can deploy and scale their data infrastructure across dozens of plants without rebuilding everything from scratch.

John explains:

"The biggest challenge in industrial data isn’t just connecting things—it’s making that data usable at scale. That’s why we built Litmus Edge Manager to help companies replicate use cases across their entire footprint."

A Real-World Use Case: Standardizing OEE Across 35 Plants

One of the most compelling Litmus deployments comes from a large European food & beverage manufacturer with 50+ factories.

The Challenge:
* The company had grown through acquisitions, meaning each factory had different equipment, different systems, and different data formats.
* They wanted to standardize OEE (Overall Equipment Effectiveness) across all plants to benchmark performance and identify inefficiencies.
* They needed a way to deploy an Industrial DataOps solution at scale, without taking years to implement.

The Solution:
* The company deployed Litmus Edge in 35 factories within 12 to 18 months.
* They standardized KPIs like OEE across all plants, providing real-time insight into performance (the formula is worked through in the first sketch below).
* By filtering and compressing data at the edge, they cut cloud storage costs by 90% (the second sketch below shows the filtering pattern).
* They also introduced energy monitoring, identifying unused machines running during non-production hours and saving roughly 4% of energy per plant.

The Impact:
* Faster deployment: the project was rolled out by a small team, proving that scalability in industrial data is possible.
* Cost savings: less unnecessary cloud storage and lower energy usage translated into significant financial gains.
* Enterprise-wide visibility: for the first time, they could compare OEE across all plants and identify best practices for process optimization.

"With Litmus, they didn’t just deploy a one-off use case. They built a scalable, repeatable data foundation that they can expand over time."
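OEE is worth unpacking, because its simplicity is exactly what makes it a good first cross-plant KPI: OEE = Availability × Performance × Quality. Here is the first sketch referenced above, a worked example; the shift numbers are illustrative, not the customer’s.

```python
# Worked example of the standard OEE formula the plants were aligned on:
# OEE = Availability x Performance x Quality. All numbers are illustrative.
def oee(planned_min: float, downtime_min: float,
        ideal_cycle_s: float, total_count: int, good_count: int) -> dict:
    run_min = planned_min - downtime_min
    availability = run_min / planned_min                       # uptime share
    performance = (ideal_cycle_s * total_count) / (run_min * 60)  # speed share
    quality = good_count / total_count                         # good-part share
    return {"availability": availability, "performance": performance,
            "quality": quality, "oee": availability * performance * quality}

# One 8-hour shift: 45 min of stops, 1.2 s ideal cycle, 19,000 made, 18,500 good
result = oee(480, 45, 1.2, 19_000, 18_500)
print({k: round(v, 3) for k, v in result.items()})
# -> {'availability': 0.906, 'performance': 0.874, 'quality': 0.974, 'oee': 0.771}
```

Because each factor is a plain ratio, the hard part across 35 acquired plants is not the math but agreeing on the inputs: what counts as planned time, downtime, and a good part.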
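And here is the second sketch referenced above: the kind of deadband-plus-heartbeat filter that keeps most samples at the edge. The signal and parameters are invented; the 90% figure comes from the deployment itself, not from this toy.

```python
# Hedged sketch of edge filtering: forward a sample only when it moves more
# than a deadband, or when a heartbeat interval has passed. Parameters are
# illustrative, not the customer's actual configuration.
def deadband_filter(samples, deadband=0.5, heartbeat=60):
    """samples: iterable of (t_seconds, value); yields samples worth sending."""
    last_sent_t, last_sent_v = None, None
    for t, v in samples:
        if (last_sent_v is None
                or abs(v - last_sent_v) > deadband
                or t - last_sent_t >= heartbeat):
            last_sent_t, last_sent_v = t, v
            yield (t, v)

# A slowly drifting signal sampled every second: most points never leave the edge
raw = [(t, 20.0 + 0.001 * t) for t in range(600)]
sent = list(deadband_filter(raw))
print(f"kept {len(sent)} of {len(raw)} samples")  # a large reduction, in the
# spirit of the 90% storage saving described above
```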
The Challenge of Scaling Industrial Data

One of the biggest barriers to industrial digitalization is scalability. IT systems are designed to scale effortlessly, but factory environments are different.

John explains:

"Even within the same factory, two production lines might be completely different. How do you deploy a use case that works across all sites without starting from scratch every time?"

His answer? A standardized but flexible approach.
* 80% of a deployment can be standardized.
* 20% requires last-mile configuration to account for machine variations.
* A central management platform ensures that scaling doesn’t require an army of engineers.

"The key is having a platform that adapts to different machines and processes—without forcing companies to custom-build everything for each site."

Data Management: The Next Big IT/OT Challenge

As industrial companies push for enterprise-wide data strategies, data management is becoming a bigger issue.

John shares his take:

"IT teams have been doing data management for years. But in OT, data governance is still a new concept."

Some of the biggest challenges he sees:
* Legacy data formats and siloed systems make data hard to standardize.
* Different plants use different naming conventions, making data aggregation difficult.
* Ownership is unclear: who is responsible for defining the data model? IT? OT? Corporate?

To address this, Litmus introduced a Unified Namespace (UNS) solution, allowing companies to enforce data models from the enterprise level down to individual assets.

"We’re seeing more companies set up dedicated data teams—because without good data management, AI and analytics won’t work properly."

The Role of AI in Industrial Data

AI is the hottest topic in manufacturing right now, but how does it actually fit into industrial data workflows?

John sees two major trends:
* AI-powered analytics at the edge. Instead of just sending raw data to the cloud, companies are running AI models directly on edge devices. Example: AI detects machine anomalies and recommends preventive actions to operators before failures occur.
* AI-assisted deployment and automation. Litmus is using AI to simplify Industrial DataOps by automating edge deployments across multiple sites. Example: instead of manually configuring devices, users can type a command like "Deploy Litmus Edge to 30 plants with Siemens drivers", and the system automates the entire process.

"AI won’t replace humans on the shop floor anytime soon. But it will make deploying, managing, and using industrial data significantly easier."

Final Thoughts

Industrial DataOps is no longer just a technical experiment; it is becoming a business necessity. Companies that don’t embrace scalable data management and AI-driven insights risk falling behind their competitors.

Litmus is tackling the problem head-on by providing a standardized but flexible way to ingest, process, and scale industrial data.

If you want to learn more about Litmus and their approach to Industrial DataOps, check out their website: www.litmus.io.

Stay Tuned for More!

Subscribe to our podcast and blog to stay updated on the latest trends in Industrial Data, AI, and IT/OT convergence.

🚀 See you in the next episode!

YouTube: https://www.youtube.com/@TheITOTInsider

Disclaimer: The views and opinions expressed in these interviews are those of the interviewees and do not necessarily reflect the official policy or position of The IT/OT Insider. This content is provided for informational purposes only and should not be seen as an endorsement by The IT/OT Insider of any products, services, or strategies discussed. We encourage our readers and listeners to consider the information presented and make their own informed decisions.