Tech Transformed

Author: EM360Tech


Description

Expert-driven insights and practical strategies for navigating the future of AI and emerging technologies in business.

Led by an ensemble cast of expert interviewers offering in-depth analysis and practical advice to make informed decisions for your enterprise.
399 Episodes
As companies rethink how they provide customer experiences (CX), a new form of AI capability, agentic AI, is quickly changing how work is accomplished in contact centres. In a recent episode of the Tech Transformed podcast, Dialpad Lead Product Manager Calvin Hohener sits down with host Jon Arnold, Principal at J Arnold & Associates. They discuss the transition from legacy chatbots to more autonomous agents capable of completing tasks and improving customer interactions.

The conversation highlights the importance of understanding the technology's impact on enterprise architecture, the need for clean data, and the strategic implications for C-level executives. Hohener emphasises the importance of starting with clear use cases and working closely with vendors to maximise the potential of AI in business operations.

From Legacy Chatbots to Agentic AI

Most people have used chatbots and found them lacking. Hohener explains why: earlier conversational AI was based on retrieval-augmented generation (RAG). These systems could take user input, search a knowledge base or the internet, and provide an answer. This was helpful for customer service queries, but limited.

"Previous AI models could retrieve and return information, but now we're moving into a new phase with agentic AI." Agentic AI can take action rather than just providing information.

For AI agents to succeed, organisations must first organise their data. "How your internal knowledge is structured is crucial. Even if the data is unorganised, you need to know its location and ensure it's clean," stated Hohener.

Agentic systems depend on internal knowledge, including knowledge base articles, CRM notes, and process documentation. If this foundation is disordered, the agent's output will not be reliable. This isn't about achieving ideal data cleanliness from the start; it's about knowing what information exists, where it is, and whether it can be trusted. If an AI agent bases its decisions on outdated, conflicting, or incomplete content, it will struggle to perform tasks well, regardless of how sophisticated the model is. Enterprises need at least basic clarity about which systems hold which knowledge, who is responsible for them, and whether there is consistency across sources.

Hohener noted that organisations often overlook how quickly conflicting information can undermine an otherwise well-designed agent. A single outdated procedure or mismatched policy in a knowledge repository can lead an AI to produce incorrect results or halt during workflow execution. Keeping internal content clean, deduplicated, and consistent gives the agent a reliable, valid source. This reliability becomes crucial when AI starts taking meaningful actions, not just providing answers.

By focusing on data readiness early, enterprises not only reduce deployment obstacles but also set the stage for scaling agentic AI across more complex processes. In many ways, preparing data isn't just a technical task; it's an organisational one.

How Human Agents Work with AI Agents

The Dialpad Lead Product Manager noted that human roles, too, will evolve as agentic AI enters the contact centre. For instance, human agents will take on more of an advisory role, reviewing conversation traces and helping adjust the models. Instead of...
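To make the distinction above concrete, here is a minimal, illustrative Python sketch of the two styles the episode contrasts: a RAG-style responder that only retrieves and returns information, and an agentic handler that can also complete a task. This is not Dialpad's implementation; the knowledge base, action names, and order ID are hypothetical, and the "retrieval" is a simple keyword match.

```python
# Toy comparison of a retrieval-only (RAG-style) bot and an agent that can act.

KNOWLEDGE_BASE = {
    "refund policy": "Refunds are available within 30 days of purchase.",
    "shipping times": "Standard shipping takes 3-5 business days.",
}

ACTIONS = {
    "issue_refund": lambda order_id: f"Refund issued for order {order_id}",
}

def rag_answer(query: str) -> str:
    """Retrieve the best-matching article and return it as the answer."""
    for topic, article in KNOWLEDGE_BASE.items():
        if topic.split()[0] in query.lower():
            return article
    return "Sorry, I couldn't find anything on that."

def agentic_handle(query: str, order_id: str) -> str:
    """Answer like RAG, but execute a task when the request calls for one."""
    if "refund" in query.lower():
        # An agent acts on the request instead of only describing the policy.
        return ACTIONS["issue_refund"](order_id)
    return rag_answer(query)

if __name__ == "__main__":
    print(rag_answer("What is your refund policy?"))           # information only
    print(agentic_handle("Please refund my order", "A-1042"))   # takes action
```

The sketch also shows why the data-readiness point matters: both paths are only as good as the knowledge and actions they are wired to.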
Client service teams are at a breaking point. Margins are shrinking, demand keeps rising, and much of the day is consumed by work that doesn't move the needle. As a result, skilled people often spend hours reconciling spreadsheets, re-entering the same data across multiple systems, and chasing updates; time that should be spent on the work clients actually pay for. Every hour lost to manual admin is an hour of revenue slipping away. In this day and age, that's a hit no business can afford.

AI isn't just a buzzword here; it's a practical lever. It can cut through the repetitive tasks that slow teams down, surface the information they need instantly, and free them to focus on high-value work. The companies winning aren't replacing staff; they're removing the obstacles that keep people from doing their best. In a world where speed and accuracy matter more than ever, ignoring that shift isn't optional.

In the latest episode of Tech Transformed, hosted by Christina Stathopoulos, founder of Dare to Data, Daniel Mackey, CEO of Teamwork.com, discusses how AI is reshaping the daily operations of client service teams. From automating repetitive admin tasks to surfacing critical information faster, AI is giving teams the bandwidth to focus on the work that truly drives value for clients.

AI and Business Transformation in Practice

During the conversation, Mackey highlighted how AI is reshaping business operations, emphasising efficiency and productivity rather than job displacement. "AI has transformed our company," he noted, pointing to tangible improvements across workflow and project management. Teams are now able to focus on strategic initiatives, leaving repetitive tasks to intelligent systems.

The Teamwork.com CEO also shared a recent example from a government agency that integrated AI into its processes. By automating routine administrative work, the agency experienced better resource allocation and improved project outcomes. "They're more efficient, higher quality," Mackey said. "AI allows them to focus on the bigger parts of the business."

Rethinking Productivity and Client Delivery

One of the challenges in the industry is that most AI features are added onto existing tools that weren't designed for client services. Mackey discussed how TeamworkAI addresses this gap. Built into a platform designed specifically for managing client services end-to-end, TeamworkAI connects projects, people, and profits in one system.

By integrating AI directly into client delivery workflows, organisations can streamline project management, reduce manual reporting, and ensure that technology enhances rather than disrupts service delivery. This approach allows businesses to use technology strategically, rather than simply automating isolated tasks.

Technology and the Future of Work

The discussion also touched on the broader impact of AI on traditional business models. Organisations that adopt AI thoughtfully can improve their internal processes, freeing employees from repetitive tasks and enabling them to contribute to higher-value projects. Mackey emphasised that the goal isn't just automation; it's profitable client delivery. AI can unlock both time and insight, allowing businesses to prioritise the most impactful work.

AI is redefining how businesses allocate resources, manage projects, and deliver value to clients. By eliminating repetitive work and connecting projects,...
With the rapid evolution of generative AI, customer experience (CX) is evolving rapidly, too. In a recent episode of the Tech Transformed podcast, Mike Gozzo, Chief Product and Technology Officer at Ada, sat down with host Christina Stathopoulos, Founder of Dare to Data. They talked about how generative AI is changing business-to-customer interactions.

"I view it not just as a business opportunity, but we are here to solve a problem that has existed as long as commerce has," Gozzo said. He emphasised that AI's goal isn't just efficiency. It is about building trust and clearly understanding customer needs to allow productive interactions.

Artificial intelligence, he noted, "has really enabled what used to be much more costly to happen at scale." The Ada Chief Product and Technology Officer pointed out that the best customer experiences are highly personalised, comparing them to arriving at a luxury hotel where the staff already know your name, even on your first visit. He noted that modern AI aims to make such experiences, which were once only for a select few, common for everyone.

Looking to the future, Gozzo tells Stathopoulos he believes generative AI will foster more engagement between customers and brands. "If I consider the trend, I think we will have much more natural, personalised, and effortless interactions than ever before because of this technology."

Gen AI's Impact on Customer Data

When discussing operational challenges, especially regarding customer data management, the guest speaker stressed quality over quantity. Gozzo explained that in most AI set-ups, "the real value lies not in the data you've collected, but in the understanding of how your business runs, operates, and the people doing the tasks you want to automate."

Governance, Human Orchestration & the Future of AI

Beyond personalisation, AI should be implemented responsibly and monitored closely. "The first thing with any AI deployment is to avoid thinking of it as software you buy, deploy, and forget. They need ongoing monitoring, engagement, and maintenance," Gozzo tells Stathopoulos. He suggested thorough testing processes and collaboration with specialised companies like AIUC, which verify AI systems against common risks. "These tests need to happen quarterly or yearly because the underlying models change so rapidly," he added.

In addition to regularly conducting AI checks, the human element is also critical. AI might automate up to 80% of routine tasks, but humans will still play a vital role. Gozzo described the human role as that of an orchestrator, managing teams that include both humans and AI systems and effectively delegating tasks between them.

Finally, Gozzo talked about AI's immediate impact on customer experience. "Our leading customers' AI agents are outperforming humans. They deliver higher-quality customer service experiences, and customers prefer interacting with their AI." The key measure, he said, is the positive effect on business growth and customer lifetime value.

The Chief Product and Technology Officer's parting advice to IT decision makers is: "The people on your team know how to make AI work. Capture their insights. Don't treat this as a technology project. The technologist will not dominate the next decade. This is about business leaders and experts doing the heavy lifting."

At the core of generative and agentic AI, Gozzo...
The era of 3G is ending. For many industrial businesses, smart infrastructure systems, remote device management, and IoT connectivity rely on networks that are now being phased out globally. The question isn't if, but when, your operations could be disrupted.

In this episode of Tech Transformed, Trisha Pillay speaks with Jana Vidis, Business Development Manager at IFB, about the worldwide 3G sunset, what it means for enterprises, and how proactive planning can prevent costly disruptions. They explore the reasons behind the transition to 4G and 5G, the impact on various industries, and the strategies organisations can implement to assess their reliance on legacy devices.

Why the 3G Sunset Matters

3G networks have powered connectivity for decades, offering wide coverage and reliability. But as global operators move to 4G and 5G, maintaining 3G is no longer sustainable. Carriers are discontinuing services, and support is dwindling, leaving legacy devices vulnerable to:
Operational downtime
Inconsistent performance
Increased security risks

Jana emphasises: "Have a good understanding of what devices you have. Work with IT partners to prepare for future changes. Plan your transition and act before disruption hits."

Jana also stressed the importance of understanding current technology deployments, planning for transitions, and future-proofing investments to avoid disruptions. The conversation highlights the need for proactive measures in adapting to technological advancements and ensuring operational continuity.

A Global Timeline

The transition is already well underway across multiple regions:
North America: AT&T, Verizon, and T-Mobile 3G networks discontinued in February 2022; Canada's shutdown begins in early 2025.
Europe: Most countries, including the UK, Germany, Hungary, and Greece, will complete shutdowns by the end of 2025.
Asia: Japan phased out 3G in 2022, Singapore in July 2024, and India plans completion by the end of 2025.
Africa: South Africa started in July 2025; other countries are transitioning more slowly.
South America: Providers like Telefonica, Entel, and Claro completed shutdowns in 2022–2023.
Middle East: Oman started shutting down in July 2024; Zain Bahrain in Q4 2022; Kuwait, Iran, and Jordan are following.

Industrial devices still using 3G must transition now to avoid operational disruption. From smart infrastructure to remote IoT systems, legacy devices left unaddressed can cause downtime, inconsistent performance, and increased security risks.

Takeaways
3G networks are being phased out to enable 4G and 5G development.
Businesses must assess their reliance on 3G devices before shutdowns.
Legacy devices can
Tech leaders are often led to believe that they have "full-stack observability." The MELT framework (metrics, events, logs, and traces) became the industry standard for visibility. However, Robert Cowart, CEO and Co-Founder of ElastiFlow, believes that the MELT framework leaves a critical gap. In the latest episode of the Tech Transformed podcast, host Dana Gardner, President and Principal Analyst at Interarbor Solutions, sits down with Cowart to discuss network observability and its vital role in achieving full-stack observability.

The speakers discuss the limitations of legacy observability tools that focus on MELT and how this leaves a significant and dangerous blind spot. Cowart emphasises the need for teams to integrate network data enriched with application context to enhance troubleshooting and security measures.

What's Beyond MELT?

Cowart explains that when it comes to the MELT framework, meaning "metrics, events, logs, and traces, think about the things that are being monitored or observed with that information. This alludes to servers and applications."

"Organisations need to understand their compute infrastructure and the applications they are running on. All of those servers are connected to networks, and those applications communicate over the networks, and users consume those services again over the network," he added.

"What we see among our growing customer base is that there's a real gap in the full-stack story that has been told in the market for the last 10 years, and that is the network."

This lack of insight results in a constant blind spot that delays problem-solving, hides user-experience issues, and leaves organisations vulnerable to security threats. Cowart notes that while performance monitoring tools can identify when an application call to a database is slow, they often don't explain why.

"Was the database slow, or was the network path between them rerouted and causing delays?" he questions. "If you don't see the network, you can't find the root cause."

The outcome is longer troubleshooting cycles, isolated operations teams, and an expensive "blame game" among DevOps, NetOps, and SecOps.

ElastiFlow approaches it differently. The company extends observability to network connectivity: understanding who is communicating with whom and how that communication behaves. This data not only speeds up performance insights but also acts as a "motion detector" within the organisation. Monitoring east-west, north-south, and cloud VPC flow logs helps organisations spot unusual patterns that indicate internal threats or compromised systems used for launching external attacks.

"Security teams are often good at defending the perimeter," Cowart says. "But once something gets inside, visibility fades. Connectivity data fills that gap."

Isolated Monitoring to Unified Experience

Cowart believes that observability can't just be about green lights...
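The "motion detector" idea above can be illustrated with a very small sketch: compare observed flow records against a baseline of who normally talks to whom, and flag anything outside it. This is a toy example in plain Python, not ElastiFlow's product; the hosts, byte counts, and baseline are hypothetical.

```python
from collections import defaultdict

# Hypothetical flow records: (source host, destination host, bytes transferred).
FLOWS = [
    ("web-01", "db-01", 12_000),
    ("web-02", "db-01", 11_500),
    ("web-01", "db-01", 12_300),
    ("build-agent", "hr-fileshare", 900_000),   # unusual east-west traffic
]

# Baseline of expected conversations (who normally communicates with whom).
EXPECTED_PAIRS = {("web-01", "db-01"), ("web-02", "db-01")}

def unusual_flows(flows, expected):
    """Flag any conversation that falls outside the expected communication pattern."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[(src, dst)] += nbytes
    return {pair: total for pair, total in totals.items() if pair not in expected}

if __name__ == "__main__":
    for (src, dst), nbytes in unusual_flows(FLOWS, EXPECTED_PAIRS).items():
        print(f"alert: unexpected flow {src} -> {dst} ({nbytes} bytes)")
```

Real deployments would build the baseline statistically from historical flow data rather than hard-coding it, but the principle of connectivity data as an internal tripwire is the same.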
Enterprises are discovering that the first wave of cloud adoption didn't simplify operations. It created flexibility, but it also introduced fragmentation, rising costs, and skills gaps that now make AI adoption harder to manage.

In this episode of Tech Transformed, analyst and host Dana Gardner speaks with two leaders from across the IBM portfolio: Maria Bracho, CTO for the Americas at Red Hat, and Tyler Lynch, Field CTO for the HashiCorp product suite. They discuss how organisations can move from scattered cloud operations to a unified, automated model that supports AI securely and at scale. The conversation covers the pressures leaders face today, the role of automation, and the skills and operating model changes required as AI becomes core to enterprise strategy.

What you'll learn
Why tool sprawl and shrinking teams are increasing operational risk
How AI amplifies gaps in data, security, and processes
What skills and operating model changes CIOs must prioritise
Why hybrid cloud is essential for multi-model AI workloads
The growing importance of automation in cloud and AI delivery
How poor data hygiene can rapidly increase AI costs
Practical steps for building secure, reliable AI operations

Key insights from the discussion

Cloud complexity is accelerating
Most organisations now run "a sprawl of tool sets and environments," Bracho notes, often without the people or standardized processes to manage them. While cloud created opportunities, the operational overhead has increased.

AI raises the stakes
Training, tuning, and inference often run in different environments, each with separate performance and security requirements. Bracho describes AI as "the killer workload," reinforcing the need for robust hybrid architectures.

Skills gaps slow progress
Lynch highlights the disconnect between AI teams and production engineering teams. Without alignment, model deployment becomes slow and risky, echoing findings from the HashiCorp 2025 Cloud Complexity Report, where most organizations say platform and security teams are not working in sync.

AI exposes underlying weaknesses
"AI is not going to solve complexity; it will amplify what you already have," Bracho says. But with structured processes and automation, AI can reduce operator workload and help teams adopt best practices faster.

Automation is becoming essential
The Cloud Complexity Report shows that more than half of enterprises see automation as key to unlocking cloud innovation. With the foundations already laid, AI can accelerate progress by improving consistency and reducing manual effort.

Modernization is continuous
Both guests emphasise that AI success depends on long-term investment in people, operating rhythms, and security. Consulting can help organizations start strong, but lasting results come from internal alignment and disciplined execution.

Episode chapters
00:00 Navigating cloud complexity
08:11 Skills and operating model challenges
15:13 Automation for cloud and AI productivity
21:48 How consulting accelerates AI readiness
24:10 Final guidance for CIOs

About...
The semiconductor industry is at an inflection point. As systems become more intelligent, connected, and software-defined, chip design is growing too complex for humans alone. Advances in electronic design automation are reshaping how silicon is built and verified, enabling faster, smarter, and more reliable innovation from data centers to edge devices.

How AI Is Changing EDA and Chip Design

In the latest episode of Tech Transformed, host John Santaferraro speaks with Dr. Thomas Andersen, Vice President of AI and Silicon Innovation at Synopsys, about the real-world impact of AI in chip design. Together, they explore how AI and automation are redefining EDA, how generative AI is accelerating design efficiency, and what the Synopsys acquisition of Ansys means for the future of simulation and system-level integration.

As Dr. Andersen explains, "AI is transforming EDA. Synopsys leads in silicon design, and the Ansys acquisition expands our capabilities across multiphysics simulation and system optimization."

From Silicon to Systems

The integration of complex hardware and software has become one of the greatest challenges in semiconductor and OEM innovation. Traditional sequential development, where software waits for hardware, often causes delays and missed targets. Advances in EDA tools and virtual prototyping now enable engineers to initiate software design months before silicon is finalised, thereby accelerating bring-up and enhancing collaboration across the supply chain.

"Generative AI enables more efficient design," says Andersen. "AI reshapes engineering workflows, but human expertise remains essential."

The result is faster time-to-market, enhanced design verification, and greater overall system reliability.

Listen to the full conversation on the Tech Transformed podcast to discover how Synopsys is advancing electronic design automation, improving engineering workflows and chip design from silicon to systems.

For more insights, follow Synopsys:
X: @Synopsys
Instagram: @synopsyslife
Facebook: https://www.facebook.com/Synopsys/
LinkedIn: https://www.linkedin.com/company/synopsys/

Takeaways
AI is transforming EDA and chip design by automating complex processes.
Synopsys is a leader in silicon-to-systems design, providing critical software for chipmakers.
The acquisition of Ansys expands Synopsys' capabilities beyond EDA.
Generative AI is enabling more efficient and adaptable chip design.
AI-powered observability is reshaping engineering workflows.
The complexity of chip design has increased, requiring advanced tools and automation.
Human expertise remains essential in chip design, despite advances in automation.
EDA tools simulate chip...
Driving Enterprise Innovation with AI and Strong CI/CD Foundations

As enterprises push to deliver software faster and more efficiently, continuous integration and continuous delivery (CI/CD) pipelines have become central to modern engineering. With increasing complexity in builds, tools, and environments, the challenge is no longer just speed; it's also about maintaining flow, consistency, and confidence in every release.

In this episode of Tech Transformed, host Dana Gardner joins Arpad Kun, VP of Engineering and Infrastructure at Bitrise, to explore how solid CI/CD foundations can drive innovation and enable enterprises to harness AI in more practical, impactful ways. Drawing on findings from the Bitrise Mobile DevOps Insights Report, Kun shares how teams are optimising mobile delivery pipelines to accelerate development and support intelligent automation at scale.

Complexity of Continuous Integration

"Continuous integration pipelines are becoming more complex," says Kun. "Build times are decreasing despite increasing complexity." Faster compute and caching solutions are helping offset these pressures, but only when integrated into a cohesive CI/CD platform that can handle the rising demands of modern software delivery.

A mature CI/CD environment creates stability and predictability. When developers trust their pipelines, they iterate faster and with less friction. As Kun notes, "A robust CI/CD platform reduces anxiety around releases." Frequent, smaller iterations deliver faster feedback, shorten release cycles, and often improve app ratings, especially in the fast-paced world of mobile and cross-platform development.

AI Ambitions with Engineering Reality

It's easy to become swept up in the potential of AI without considering whether existing foundations can support it. Many development environments are not yet equipped to handle the iterative, data-intensive nature of AI-powered software engineering. Without scalable CI/CD pipelines, teams risk encountering bottlenecks that can cancel out the potential benefits of AI.

To truly drive innovation, enterprises must align their AI ambitions with robust automation, strong observability, and disciplined engineering practices. A well-designed CI/CD platform allows teams to integrate AI responsibly, accelerating testing, improving deployment accuracy, and maintaining agility even as complexity grows.

Takeaways
Continuous integration pipelines are becoming more complex.
Build times are decreasing despite increasing complexity.
Faster computing and caching are key to improving delivery speed.
Flaky tests have increased significantly, causing inefficiencies.
Monitoring and isolating flaky tests can improve build success rates.
Maintaining flow for engineers is crucial for productivity.
A robust CI/CD platform reduces anxiety around releases.
Frequent iterations lead to faster feedback and improved app ratings.
Cross-platform development is on the rise, especially with React Native.
The future of software development will be influenced by AI.

For more insights, follow Bitrise:
X: @bitrise
Instagram: @bitrise.io
Facebook:
For years, observability sat quietly in the background of enterprise technology: an operational tool for engineers, something to keep the lights on and costs down. As systems became more intelligent and automated, observability stepped into a far more strategic role. It now acts as the connective tissue between business intent and technical execution, helping organizations understand not only what is happening inside their systems, but why it's happening and what it means.

This shift forms the core of a recent Tech Transformed podcast conversation between host Dana Gardner and Pejman Tabassomi, Field CTO for EMEA at Datadog. Together, they explore how observability has evolved into what Tabassomi calls the "nervous system of AI": a framework that allows enterprises to translate complexity into clarity and automation into measurable outcomes.

Building AI Literacy

AI models make decisions that can affect everything from customer experiences to financial forecasting. It's important to understand that without observability, those decisions remain obscure.

"Visibility into how models behave is crucial," Tabassomi notes. True observability allows teams to see beyond outputs and into the reasoning of their systems: whether a model is drifting, whether automation is adapting effectively, and whether results align with strategic goals. This transparency builds trust. It also ensures accountability, giving organizations the confidence to scale AI responsibly without losing sight of the outcomes that matter most.

Observability

Observability is not merely about monitoring; it is about decision-making. It provides the insight required to manage complex systems, optimize outcomes, and act with agility. For organizations relying on AI and automation, observability becomes the differentiator between being merely efficient and achieving a sustainable competitive edge. In short, observability is no longer optional; it is central to translating technology into strategy and strategy into advantage.

For more insights, follow Datadog:
X: @datadoghq
Instagram: @datadoghq
Facebook: facebook.com/datadoghq
LinkedIn: linkedin.com/company/datadog

Takeaways
Observability has evolved from cost efficiency to a strategic role in...
"Without healthy employees, you don't have healthy customers. And without healthy customers, you don't have a healthy bottom line." — Kate Visconti, Founder and CEO, Five to Flow

While artificial intelligence (AI) has hastened development and made enterprises more efficient, it also comes with more deadlines. Those deadlines often spill into after-hours messaging. Burnout has become a default by-product of productivity, especially in the tech industry.

In this episode of the Tech Transformed podcast, Shubhangi Dua, Podcast Host, Producer and B2B Tech Journalist, speaks with Kate Visconti, Founder and CEO of Five to Flow, about the critical issues of burnout and disengagement in the workplace. They discuss the five core elements of change management, the financial implications of employee wellness, and strategies for enhancing productivity through flow optimisation.

Also Watch: Fixing the Gender Gap in STEM

The Wellness Wave Diagnostic to Help Fix Profit Leaks

Visconti stresses the importance of creating a supportive work environment and implementing effective change management practices to improve organisational performance. The conversation also highlights the role of technology in productivity and the need for leaders to prioritise employee well-being to drive business success.

With an ambition to change the way organisations define true performance, Visconti developed a data-driven framework called The Wellness Wave. As per the official Five to Flow website, The Wellness Wave is "a proprietary diagnostic that measures sentiment and business performance across five core elements."

Visconti sheds light on the original framework of the company. She says, "The original was adopted when we first kicked off as part of our consulting, and it's called the Wellness Wave diagnostic. It's literally looking across the five core elements — people, culture, process, technology, and analytics."

This framework helps companies identify and fix their profit leaks: the hidden financial losses caused by employee burnout, disengagement, and distraction. In her conversation with Dua, host of this Tech Transformed episode, Visconti shares how understanding human behaviour can lead to significant improvements in business performance.

According to Five to Flow's global diagnostics, only 13 per cent of flow triggers work at their best. For tech leaders, that means most teams are functioning well below their potential.

Kate's top tip is to create flow blocks. "It's about designing uninterrupted time for peak focus. This is when your brain isn't in a stress state. For me, it's mornings with my coffee. For others, it might be in the afternoon. Communicate those times to your team and protect them like meetings."

These flow blocks aren't just productivity tricks; they show that focus is more important than frantic multitasking. "Multitasking is a fallacy," Kate says. "You're just rapidly switching tasks and burning through mental...
"Cyber resilience isn't just about protection, it's about preparation."

Every business in this day and age lives in the cloud. Our operations, data, and collaboration tools are powered by servers located invisibly around the world. But here's the question we often overlook: what happens when the cloud falters?

In this episode of Tech Transformed, Trisha Pillay sits down with Jan Ursi, Vice President of Global Channels at Keepit, to uncover the real meaning of cyber resilience in a cloud-first world. Are you putting all your trust in hyperscale cloud providers? Think again. Trisha and Jan explore why relying solely on giants like Microsoft or Amazon can put your data at risk and how independent infrastructure gives organisations control, faster recovery, and true digital sovereignty.

Takeaways:
The importance of cyber resilience in a cloud-first world
How independent cloud infrastructure protects your SaaS applications
Common shared responsibility misconceptions that can cost organisations data
Strategies for quick recovery from ransomware and cyberattacks
Why digital sovereignty ensures control and compliance

Chapters:
00:00 – Introduction to Cyber Resilience and Cloud Strategy
05:00 – The Importance of Independent Infrastructure
10:00 – Shared Responsibility and Misconceptions
15:00 – Digital Sovereignty and Compliance
20:00 – Practical Tips for CISOs and CIOs
22:00 – Conclusion

About Jan Ursi:
Jan Ursi leads Keepit's global partnerships, helping organisations embrace the AI-powered cyber resilience era. Keepit is the world's only independent cloud dedicated to SaaS data protection, security, and recovery. Jan has previously built and scaled businesses at Rubrik, UiPath, Nutanix, Infoblox, and Juniper, shaping the future of enterprise cloud, hyper-automation, and data protection.

Follow EM360Tech for more insights:
Website: www.em360tech.com
X: @EM360Tech
LinkedIn: EM360Tech
YouTube: EM360Tech
"5G is becoming a great enabler for industries, enterprises, in-building connectivity and a variety of use cases, because now we can provide both the lowest latency and the highest bandwidth possible,” states Ganesh Shenbagaraman, Radisys Head of Standards, Regulatory Affairs & Ecosystems.In the recent episode of the Tech Transformed podcast, Shubhangi Dua, Podcast Host, Producer, and Tech Journalist at EM360Tech, speaks to Shenbagaraman about 5G and edge computing and how they power private networks for various industries, from manufacturing, national security to space.The Radisys’ Head of Standards believes in the idea of combining 5G with edge computing for transformative enterprise connectivity. If you’re a CEO, CIO, CTO, or CISO facing challenges of keeping up the pace with capacity, security and quality, this episode is for you. The speakers provide a guide on how to achieve next-gen private networks and prepare for the 6G future.Real-Time ControlThe growing need for real-time applications, such as high-quality live video streams and small industrial sensors with instant responses, demands data processing to occur closer to the source than ever before. Alluding to the technical solution that provides near-zero latency and ensures data security, Shenbagaraman says:"By placing the 5G User Plane Function (UPF) next to local radios, we achieve near-zero latency between wireless and application processing. This keeps sensitive data secure within the enterprise network."Such a strategy has now become imperative in handling both high-volume and mission-critical low-latency data all at the same time. Radisys addresses key compliance and confidentiality issues by storing the data within a private network. Essentially, they create a safe security framework that yields near-zero latency to guarantee utmost data security.Powering Edge Computing ApplicationsThe real-world benefit of this zero-latency setup is the power it gives to edge computing applications. As the user plane function is the network's final data exit point, positioning the processing application near it assures prompt perspicuity and action."The devices could be sending very domain-specific data,” said Shenbagaraman. “The user plane function immediately transfers it to the application, the edge application, where it can be processed in real time."It reduces errors and improves the efficiency of tasks through the Radisys platform, with the results meeting all essential requirements, including compliance needs.One such successful use case spotlighted in the podcast is the Radisys work with Lockheed Martin’s defence applications. "We enabled sophisticated use cases for Lockheed Martin by leveraging the underlying flexibility of 5G,” the Radisys speaker exemplified.Radisys team customised 5G connectivity for the US defence sector. It incorporated temporary, ad-hoc networks in challenging terrains using Internet Access Backhaul. It also covered isolated, permanent private networks for locations such as maintenance hangars.Intelligence comes from the RAN Intelligent...
Now that companies have begun leaping into AI applications and adopting agentic automation, new architectural challenges are bound to emerge. With every new technology come new responsibilities, consequences and challenges. To help teams face and overcome some of these challenges, Temporal introduced the concept of "durable execution." The concept has quickly become an integral part of building AI systems that are not just scalable but also reliable, observable and manageable.

In this episode of the Tech Transformed podcast, host Kevin Petrie, VP of Research at BARC, sits down with Samar Abbas, Co-founder and CEO of Temporal Technologies. They talk about durable execution and its critical role in driving AI innovation within enterprises. They discuss Abbas's extensive background in software resilience, the development of application architectures, and the importance of managing state and reliability in AI workflows. The conversation also touches on the collaboration between developers, data teams, and data scientists, emphasising how durable execution can enhance productivity and governance in AI initiatives.

Also Watch: Developer Productivity 5X to 10X: Is Durable Execution the Answer to AI Orchestration Challenges?

Chatbots to Autonomous Agents

"AI agents are going to get more and more mission critical, more and more longer lived, and more asynchronous," Abbas tells Petrie. "They'll require more human interaction, and you need a very stable foundation to build these kinds of application architectures."

AI doesn't just fuel chatbots today. Enterprises are increasingly experimenting with agentic workflows: autonomous AI agents that carry out complex background tasks independently. For example, agents can assign, solve, and submit software issues using GitHub pull requests. Such a setup isn't just a distant vision; the Temporal co-founder pointed to OpenAI's Codex as a real-world case. With this approach, AI becomes a system that can handle hundreds of tasks at once, potentially achieving "100x orders of magnitude velocity," as Abbas described.

However, there are some architectural difficulties to stay mindful of. AI agents are non-deterministic by nature and often depend on large language models (LLMs) like OpenAI's GPT, Anthropic's Claude, or Google's Gemini. They reason based on probabilities, and they improvise. They often make decisions that are hard to trace or manage.

AI Workflows as Simple Code

This is where Temporal comes in. It becomes the execution layer that keeps the system cohesive and in alignment. "What we are trying to solve with Temporal and durable execution more generally is that we tackle challenging distributed systems problems," said Abbas.

Rather than developers stressing over queues, retries, or building their own reliability layers, Temporal allows them to write their AI workflows as simple code. Temporal takes care of everything else: reliable state management, retrying failed tasks, orchestrating asynchronous services, and ensuring uptime regardless of what fails below the surface.

As agent-based architectures become more common, the demand for this kind of system-level orchestration will only increase.

Listen to the full conversation on the Tech...
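As a rough illustration of "AI workflows as simple code", here is a minimal sketch using the Temporal Python SDK (the temporalio package). The workflow, activity, and prompts are hypothetical, and the model call is a stub; the point is that each step is an ordinary awaited call, while the platform handles retries and durable state. Running it would also require a Temporal server, a registered Worker, and a client call, all omitted here.

```python
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; if it fails, Temporal retries the activity.
    return f"response to: {prompt}"

@workflow.defn
class ResearchAgentWorkflow:
    @workflow.run
    async def run(self, question: str) -> str:
        # Each step is durable: if the worker crashes, execution resumes from the
        # last completed activity rather than starting over.
        draft = await workflow.execute_activity(
            call_llm, question, start_to_close_timeout=timedelta(minutes=2)
        )
        review = await workflow.execute_activity(
            call_llm, f"review this draft: {draft}",
            start_to_close_timeout=timedelta(minutes=2),
        )
        return review
```

The code reads as a straight-line script, which is the appeal Abbas describes: the queues, retries, and state management live in the platform rather than in the application code.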
For CISOs and technology leaders, AI is reshaping business process management and daily operations. It can automate routine tasks and analyse data, but the human element remains critical for workforce oversight, customer interactions, and strategic decision-making.

In this episode of Tech Transformed, Trisha Pillay talks with Anshuman Singh, CEO of HGS UK, about AI in the workplace. They discuss how AI can support employees, improve customer service, and why it requires careful oversight. Singh also shares insights on preparing organisations for AI integration and trends leaders should watch in the coming years.

Questions or comments? Email info@em360tech.com or follow us on YouTube, Instagram, and Twitter @EM360Tech.

Takeaways
AI is reshaping workforce needs, not just replacing jobs.
Routine tasks are increasingly being automated by AI.
AI can free up capacity for more meaningful work.
The narrative around productivity is changing with AI.
AI will create new job opportunities, often better-paying.
Human oversight is crucial in AI decision-making.
AI can assist in customer service, enhancing empathy.
Organisations should not wait for perfect AI solutions.
Training and hands-on experience with AI are essential.
A psychological safety net is necessary for AI experimentation.

Chapters
00:00 Introduction to AI and Human Element
03:03 AI's Impact on Workforce Dynamics
08:29 The Role of Human Oversight in AI
10:46 AI Innovations in Customer Service
16:34 Positioning for Growth in Business Process Management
20:01 Preparing the Workforce for AI Integration
25:35 Emerging Trends in AI and Workforce
29:19 Final Thoughts on AI and Ethics
AI-Powered Canvases: The Future of Visual Collaboration and Innovation

As hybrid and remote work become the standard, organizations are rethinking how teams brainstorm, align, and innovate. Traditional whiteboards and digital tools often fall short in keeping pace with today's complex business challenges. This is where AI-powered canvases are transforming visual collaboration.

In this episode of Tech Transformed, Kevin Petrie, VP of Research at BARC, joins Elaina O'Mahoney, Chief Product Officer at Mural, to explore how AI collaboration tools are reshaping teamwork in off-site locations. From customer journey mapping to process design, AI-powered canvases give teams the ability to visualize ideas, surface insights faster, and make better decisions, while keeping human creativity at the centre.

AI-Powered Canvases, Visuals, and Collaboration

A central theme in the conversation is the distinction between automation and augmentation. While AI can recommend activities, map processes, and identify participation patterns, decision-making remains a human responsibility.

As O'Mahoney explains: "In the Mural canvas experience, we're looking to draw out the ability of a skilled facilitator and give it to participants without them having to learn that skill over the years."

This balance ensures that while AI-powered canvases streamline collaboration, teams still rely on human judgment, creativity, and contextual knowledge. One of the most powerful contributions is in AI-driven visuals, which can translate raw data or unstructured input into clear diagrams, journey maps, or process flows. These visuals not only accelerate understanding but also help teams spot gaps and opportunities more effectively.

For example:
In customer journey mapping, AI can quickly generate visual flows that highlight pain points and opportunities that would take much longer to uncover manually.
In manufacturing, AI-powered canvases can create dynamic visuals of workflows, showing how new technologies might disrupt established processes.

The Role of Visual Tools in Hybrid Work

In blended work environments, teams often lack the in-person cues that guide effective collaboration. Visual canvases bring those cues into the digital workspace, showing where ideas are concentrated, highlighting gaps in participation, and enabling alignment across dispersed teams. By combining intuitive design with AI-driven support, platforms like Mural help organisations adapt to the demands of hybrid work while keeping human creativity at the centre.

Takeaways
AI is reshaping visual collaboration in distributed teams.
Visual elements enhance understanding and decision-making.
AI can augment workflows but requires human oversight.
There is no universal playbook for AI integration in businesses.
Hybrid work necessitates effective digital collaboration tools.
AI can help visualize complex customer experiences.
Human intuition and creativity remain essential in AI applications.
Training and guidance are crucial for effective AI use.
Collaboration tools must adapt to diverse work environments.
AI should be seen as a partner in the creative process.

Chapters
00:00 The Evolution of Visual Collaboration
05:15 Augmenting vs Automating: The Role of AI
10:36...
"The issue is data fragmentation, where untrustworthy data is siloed across different databases, SaaS applications, warehouses, and on-premise systems," Vladimir Jandreski, Chief Product Officer at Ververica, tells Christina Stathopoulos, the Founder of Dare to Data. "Simply, there is no single view of the truth that exists. With governance and data quality checks, these are often inconsistent, AI systems end up consuming incomplete or conflicting signals," he added, setting the stage for the podcast.

In this episode of the Don't Panic, It's Just Data podcast, Stathopoulos speaks with Jandreski about the vital role of unified streaming data platforms in facilitating real-time AI. They discuss the difficulties businesses encounter when implementing AI, the significance of going beyond batch processing, and the capabilities necessary for a successful streaming data platform. Applications in the real world, especially in e-commerce and fraud detection, show how real-time data can revolutionise AI strategies.

Your AI Could Be a Step Behind

Jandreski says that most organisations are still engineered around batch-first data systems. That means they still process information in chunks, often hours or even days later. "It's fine for reporting, but it means your AI is always going to be one step behind."

However, "the unified streaming platform flips that model from data at rest to data in motion." A unified platform will "continuously capture the pulse" of the business and feed it directly to AI for automated real-time decision making.

Challenges of Agentic AI

Considering that the world is moving toward the era of agentic AI, there are some key challenges that still need to be addressed. Agentic AI means autonomous agents make real-time decisions, maintain memory, use tools and collaborate among themselves. Because they act on their own decisions, regulating them is necessary.

Building agents is not the main challenge; the real challenge is "actually giving them the right infrastructure," Jandreski highlights. Pointing to AI prototyping frameworks such as LangChain or LlamaIndex, he explained that those frameworks work for demos. In reality, however, they can't support long-running, system-triggered workflows that demand high availability, fault tolerance, and deep integration with enterprise data. This is because enterprises have multiple systems, and many of them are not connected, so data ends up in silos. When data is in silos, a unified streaming data platform becomes the key solution. "It provides a real-time, event-driven, contextual runtime where AI agents need to move from lab experiments to production reality."

Takeaways
Unified streaming data platforms are essential for real-time AI.
Batch processing creates lag, hindering AI effectiveness.
Data fragmentation leads to unreliable AI decisions.
A unified platform ensures data is fresh and trustworthy.
Real-time AI requires a robust data infrastructure.
Organisations must move beyond legacy batch systems.
Governance and data quality are critical for AI success.
Real-world applications...
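The "data at rest versus data in motion" point above can be shown with a tiny, generic sketch. This is plain Python rather than Ververica's platform or any streaming framework; the events and user IDs are hypothetical. A batch view is only as fresh as its last scheduled run, while a streaming view updates the moment each event arrives, so a downstream model always scores against current state.

```python
import time
from collections import defaultdict

# Hypothetical purchase events arriving over time: (timestamp, user, amount).
EVENTS = [(time.time() + i, "user_42", 10.0 * (i + 1)) for i in range(5)]

def batch_job(events):
    """Batch view: totals are only as fresh as the last scheduled run."""
    totals = defaultdict(float)
    for _, user, amount in events:
        totals[user] += amount
    return dict(totals)

def streaming_consumer(events):
    """Streaming view: the running total is updated as each event arrives,
    so fresh context can be pushed to an AI system immediately."""
    running = defaultdict(float)
    for _, user, amount in events:
        running[user] += amount
        yield user, running[user]

if __name__ == "__main__":
    print("batch (stale until the next run):", batch_job(EVENTS[:3]))
    for user, total in streaming_consumer(EVENTS):
        print("stream (always current):", user, total)
```

In a real deployment the streaming side would be an event log and stream processor rather than a Python generator, but the contrast in freshness is the one Jandreski describes.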
"The tools we make are observability tools today. But it can never be the goal of our business to provide observability. The goal of our business as a vendor and as a partner with our customers is to give them understandability,” stated Nic Benders, the Chief Technical Strategist at New Relic.In this episode of the Don't Panic It's Just Data podcast, host Christina Stathopoulos, the Founder of Dare to Data, speaks with Benders about where observability is headed in IT systems. They discuss how AI is transforming observability into a more comprehensive understanding of complex systems, moving beyond traditional monitoring to achieve true understandability. Benders explained the importance of merging various data types to provide a complete picture of system performance and user experience. He believes AI can bridge the gap between mere observation of systems and a deeper understanding of their functionality. This could ultimately lead to enhanced incident response and operational efficiency. With maturing technology, complexity is expected to grow, too. The straightforward act of “observing” those complexities is like watching a green light on a machine. This is not enough. The major challenge is to “understand” the inside operations of the machine. This is the difference between simply seeing the data and knowing the "why."Observability to UnderstandabilityAs per Benders, the term observability "leaves a lot to be desired." While it’s the industry’s common label, it only describes seeing a system. The real goal, he argues, is to understand it.Alluding to an analogy, the technical strategist asks Stathopoulos to imagine a nuclear power plant full of a million blinking lights and screens. “You can have all the observability available, but if you're not an expert, you won't grasp what’s actually happening,” says Benders. Typically, software has been developed by a single person who knows every inch of it. However, today, technology has become more perplexing. AI, alongside teamwork and collaboration, provides the tools to solve this problem. An engineer might manage code they didn’t write, making a dashboard full of charts unhelpful. Understandability means moving beyond raw data to give context and meaning.Ultimately, Benders advises IT leaders to embrace change. The tech industry is constantly changing and advancing. Instead of fearing new tools, organizations should focus on what they need to grasp the unknown. As he puts it, "a lot of unknown is coming over the next few decades."TakeawaysObservability is not enough; understanding is crucial.AI can enhance the understanding of complex systems.The shift from observing to understanding is essential for modern IT.AI presents both challenges and opportunities in software development.New interfaces powered by AI can improve user interaction with data.AI can help reduce incident response times significantly.Collaboration with AI is becoming the norm in software development.Real-world applications show measurable benefits of AI in observability.IT decision-makers must prepare for ongoing changes in technology.Understanding the unknown is key to navigating future challenges.Chapters00:00 Introduction to Observability and Understandability05:00...
The phrase "AI agent" still brings to mind chatbots handling customer queries. Fast forward to today: AI agents are far more versatile, representing a new generation of systems capable of perceiving, reasoning, and acting autonomously. These agents are beginning to reshape how enterprises operate, not just in customer service but across software development, data analytics, and operational workflows.

In this episode of Tech Transformed, Dare To Data Founder Christina Stathopoulos explores the rapid rise of AI agents with Ben Gilman, CEO of Dualboot Partners. Together, they unpack how AI agents differ from traditional automation and what this shift means for software development, enterprise operations, and the future of productivity.

AI Agents vs. Traditional Automation

Unlike traditional automation, which follows strict, deterministic rules, AI agents can adapt to changing inputs, analyze complex data sets, and make autonomous decisions within defined parameters. This allows them to tackle tasks that were previously too intricate or time-consuming for automated systems. Dualboot Partners helps organizations harness these AI agents, integrating them into workflows to deliver real business value through a combination of product, design, and engineering expertise.

"The biggest difference with an AI agent, between a standard tool, is that the agent can perceive information and reason about it, providing context and insights you don't normally get in an algorithm." — Ben Gilman, CEO, Dualboot Partners

The Future of AI in Enterprise

Organisations face several hurdles when integrating AI agents, including defining clear use cases, understanding the probabilistic nature of AI reasoning, and incorporating agents into existing processes and workflows. Despite the challenges, the potential payoff is substantial. AI agents can boost productivity, improve decision-making, and make enterprises more agile. As these systems mature, humans and AI are increasingly collaborating as true partners, reshaping what the workplace and work itself look like.

Takeaways:
AI Agents vs. Traditional Automation: AI agents can perceive and reason, offering more context and adaptability compared to deterministic systems.
Real-World Applications: Examples include virtual vet agents and data analytics tools that enhance productivity and decision-making.
Challenges in Adoption: Organizations face hurdles in defining specific use cases and integrating AI agents effectively.
Future of AI in Tech: AI agents are expected to significantly boost productivity and innovation in software development and enterprise operations, with AI-first approaches like Dualboot's "DB90" driving structured adoption and accelerating modernization.

Chapters
0:00 - 3:00: Introduction to AI Agents
3:01 - 6:00: Differences from Traditional Automation
6:01 - 12:00: Real-World Applications and Examples
12:01 - 18:00: Challenges in Adoption
18:01 - 22:00: Future Impact on Tech and Operations
22:01 - 24:00: Conclusion and Final Thoughts

About Dualboot Partners
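The perceive, reason, act framing from this episode can be sketched as a small loop. This toy Python example is not Dualboot's DB90 approach; the ticket text, allowed actions, and step limit are hypothetical, and the "reasoning" function stands in for a model call. The allowed-action set and step cap illustrate what "autonomous decisions within defined parameters" can mean in practice.

```python
# Toy perceive-reason-act loop with explicit guardrails.

ALLOWED_ACTIONS = {"summarise", "escalate", "archive"}   # defined parameters
MAX_STEPS = 3                                            # bounded autonomy

def perceive(inbox):
    """Observe the next piece of work, if any."""
    return inbox.pop(0) if inbox else None

def reason(ticket):
    """Stand-in for model-driven reasoning over the observed context."""
    if "outage" in ticket.lower():
        return "escalate"
    return "summarise" if len(ticket) > 40 else "archive"

def act(action, ticket):
    print(f"{action}: {ticket}")

def run_agent(inbox):
    for _ in range(MAX_STEPS):
        ticket = perceive(inbox)
        if ticket is None:
            break
        action = reason(ticket)
        if action in ALLOWED_ACTIONS:   # never act outside the defined set
            act(action, ticket)

if __name__ == "__main__":
    run_agent(["Customer reports an outage in region EU-1",
               "Thanks, issue resolved"])
```

A deterministic automation would hard-code the mapping from input to action; the agent version leaves room for the reasoning step to adapt, while the guardrails keep its choices inside an approved envelope.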
In a time when the world runs on data and real-time actions, edge computing is quickly becoming a must-have in enterprise technology. In a recent episode of the Tech Transformed podcast, host Shubhangi Dua, Podcast Producer and B2B Tech Journalist, discusses the complexities of this distributed future with guest Dmitry Panenkov, Founder and CEO of emma.

The conversation dives into how latency is the driving force behind edge adoption. Applications like autonomous vehicles and real-time analytics cannot afford to wait on a round trip to a centralised data centre. They need to compute where the data is generated.

Rather than viewing edge as a rival to the cloud, the discussion highlights it as a natural extension. Edge environments bring speed, resilience and data control, all necessary capabilities for modern applications.

Adopting Edge Computing

For organisations looking to adopt edge computing, this episode lays out a practical step-by-step approach. The skills required in multi-cloud environments, such as automation, infrastructure as code, and observability, translate well to edge deployments. These capabilities are essential for managing the unique challenges of edge devices, which may be disconnected, have lower power, or be located in hard-to-reach areas. Without this level of operational maturity, Panenkov warns of a "zombie apocalypse" of unmanaged devices.

Simplifying Complexity

Managing different APIs, SDKs, and vendor lock-ins across a distributed network can be a challenging task, and this is where platforms like emma become crucial.

Alluding to emma's mission, Panenkov explains, "We're building a unified platform that simplifies the way people interact with different cloud and computer environments, whether these are in a public setting or private data centres or even at the edge."

Overall, emma creates a unified API layer and user interface, which simplifies that complexity. It helps businesses manage, automate, and scale their workloads from a single vantage point and reduces the burden on IT teams. The platform also reduces the need for a large team of highly skilled professionals, which leads to substantial cost savings. emma's customers have seen their cloud bills drop significantly and have been able to roll out updates much faster using the platform.

Takeaways
Edge computing is becoming a reality for more organisations.
Latency-sensitive applications drive the need for edge computing.
Real-time analytics and industry automation benefit from edge computing.
Edge computing enhances resilience, cost efficiency, and data sovereignty.
Integrating edge into cloud strategies requires automation and observability.
Maturity in operational practices, like automation and observability, is essential for...
"The real challenge that many manufacturers have dealt with for a long time and will keep facing is the shift from mass manufacturing to mass customisation," stated Daniel Joseph Barry, VP of Product Marketing at Configit. In a world that has moved from mass manufacturing to mass customisation, makers of complex products like cars and medical devices face a hidden problem. For more than a century, since the time of Henry Ford, manufacturers have worked in a separate, mass-production mindset. This method in the recent industrial scenario has caused a lot of friction and frustration.In this episode of the Tech Transformed podcast, Christina Stathopoulos, Dare To Data Founder, talks with Daniel Joseph Barry, VP of Product Marketing at Configit. They talk about Configuration Lifecycle Management (CLM) and its importance in tackling the challenges that manufacturers of complex products face recurrently.The speakers discuss the move from mass manufacturing to mass customisation, the various choices available to consumers, and the need to connect sales and engineering teams. Barry emphasises the value of working together to tackle these challenges. He points out that using CLM can make processes easier and enhance customer experiences (CX).What is Configuration Lifecycle Management (CLM)According to Barry, Configuration Lifecycle Management (CLM) is an approach that involves managing product configurations throughout their lifecycle. He describes it as an extension of Product Lifecycle Management (PLM) that focuses specifically on configurations. In today's highly bespoke world, customers are buying configurations of products instead of just the products themselves. The answer isn't to work harder within existing teams but to adopt a new, collaborative approach. This is where Configuration Lifecycle Management (CLM) comes in. CLM creates a single, shared source of truth for all product configuration information. It combines data from engineering, sales, and manufacturing. Configit’s patented Virtual Tabulation® (VT™) technology pre-computes all the different options, so there’s no longer a need for slow, real-time calculations. Barry says, "It's just a lookup, so it's lightning fast.” This represents a prominent shift that removes the delays and dead ends, frustrating customers and sales staff. Such a centralised system makes sure that every department uses the same, verified information, stopping errors from happening later on. One such company, and Configit’s customer, Vestas, a wind power company, automated its configuration process for complex wind turbines that have 160,000 options. By adopting a CLM approach, they cut the time to configure a solution from 60 minutes to just five.Tune into the podcast for more information on the transformational impact of Configuration Lifecycle Management (CLM). TakeawaysManufacturers are transitioning from mass manufacturing to mass customisation.Customisation leads to complexity and challenges in manufacturing.Siloed systems create inefficiencies and reliance on experienced employees.Configuration Lifecycle Management (CLM) can automate and streamline processes.Aligning sales and...