Cloud Computing Insider
Author: David Linthicum
Subscribed: 10 · Played: 93
© 2024
Description
Hosted by cloud computing pioneer David Linthicum, the Cloud Computing Insider podcast gets to the bottom of what cloud computing and generative AI can bring to your enterprise. New content focuses on what matters to you as a user of cloud computing and generative AI, and on finding value the first time.
119 Episodes
David Linthicum challenges the feel-good narratives that dominate cloud conversations and lays out five opinions that many teams avoid saying out loud. He argues that cloud repatriation is not a failure but a rational response to economics and performance, and that some workloads belong back on dedicated or private infrastructure. He warns that vendor lock-in isn't an edge case—it's the default outcome unless you design deliberately for portability. Linthicum also focuses on "cloud fragility": the hidden chain of dependencies that can turn a regional incident into broad service disruption, and why resilience must be engineered, not assumed. On costs, he pushes back on the idea that cloud is automatically cheaper, emphasizing that it can be a great bargain only when architectures, usage, and governance are disciplined. Finally, he questions whether hyperscalers pass efficiency gains to customers, urging viewers to measure unit costs and demand accountability. The video is a blunt, practical reset for leaders planning migrations, optimizing spend, or rethinking multicloud and hybrid strategy. Expect examples, migration mistakes, and a reminder that cloud is a tool, not a religion. If you're struggling with surprise bills, outages, or strategy whiplash, his checklist helps you decide what to keep, move, or unwind.
OpenAI helped kick off the AI revolution, but behind the hype the numbers tell a much darker story. In this video, we break down how a company that once looked untouchable is now burning staggering amounts of cash, losing ground to faster, leaner rivals, and scrambling to bolt ads onto its flagship product just to keep the lights on. We'll look at leaks and estimates that suggest OpenAI could lose tens of billions of dollars, with compute and hardware costs that swallow a huge chunk of every dollar it makes. We'll show how Anthropic has quietly overtaken OpenAI in enterprise LLM market share, why Microsoft and Nvidia now effectively hold OpenAI's fate in their hands, and how the GPU arms race they helped create has driven up prices for everyone else. This isn't a hit piece; it's a reality check. If you care about where AI is really heading, you need to understand why OpenAI's early lead may not last—and why the next decade of AI power is likely to belong to players with deeper pockets, better margins, and a more sustainable plan.
Cloud didn't fail—cloud providers did. AWS, Microsoft, and Google sold "simplicity," then normalized pricing that needs a spreadsheet degree, architectures that punish small mistakes, and service catalogs so bloated that teams spend more time choosing tools than shipping products. This video calls out the five fixable parts of modern cloud that vendors control. First: predictable economics—transparent rates, sane defaults, and guardrails that stop surprise bills before they happen. Second: portability—egress and proprietary glue shouldn't be an exit fee; interoperability should be routine. Third: secure-by-default—shared responsibility has become shared confusion, and the safest configuration must also be the easiest. Fourth: reliability with real transparency—dependency maps, blast radii, and the true cost of resilience, not marketing uptime. Fifth: less sprawl—fewer overlapping services and more "golden paths" that get enterprises to outcomes fast. If cloud is going to earn long-term trust, it has to stop exporting risk and overhead to customers. Retention should come from value, not friction. Let's talk about what needs to change—and why the next cloud leader will win on clarity, security, and predictability, not catalog size. Drop your billing horror story in the comments, and if you run cloud for a living, share this with the people selling it.
Neoclouds are a new wave of GPU-first cloud providers built specifically for AI training and inference, offering a focused alternative to hyperscalers like AWS, Azure, and Google Cloud. Instead of optimizing for thousands of general-purpose services, neoclouds optimize for what modern AI teams actually bottleneck on: high-availability NVIDIA-class GPUs, fast provisioning, bare‑metal performance, and low-latency networking for distributed workloads. That specialization can translate into lower effective cost per training run, higher sustained utilization, and faster iteration cycles—especially when hyperscaler capacity is tight or pricing is unpredictable. Providers such as CoreWeave, Lambda, Crusoe Cloud, Nebius, and Vultr market themselves on speed-to-GPU, simplified scaling, and infrastructure tuned for HPC-style jobs, from fine-tuning foundation models to high-throughput inference. For startups, labs, and enterprise AI teams, neoclouds can reduce the friction of getting from experiment to production by removing layers of platform complexity and prioritizing raw compute. The tradeoff is that hyperscalers still lead in global footprint, compliance breadth, and deep managed service ecosystems, so many teams adopt a hybrid approach—using neoclouds for heavy GPU workloads while keeping core data and platform services on a hyperscaler.
Microsoft Copilot arrived as an AI layer across Windows, Microsoft 365, and cloud consoles, but many users experienced it less as a breakthrough and more as another interface demanding attention. In Word, Outlook, and Teams it could draft and summarize, yet the output often required careful editing for tone, accuracy, and missing context—work people didn't expect to add to already busy days. In Excel and PowerPoint, where users want precision and control, Copilot sometimes felt unreliable or slower than familiar formulas, templates, and search. The assistant also raised awkward "can I paste this?" moments: uncertainty about sensitive data and organizational policies led users to withhold the very details that would make results useful. When Copilot appeared prominently in UI, some interpreted it as being pushed rather than chosen, increasing resistance. Finally, the pricing model turned mild curiosity into hard scrutiny; if Copilot only saves a few minutes occasionally, a per‑user monthly fee looks like paying for prompts plus extra proofreading. The net effect was skepticism: helpful in pockets, but not essential, not trusted enough for critical work, and not compelling enough to budget for at scale. Adoption stalled where training was thin, and the benefit story never became personal enough.
A surge in fraudulent cloud mining schemes is pulling in unsuspecting investors with promises of guaranteed, sky-high returns—claims that regulators stress are not just too good to be true, but technically impossible. These scams mimic the model of legitimate cloud mining, where users lease real hashing power from authentic data centers, but instead operate like Ponzi schemes: returns are paid with the money of new recruits rather than genuine crypto mining profits. Hallmarks of these frauds include fixed daily payouts, multi-level marketing ploys, and cloned websites that present a façade of sophistication. Meanwhile, global regulators are stepping up enforcement, with the SEC recently securing a $46 million judgment against a major scam operator and exchanges like Binance actively freezing assets tied to these illicit activities. At the same time, transparency-focused regulations such as the EU's Digital Operational Resilience Act are setting standards only legitimate providers can meet, making it easier to spot and stop fakes. This video exposes the red flags every investor should know, explains how authentic cloud mining really works—including what "hashing power" actually means—and arms viewers with tools to separate financial opportunity from crypto fakery.
In this video, David Linthicum delivers a blunt critique of how large enterprises mishandled cloud adoption and are now repeating the same mistakes with AI. He explains that many IT leaders treated cloud as a simple outsourcing and cost‑shifting exercise rather than a deep architectural and operating‑model transformation, baking failure in from the start. Billions were spent lifting and shifting technical debt into the cloud, only to see complexity, fragility, and run‑rate costs rise while executives declared success. Linthicum argues that these outcomes were serious enough that many leaders should have been fired, yet boards and CEOs—lacking technical literacy—rewarded them and let vendors shape the narrative. Now, the same people are running AI like another procurement program, chasing hype metrics instead of measurable business value. He shows how poor data discipline, weak governance, and vague "transformation" goals are setting up a second wave of expensive disappointment. Finally, he explores why this keeps happening: corporate cultures punish technical dissent, reward optimistic PowerPoints, and let vendors and consultants create a halo of hype around failed strategies. His core message: allowing the architects of your cloud failures to lead AI isn't innovation—it's institutionalized incompetence at massive scale.
RSAC 2026 is shaping up to be the year cybersecurity stops talking about "using AI" and starts obsessing over securing it. In this video, we break down the top conference trends emerging from early session themes and Innovation Sandbox signals: the Securing AI pivot (agent governance, inference-time protection, prompt injection, supply-chain integrity, and data leakage), Identity as the new perimeter (machine identities/NHIs eclipsing human users, phishing-resistant authentication, PKI at IoT scale), and the rise of Shadow AI as a board-level risk (discovering, inventorying, and controlling unauthorized AI apps and agents). We'll also unpack why "vibe coding" accelerates delivery while amplifying software supply chain exposure—and what actionable security inside CI/CD actually looks like in 2026, from SBOM programs and license compliance to automated dependency updates and build integrity. Finally, we connect the dots to operational resilience: when breach times can be measured in seconds, microsegmentation, lateral-movement controls, and real-time quarantine matter as much as prevention. If you're planning your RSAC agenda—or your 2026 roadmap—this is your fast, practical briefing, plus a shortlist of Innovation Sandbox finalists to watch. Subscribe for weekly security strategy takeaways, and drop a comment with the tool or trend you want us to analyze next in depth.
Jo's LinkedIn: https://www.linkedin.com/in/jopeterson1/
Jo's email: Jo@clarify360.com
Dave's LinkedIn: https://www.linkedin.com/in/davidlinthicum/
Dave's email: david@linthicumresearch.com
Top 10 Innovation Sandbox Finalists (RSAC 2026) to Watch:
@Charm Security: Agentic AI Workforce to prevent scams.
@Clearly AI: AI-powered code reviews.
@Crash Override: CI/CD build security.
@Fig Security: SecOps resilience.
@Geordie AI: Security and governance for AI agents.
@Glide Identity: Next-gen authentication.
@Humanix: Stopping social engineering via behavioral AI.
@Realm Labs: Monitoring AI agent behavior.
@Token Security: Managing non-human identity (NHI).
Google Cloud Platform (GCP) lags in third place among the top cloud providers, despite impressive financial growth, due to systemic challenges and fierce competition from Amazon Web Services (AWS) and Microsoft Azure. As of Q3 2023, GCP generated USD 8.4 billion in revenue, significantly trailing AWS's USD 23.1 billion and Azure's estimated USD 24 billion, reflecting its smaller market share of about 11% compared to AWS's 31% and Azure's 25%. GCP entered the market later, missing the early adoption wave that entrenched its rivals, and struggles with enterprise trust due to a less comprehensive hybrid cloud strategy and a smaller ecosystem of third-party integrations. While Google excels in AI, machine learning, and data analytics—key differentiators—it lacks the breadth of industry-specific solutions and developer tools that AWS and Azure offer. Additionally, AWS benefits from Amazon's e-commerce-driven infrastructure, and Azure leverages Microsoft's enterprise software legacy, creating loyalty GCP can't easily replicate. Without significant strides in addressing enterprise needs, expanding partnerships, and accelerating innovation in hybrid environments, GCP is poised to remain in third place, unable to close the gap with its more established competitors in the near future.
For years, big consulting firms have been selling "cloud and AI strategy" as if it were a product you can just buy off the shelf. In 2026, the jig is up. This video is your no‑nonsense buyer's guide to how large consulting firms should be engaged for cloud and AI work—what to let them do, what to keep in‑house, and where you're most likely to get burned. We'll walk through the real incentives behind those "preferred partner" AI cloud recommendations, how vendor alliances quietly shape your architecture, and why so many roadmaps end up looking like a sales deck instead of an operating model. You'll learn how to structure engagements so you get industrial‑scale execution—landing zones, governance, migrations—without handing over your strategy, your data, or your future. We'll cover the critical questions to ask before you sign, the red flags in proposals and SOWs, and how to demand deliverables that leave you stronger and more independent, not permanently dependent on the same firm. If you're a CIO, CTO, architect, or tech leader about to bring in a big consultancy for cloud or AI, watch this first. It might change how you buy—and what you're willing to buy.
Oracle is reportedly tied to roughly USD 56 billion in AI data-center financing, and Wall Street is treating it like a stress test, not a victory lap. In this video I break down why "build it and they will come" can turn into "borrow it and you will bleed." Data centers are fixed-cost monsters: power commitments, depreciation, and interest expense don't care if enterprise customers take six quarters to migrate workloads. If demand ramps slower than supply, Oracle's only lever is discounting—lower prices to fill empty capacity—which can crush margins right when debt service rises. That's how overbuilds become spirals: weaker cash flow leads to tighter financing, tighter financing forces cuts, cuts weaken competitiveness, and the cycle feeds itself. I'll also compare Oracle's bet to the broader AI infrastructure boom—other builders using heavy leverage—and explain why a sector-wide capacity glut could trigger a price war. If you're an investor, operator, or just tired of AI hype, this is the cold-water analysis: what could go wrong, how it snowballs, and what signals to watch next. We'll talk contract reality, utilization math, and why "strategic capex" can become a balance-sheet hostage situation. Plus: the red flags—syndication strain, downgrades, and sudden price cuts.
In this crucial video, cloud security leader David Linthicum exposes a troubling statistic: 61 percent of cloud security incidents are completely preventable. Drawing on the latest security research, David reveals the most common—and avoidable—mistakes that leave enterprises vulnerable to cyberattacks, data leaks, and compliance violations. He breaks down the top culprits, from misconfigured cloud environments and weak access controls to lapses in ongoing monitoring and lack of employee training. David explains why organizations often falter on basic security hygiene and shows how these oversights create easy targets for attackers. Importantly, he translates these findings into actionable solutions. Viewers will learn why regular audits, continuous education, and automated security tools are critical for reducing risk, and how building a security-first culture can close the door on costly incidents. Packed with practical advice, this video gives IT leaders, security professionals, and business executives a blueprint for dramatically improving their organization's cloud security. Don't let your company become another statistic—discover how to avoid the 61 percent of incidents that should never happen, and transform the cloud from a weak spot into your enterprise's strongest line of defense.
Why did Microsoft stock drop even after a headline beat? In this episode, cloud analyst David Linthicum breaks down the market's "beat-and-drop" reaction to Microsoft's latest earnings and what it signals about Azure, AI, and hyperscaler spending. He explains how expectations for a clear cloud re-acceleration collided with guidance that sounded more like "we're investing ahead of demand," raising concerns about capital intensity and near-term margins. Linthicum walks through the optics around Azure growth, capacity build-outs for AI training and inference, and why investors are increasingly sensitive to capex without fast operating leverage. The conversation also explores a structural shift in enterprise cloud choices—hybrid, private and sovereign cloud, colocation, and managed service providers—and how workload economics (egress, governance, and consumption creep) can make public cloud less attractive for steady-state compute. Finally, he balances the bear case with the bull case: Microsoft's distribution advantage across Microsoft 365, security, and developer tools, and the possibility that today's spend is strategic moat-building. If you want a clearer framework for reading cloud earnings, this video delivers. You'll learn what metrics to watch next quarter, how to interpret "capacity constraints" versus demand, and where AI monetization may show up first in pricing, usage, and margins.
The tech industry is brilliant at building new things—and terrible at admitting when it gets them wrong. In this video, I break down why our predictions about cloud, AI, big data, blockchain, metaverse, and more so often miss reality by a mile. From wildly optimistic analyst forecasts (including early cloud growth predictions that were way off) to vendor-driven hype cycles, we've built a system that rewards confidence, not accuracy. I walk through concrete examples where the narrative sounded irresistible, the slideware looked perfect, and the pilots seemed promising—but large enterprises never followed at scale. The core problem isn't intelligence or innovation; it's confirmation bias, misaligned incentives, and a complete underestimation of real-world constraints: legacy systems, budgets, regulation, skills, and risk. You'll learn how to recognize the telltale signs of hype, how to separate "cool demo" from "sustainable value," and how to ask the uncomfortable questions that cut through the noise. Whether you're a CIO, architect, engineer, or business leader, my goal is simple: help you stop getting pushed around by tech narratives—and start making grounded, reality-based decisions. If you're tired of being sold "the future" that never quite arrives, this video is for you.
In this video, David Linthicum breaks down the sudden explosion of "agentic AI" playbooks, frameworks, and branded platforms now pouring out of the consulting industry. Every big firm wants to look like it owns the future of autonomous work, so the market is being flooded with glossy diagrams, maturity models, and "fast paths" that promise cheap, repeatable success—sometimes with language that feels close to a guarantee. But agentic AI is not a plug-in. It's an architecture, and architecture only works when it matches your processes, data quality, controls, integration realities, and operating model. When frameworks lead with the platform instead of the problem, enterprises end up force‑fitting agents into brittle systems, over-building orchestration layers, and running old processes in parallel "just in case." The hidden costs show up later: governance overhead, constant tuning, fragile pilots, and disappointing ROI. You'll learn the red flags to watch for, the questions to ask before funding an "agentic transformation," and how to pursue smaller, measurable wins without buying expensive theater. If you're a CIO, CTO, or business leader, this is your reality check before the next deck lands in your inbox. We'll also discuss when simpler automation beats agents—and when agents earn their keep.
Network-attached storage (NAS) is a dedicated, always‑on storage device that connects to your home or office network and lets multiple users and devices store, share, and back up data to a central box you physically own. In effect, it's your own private cloud: instead of renting space from iCloud, Google Drive, OneDrive, or AWS, you buy a NAS once and control the hardware, the capacity, and who can access it. This model is growing quickly; the global NAS market is already tens of billions of dollars in annual sales and is projected to roughly triple over the next decade, driven by exploding photo, video, and backup needs. Just as important as capacity is cost: a mid‑range NAS with several terabytes of usable storage often runs around USD 600 upfront, a figure that can undercut years of recurring cloud fees for 2–6 TB plans. Many consumers and small businesses are discovering that, at larger data sizes, NAS becomes cheaper over a three‑to‑five‑year horizon. And because the data lives on devices you own—often protected by encryption, redundancy, and local access controls—NAS is increasingly seen as a way to improve privacy, security, and peace of mind compared to relying solely on third‑party clouds.
Big Tech says it's "backup," "sync," and "convenience"—but what happens when your computer quietly starts moving your personal files into the cloud by default? In this episode, David Linthicum breaks down a growing industry pattern: technology providers designing defaults that automatically capture your data, route it into their storage platforms, and make that choice feel inevitable. We start with the Microsoft Windows 11 upgrade experience, where many users discover Desktop, Documents, and Pictures being pushed into OneDrive through folder redirection and persistent prompts—often without a clear, informed decision at setup. From there, we connect the dots to Apple's iCloud, where "it just works" can also mean "it just uploads," and to Google's Drive-first ecosystem that normalizes cloud storage as the primary home for files. Finally, we revisit AWS and the long-running idea that computing is something you rent—not own—turning the PC into a subscription and your data into recurring revenue. This isn't an anti-cloud rant: cloud storage can be genuinely useful. The issue is default capture, confusing consent, lock-in economics, and the shrinking space for truly local-first computing. If your files are your property, why do vendors treat them like a product funnel?
Serverless is marketed as "no servers, no ops, just code"—but that convenience hides a deeper tradeoff: long-term freedom. In this video, I break down how platforms like AWS Lambda, Google Cloud Functions, and Firebase quietly lock you into a single provider, not through the language you write in, but through the glue you adopt: event formats, IAM models, triggers, logging, deployment pipelines, and tightly coupled managed services. We'll look at where lock-in really lives architecturally, why leaning hard into proprietary auth, queues, databases, and logging can turn your system into a beautiful cage, and how to avoid that without giving up the speed that makes serverless attractive in the first place. You'll learn practical patterns like hexagonal/onion architecture, keeping business logic pure and side-effect-free, pushing cloud-specifics to the edges, and wrapping provider APIs behind your own interfaces for storage, messaging, and identity. I'll also cover strategies for keeping your data portable and planning for the day you might need to change clouds—or run on bare metal. Serverless isn't the enemy. Blind trust is. Use the cloud's superpowers, but design as if you'll have to leave.
Cloud providers are quietly rebuilding their platforms around generative AI—and dragging you along for the ride. In this episode of Cloud Computing Insider, Dave breaks down how AWS, Azure, and Google Cloud are shifting from general‑purpose cloud to AI‑native cloud, where everything is optimized (and monetized) around GPUs, proprietary models, and tightly integrated AI services. We'll look at why this is happening now, how it shows up in your architecture and your bill, and why "AI‑ready" often really means "AI‑locked‑in." From exploding inference costs to agentic AI baked into workflows, you'll see how the defaults are being stacked in the providers' favor. But this isn't just a rant—we'll also explore your options. Do you lean into the hyperscalers' AI platforms, or start carving out room for AltClouds like private, sovereign, and MSP‑run clouds that aren't rebuilding everything around AI? How do you keep data, models, and architecture portable enough that you still have real choices in three years? If you care about cloud costs, control, and long‑term flexibility, this is the AI/cloud conversation you actually need to hear.
Cloud Centers of Excellence were supposed to save your cloud strategy—yet in most enterprises, they've become the single biggest bottleneck. In this video, David Linthicum takes a brutally honest look at why so many CCoEs have devolved into "Cloud Centers of No," strangling innovation while pretending to provide governance. We'll dissect how these committees burn time, money, and engineering talent with endless review boards, PDFs, and politics, all while claiming to be "best practice." But this isn't just a rant; it's a blueprint. David lays out exactly how to blow up the gatekeeper model and rebuild your CCoE as a lean, product-focused cloud platform team that developers actually want to use. You'll learn how to replace manual approvals with automated guardrails, static standards with living golden paths, and ivory-tower architects with embedded, hands-on experts. If you suspect your CCoE is more theater than value, this video will give you the language, arguments, and patterns to force a reset—and turn cloud governance from a tax into a competitive advantage.