Cloud Computing Insider
Author: David Linthicum
© 2024
Description
Hosted by cloud computing pioneer David Linthicum, the Cloud Computing Insider podcast gets to the bottom of what cloud computing and generative AI can bring to your enterprise. New content focuses on what matters to you as a user of cloud computing and generative AI, helping you find value the first time.
125 Episodes
In this compelling exposé, we pull back the curtain on the grand narratives spun by today's AI leaders—and reveal the dramatic gap between their promises and reality. With names like Sam Altman (OpenAI), Elon Musk (Tesla, SpaceX), and Dario Amodei (Anthropic) at the forefront, bold claims about Artificial General Intelligence (AGI), world-changing productivity, and society-shifting job automation have gripped the media and investors alike. But how much of this hype stands up to scrutiny? We break down the flashy headlines, scrutinize the data, and show how many AI initiatives have failed to deliver real returns or transformative outcomes. From OpenAI's pivot to ad-based revenue to the shifting definitions of AGI used to secure investments, it's clear: the AI boom is fueled as much by marketing and financial necessity as by technical progress. Academics, economists, and internal reports challenge the myth of imminent AI dominance and expose the real motivations behind these public statements.
Everyone's talking about AGI—the idea that we're on the verge of creating an AI that can do anything a human can do, only faster and better. Tech billionaires are hyping it, headlines are breathless, and the race between the world's biggest companies seems unstoppable. But is the reality actually matching the hype? Not quite. Beneath the impressive demos and viral moments, today's AI still has some serious, stubborn problems that don't get talked about enough. It breaks under pressure, makes things up with total confidence, loses the plot on anything complicated, and doesn't truly understand the world the way even a child does. And simply throwing more money, more data, and more computing power at it may not fix any of that. In this video, we break down five fundamental reasons why AGI—true, all-purpose machine intelligence—is not coming anytime soon, in plain language anyone can understand. This isn't about being anti-AI or dismissing real progress. It's about cutting through the noise, being honest about where the technology actually stands, and understanding why the gap between "impressive AI" and "general intelligence" is still very wide.
Grab, Southeast Asia's leading super-app for ridesharing and food delivery, recently completed a transformative overhaul of its app-building infrastructure by moving more than 200 Mac Minis from the cloud into a self-managed datacenter. Previously relying on a US cloud provider for its macOS Continuous Integration/Continuous Delivery (CI/CD) needs, Grab faced major cost pressures—macOS build minutes on cloud platforms were ten times pricier than Linux, and Apple's requirements meant paying for 24-hour blocks even during off-peak periods. Attempts to boost efficiency with macOS virtualization were hampered by performance and stability trade-offs. By shifting to on-premises infrastructure, with four racks housing over 200 Mac Minis in Malaysia, Grab gained 20-40% faster CI/CD performance and slashed costs by an estimated $2.4 million over three years. Automated provisioning with Jamf management tools minimizes maintenance overhead, giving Grab tighter control and a competitive edge in mobile app development. This bold move aligns with a broader trend in tech—cloud repatriation—where companies reclaim cost and performance benefits for critical workloads by moving off public cloud platforms. Grab's experience is a key case study for businesses wrestling with cloud expenses versus operational agility.
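The cost math behind a move like Grab's can be sketched in a few lines. The figures below are illustrative assumptions only (the hourly cloud rate, Mac Mini price, and hosting cost are not Grab's actual numbers); the structural point from the summary is real: Apple's licensing forces cloud providers to bill dedicated Mac instances in 24-hour blocks, so cloud cost scales with rented wall-clock time while on-prem cost is dominated by a one-time hardware purchase amortized over the fleet's life.

```python
# Hypothetical 3-year cost comparison for a macOS CI fleet.
# All prices are illustrative assumptions, not Grab's actual figures.

machines = 200
years = 3
days = 365 * years

# Cloud: dedicated Mac instances are billed in 24-hour blocks,
# so you pay for full days even during off-peak periods.
cloud_hourly_rate = 0.60  # assumed $/hour per Mac instance
cloud_cost = machines * 24 * cloud_hourly_rate * days

# On-prem: hardware purchased up front, plus ongoing power/rack/management.
mac_mini_price = 1_300            # assumed $ per Mac Mini
hosting_per_machine_month = 60    # assumed $ per machine per month
onprem_cost = (machines * mac_mini_price
               + machines * hosting_per_machine_month * 12 * years)

print(f"Cloud:   ${cloud_cost:,.0f}")
print(f"On-prem: ${onprem_cost:,.0f}")
print(f"Savings: ${cloud_cost - onprem_cost:,.0f}")
```

With these assumed inputs the three-year savings land in the low millions, the same order of magnitude as the $2.4 million Grab estimated; the driver is the 24-hour-block billing term, not any single price.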
David Linthicum returns with a follow-up to his RSA predictions video—this time to see what actually happened at last week's RSA Conference and which calls held up under real-world scrutiny. Before the event, David laid out his expectations for the biggest cybersecurity themes, vendor narratives, and industry shifts likely to dominate the conversation. Now that RSA is over, it's time to review the results, separate hype from substance, and look at where those predictions were right on target. In this video, David breaks down the major trends that emerged, compares them against his original forecast, and explains why certain themes gained traction while others fell flat. From AI security messaging to platform consolidation, cloud security strategy, and the ever-growing noise around cyber innovation, this is a candid scorecard on what RSA actually revealed. This is not a victory lap for the sake of it. It's a practical look at how to read industry events more clearly, spot patterns before they become obvious, and understand what really matters beyond flashy announcements and packed expo floors. If you want sharp analysis, honest reflection, and a no-nonsense take on RSA's biggest storylines, this follow-up delivers the receipts.
In this video, I explain how I built a successful YouTube channel by combining thought leadership, audience trust, and a focused content strategy. My growth did not start on YouTube alone. It was built on my existing followers and more than 20 years of podcasting, which gave me a strong foundation in technology media, IT analysis, and digital audience building. I also share why understanding audience demand, viewer engagement, and content performance metrics is essential for long-term growth. I discuss how cloud computing, artificial intelligence, enterprise technology, and IT strategy are high-value topics, but still serve a niche audience inside the broader technology industry. That means creators in these spaces need a smarter approach to YouTube marketing, social media promotion, audience targeting, and brand growth. I also cover the importance of choosing sponsors and brand partnerships that align with the mission of the channel and provide value to the audience. If you are interested in YouTube growth, tech influencer strategy, B2B content marketing, AI content strategy, or building authority in the cloud computing and enterprise IT space, this video offers practical insights you can apply right away.
We're on the edge of a real shift: the "cloud" may stop being purely a terrestrial phenomenon and become a layered network that includes orbit. The strongest case isn't that your favorite web app moves to space, but that space systems start acting like their own cloud region—compute, storage, and networking placed near satellites that generate massive amounts of data. If that happens, the first "clouds in space" won't look like hyperscale campuses; they'll look like compact, rugged orbital nodes that do AI inference, preprocessing, and caching, then beam results to Earth through high-throughput links. The big question is pace: in the near term, expect experiments and niche deployments for Earth observation, communications, and national security; mainstream adoption will require lower-cost launches, improved power and thermal designs, reliable optical crosslinks, and a clear cost advantage for specific workloads. Regulation and risk will shape it too—who owns the infrastructure, where the data "resides," and how you secure something you can't physically touch. So yes, we're likely to see "clouds in space," but as an extension of cloud architecture (edge + backbone), not a replacement for Earth regions—at least for a long while.
David Linthicum challenges the feel-good narratives that dominate cloud conversations and lays out five opinions that many teams avoid saying out loud. He argues that cloud repatriation is not a failure but a rational response to economics and performance, and that some workloads belong back on dedicated or private infrastructure. He warns that vendor lock-in isn't an edge case—it's the default outcome unless you design deliberately for portability. Linthicum also focuses on "cloud fragility": the hidden chain of dependencies that can turn a regional incident into broad service disruption, and why resilience must be engineered, not assumed. On costs, he pushes back on the idea that cloud is automatically cheaper, emphasizing that it can be a bargain only when architectures, usage, and governance are disciplined. Finally, he questions whether hyperscalers pass efficiency gains to customers, urging viewers to measure unit costs and demand accountability. The video is a blunt, practical reset for leaders planning migrations, optimizing spend, or rethinking multicloud and hybrid strategy. Expect concrete examples, common migration mistakes, and a reminder that cloud is a tool, not a religion. If you're struggling with surprise bills, outages, or strategy whiplash, his checklist helps you decide what to keep, move, or unwind.
OpenAI helped kick off the AI revolution, but behind the hype the numbers tell a much darker story. In this video, we break down how a company that once looked untouchable is now burning staggering amounts of cash, losing ground to faster, leaner rivals, and scrambling to bolt ads onto its flagship product just to keep the lights on. We'll look at leaks and estimates that suggest OpenAI could lose tens of billions of dollars, with compute and hardware costs that swallow a huge chunk of every dollar it makes. We'll show how Anthropic has quietly overtaken OpenAI in enterprise LLM market share, why Microsoft and Nvidia now effectively hold OpenAI's fate in their hands, and how the GPU arms race they helped create has driven up prices for everyone else. This isn't a hit piece; it's a reality check. If you care about where AI is really heading, you need to understand why OpenAI's early lead may not last—and why the next decade of AI power is likely to belong to players with deeper pockets, better margins, and a more sustainable plan.
Cloud didn't fail—cloud providers did. AWS, Microsoft, and Google sold "simplicity," then normalized pricing that needs a spreadsheet degree, architectures that punish small mistakes, and service catalogs so bloated that teams spend more time choosing tools than shipping products. This video calls out the five fixable parts of modern cloud that vendors control. First: predictable economics—transparent rates, sane defaults, and guardrails that stop surprise bills before they happen. Second: portability—egress and proprietary glue shouldn't be an exit fee; interoperability should be routine. Third: secure-by-default—shared responsibility has become shared confusion, and the safest configuration must also be the easiest. Fourth: reliability with real transparency—dependency maps, blast radii, and the true cost of resilience, not marketing uptime. Fifth: less sprawl—fewer overlapping services and more "golden paths" that get enterprises to outcomes fast. If cloud is going to earn long-term trust, it has to stop exporting risk and overhead to customers. Retention should come from value, not friction. Let's talk about what needs to change—and why the next cloud leader will win on clarity, security, and predictability, not catalog size. Drop your billing horror story in the comments, and if you run cloud for a living, share this with the people selling it.
Neoclouds are a new wave of GPU-first cloud providers built specifically for AI training and inference, offering a focused alternative to hyperscalers like AWS, Azure, and Google Cloud. Instead of optimizing for thousands of general-purpose services, neoclouds optimize for what modern AI teams actually bottleneck on: high-availability NVIDIA-class GPUs, fast provisioning, bare‑metal performance, and low-latency networking for distributed workloads. That specialization can translate into lower effective cost per training run, higher sustained utilization, and faster iteration cycles—especially when hyperscaler capacity is tight or pricing is unpredictable. Providers such as CoreWeave, Lambda, Crusoe Cloud, Nebius, and Vultr market themselves on speed-to-GPU, simplified scaling, and infrastructure tuned for HPC-style jobs, from fine-tuning foundation models to high-throughput inference. For startups, labs, and enterprise AI teams, neoclouds can reduce the friction of getting from experiment to production by removing layers of platform complexity and prioritizing raw compute. The tradeoff is that hyperscalers still lead in global footprint, compliance breadth, and deep managed service ecosystems, so many teams adopt a hybrid approach—using neoclouds for heavy GPU workloads while keeping core data and platform services on a hyperscaler.
Microsoft Copilot arrived as an AI layer across Windows, Microsoft 365, and cloud consoles, but many users experienced it less as a breakthrough and more as another interface demanding attention. In Word, Outlook, and Teams it could draft and summarize, yet the output often required careful editing for tone, accuracy, and missing context—work people didn't expect to add to already busy days. In Excel and PowerPoint, where users want precision and control, Copilot sometimes felt unreliable or slower than familiar formulas, templates, and search. The assistant also raised awkward "can I paste this?" moments: uncertainty about sensitive data and organizational policies led users to withhold the very details that would make results useful. When Copilot appeared prominently in UI, some interpreted it as being pushed rather than chosen, increasing resistance. Finally, the pricing model turned mild curiosity into hard scrutiny; if Copilot only saves a few minutes occasionally, a per‑user monthly fee looks like paying for prompts plus extra proofreading. The net effect was skepticism: helpful in pockets, but not essential, not trusted enough for critical work, and not compelling enough to budget for at scale. Adoption stalled where training was thin, and the benefit story never became personal enough.
A surge in fraudulent cloud mining schemes is pulling in unsuspecting investors with promises of guaranteed, sky-high returns—claims that regulators stress are not just too good to be true, but technically impossible. These scams mimic the model of legitimate cloud mining, where users lease real hashing power from authentic data centers, but instead operate like Ponzi schemes: returns are paid with the money of new recruits rather than genuine crypto mining profits. Hallmarks of these frauds include fixed daily payouts, multi-level marketing ploys, and cloned websites that present a façade of sophistication. Meanwhile, global regulators are stepping up enforcement, with the SEC recently securing a $46 million judgment against a major scam operator and exchanges like Binance actively freezing assets tied to these illicit activities. At the same time, transparency-focused regulations such as the EU's Digital Operational Resilience Act are setting standards only legitimate providers can meet, making it easier to spot and stop fakes. This video exposes the red flags every investor should know, explains how authentic cloud mining really works—including what "hashing power" actually means—and arms viewers with tools to separate financial opportunity from crypto fakery.
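A quick way to see why "guaranteed fixed daily payouts" are a red flag is the basic economics of hashing power: your expected share of daily coin issuance is proportional to your share of total network hashrate, which fluctuates constantly. The sketch below uses purely illustrative numbers (the hashrates, block cadence, and reward are assumptions, not live network data), but the structure of the formula is how legitimate mining returns actually work.

```python
# Hypothetical sketch: expected daily mining revenue for leased hashing power.
# All numbers are illustrative assumptions, not real network figures.

def expected_daily_reward(my_hashrate_ths, network_hashrate_ths,
                          blocks_per_day, block_reward_coins):
    """Expected coins/day = (your share of network hashrate) * daily issuance."""
    share = my_hashrate_ths / network_hashrate_ths
    return share * blocks_per_day * block_reward_coins

# Illustrative inputs: 100 TH/s leased against a 600,000,000 TH/s network,
# 144 blocks per day, 3.125 coins per block.
coins = expected_daily_reward(100, 600_000_000, 144, 3.125)
print(f"Expected reward: {coins:.8f} coins/day")
```

Because network hashrate and coin price both move daily, any scheme promising a fixed return that never varies with these inputs cannot be paying you from real mining.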
In this video, David Linthicum delivers a blunt critique of how large enterprises mishandled cloud adoption and are now repeating the same mistakes with AI. He explains that many IT leaders treated cloud as a simple outsourcing and cost‑shifting exercise rather than a deep architectural and operating‑model transformation, baking failure in from the start. Billions were spent lifting and shifting technical debt into the cloud, only to see complexity, fragility, and run‑rate costs rise while executives declared success. Linthicum argues that these outcomes were serious enough that many leaders should have been fired, yet boards and CEOs—lacking technical literacy—rewarded them and let vendors shape the narrative. Now, the same people are running AI like another procurement program, chasing hype metrics instead of measurable business value. He shows how poor data discipline, weak governance, and vague "transformation" goals are setting up a second wave of expensive disappointment. Finally, he explores why this keeps happening: corporate cultures punish technical dissent, reward optimistic PowerPoints, and let vendors and consultants create a halo of hype around failed strategies. His core message: allowing the architects of your cloud failures to lead AI isn't innovation—it's institutionalized incompetence at massive scale.
RSAC 2026 is shaping up to be the year cybersecurity stops talking about "using AI" and starts obsessing over securing it. In this video, we break down the top conference trends emerging from early session themes and Innovation Sandbox signals: the Securing AI pivot (agent governance, inference-time protection, prompt injection, supply-chain integrity, and data leakage), Identity as the new perimeter (machine identities/NHIs eclipsing human users, phishing-resistant authentication, PKI at IoT scale), and the rise of Shadow AI as a board-level risk (discovering, inventorying, and controlling unauthorized AI apps and agents). We'll also unpack why "vibe coding" accelerates delivery while amplifying software supply chain exposure—and what actionable security inside CI/CD actually looks like in 2026, from SBOM programs and license compliance to automated dependency updates and build integrity. Finally, we connect the dots to operational resilience: when breach times can be measured in seconds, microsegmentation, lateral-movement controls, and real-time quarantine matter as much as prevention. If you're planning your RSAC agenda—or your 2026 roadmap—this is your fast, practical briefing, plus a shortlist of Innovation Sandbox finalists to watch. Subscribe for weekly security strategy takeaways, and drop a comment with the tool or trend you want us to analyze next in depth.
Jo's LinkedIn: https://www.linkedin.com/in/jopeterson1/
Jo's email: Jo@clarify360.com
Dave's LinkedIn: https://www.linkedin.com/in/davidlinthicum/
Dave's email: david@linthicumresearch.com
Top 10 Innovation Sandbox Finalists (RSAC 2026) to Watch:
@Charm Security: Agentic AI workforce to prevent scams.
@Clearly AI: AI-powered code reviews.
@Crash Override: CI/CD build security.
@Fig Security: SecOps resilience.
@Geordie AI: Security and governance for AI agents.
@Glide Identity: Next-gen authentication.
@Humanix: Stopping social engineering via behavioral AI.
@Realm Labs: Monitoring AI agent behavior.
@Token Security: Managing non-human identity (NHI).
Google Cloud Platform (GCP) lags in third place among the top cloud providers, despite impressive financial growth, due to systemic challenges and fierce competition from Amazon Web Services (AWS) and Microsoft Azure. As of Q3 2023, GCP generated USD 8.4 billion in revenue, significantly trailing AWS's USD 23.1 billion and Azure's estimated USD 24 billion, reflecting its smaller market share of about 11% compared to AWS's 31% and Azure's 25%. GCP entered the market later, missing the early adoption wave that entrenched its rivals, and struggles with enterprise trust due to a less comprehensive hybrid cloud strategy and a smaller ecosystem of third-party integrations. While Google excels in AI, machine learning, and data analytics—key differentiators—it lacks the breadth of industry-specific solutions and developer tools that AWS and Azure offer. Additionally, AWS benefits from Amazon's e-commerce-driven infrastructure, and Azure leverages Microsoft's enterprise software legacy, creating loyalty GCP can't easily replicate. Without significant strides in addressing enterprise needs, expanding partnerships, and accelerating innovation in hybrid environments, GCP is poised to remain in third place, unable to close the gap with its more established competitors in the near future.
For years, big consulting firms have been selling "cloud and AI strategy" as if it were a product you can just buy off the shelf. In 2026, the jig is up. This video is your no‑nonsense buyer's guide to how large consulting firms should be engaged for cloud and AI work—what to let them do, what to keep in‑house, and where you're most likely to get burned. We'll walk through the real incentives behind those "preferred partner" AI cloud recommendations, how vendor alliances quietly shape your architecture, and why so many roadmaps end up looking like a sales deck instead of an operating model. You'll learn how to structure engagements so you get industrial‑scale execution—landing zones, governance, migrations—without handing over your strategy, your data, or your future. We'll cover the critical questions to ask before you sign, the red flags in proposals and SOWs, and how to demand deliverables that leave you stronger and more independent, not permanently dependent on the same firm. If you're a CIO, CTO, architect, or tech leader about to bring in a big consultancy for cloud or AI, watch this first. It might change how you buy—and what you're willing to buy.
Oracle is reportedly tied to roughly USD 56 billion in AI data-center financing, and Wall Street is treating it like a stress test, not a victory lap. In this video I break down why "build it and they will come" can turn into "borrow it and you will bleed." Data centers are fixed-cost monsters: power commitments, depreciation, and interest expense don't care if enterprise customers take six quarters to migrate workloads. If demand ramps slower than supply, Oracle's only lever is discounting—lower prices to fill empty capacity—which can crush margins right when debt service rises. That's how overbuilds become spirals: weaker cash flow leads to tighter financing, tighter financing forces cuts, cuts weaken competitiveness, and the cycle feeds itself. I'll also compare Oracle's bet to the broader AI infrastructure boom—other builders using heavy leverage—and explain why a sector-wide capacity glut could trigger a price war. If you're an investor, operator, or just tired of AI hype, this is the cold-water analysis: what could go wrong, how it snowballs, and what signals to watch next. We'll talk contract reality, utilization math, and why "strategic capex" can become a balance-sheet hostage situation. Plus: the red flags—syndication strain, downgrades, and sudden price cuts.
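The utilization math described above can be made concrete with a toy model. Every number below is an assumption for illustration (capacity, prices, and costs are not Oracle's actual figures); the structural point is that with large fixed costs, operating margin swings violently with utilization, and discounting to fill empty capacity can turn a profitable quarter into a deep loss.

```python
# Hypothetical sketch of data-center operating margin vs. utilization.
# All inputs are illustrative assumptions, not Oracle's actual figures.

def operating_margin(utilization, price_per_gpu_hour,
                     gpu_hours_capacity, fixed_costs, variable_cost_per_hour):
    """Monthly margin: revenue from sold GPU-hours minus fixed + variable costs."""
    sold_hours = utilization * gpu_hours_capacity
    revenue = sold_hours * price_per_gpu_hour
    costs = fixed_costs + sold_hours * variable_cost_per_hour
    return revenue - costs

CAPACITY = 1_000_000   # assumed GPU-hours available per month
FIXED = 1_500_000      # assumed $/month: debt service, power commitments, depreciation
VARIABLE = 0.40        # assumed $ per sold GPU-hour

# Healthy scenario: 90% utilization at full price.
print(operating_margin(0.90, 2.50, CAPACITY, FIXED, VARIABLE))   # profit

# Overbuild scenario: 50% utilization, discounted to attract demand.
print(operating_margin(0.50, 1.80, CAPACITY, FIXED, VARIABLE))   # loss
```

In this sketch the same facility flips from roughly a $390K monthly profit to an $800K monthly loss: the fixed costs don't shrink when demand lags, which is exactly the discount-spiral mechanism the video describes.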
In this crucial video, cloud security leader David Linthicum exposes a troubling statistic: 61 percent of cloud security incidents are completely preventable. Drawing on the latest security research, David reveals the most common—and avoidable—mistakes that leave enterprises vulnerable to cyberattacks, data leaks, and compliance violations. He breaks down the top culprits, from misconfigured cloud environments and weak access controls to lapses in ongoing monitoring and lack of employee training. David explains why organizations often falter on basic security hygiene and shows how these oversights create easy targets for attackers. Importantly, he translates these findings into actionable solutions. Viewers will learn why regular audits, continuous education, and automated security tools are critical for reducing risk, and how building a security-first culture can close the door on costly incidents. Packed with practical advice, this video gives IT leaders, security professionals, and business executives a blueprint for dramatically improving their organization's cloud security. Don't let your company become another statistic—discover how to avoid the 61 percent of incidents that should never happen, and transform the cloud from a weak spot into your enterprise's strongest line of defense.
Why did Microsoft stock drop even after a headline beat? In this episode, cloud analyst David Linthicum breaks down the market's "beat-and-drop" reaction to Microsoft's latest earnings and what it signals about Azure, AI, and hyperscaler spending. He explains how expectations for a clear cloud re-acceleration collided with guidance that sounded more like "we're investing ahead of demand," raising concerns about capital intensity and near-term margins. Linthicum walks through the optics around Azure growth, capacity build-outs for AI training and inference, and why investors are increasingly sensitive to capex without fast operating leverage. The conversation also explores a structural shift in enterprise cloud choices—hybrid, private and sovereign cloud, colocation, and managed service providers—and how workload economics (egress, governance, and consumption creep) can make public cloud less attractive for steady-state compute. Finally, he balances the bear case with the bull case: Microsoft's distribution advantage across Microsoft 365, security, and developer tools, and the possibility that today's spend is strategic moat-building. If you want a clearer framework for reading cloud earnings, this video delivers. You'll learn what metrics to watch next quarter, how to interpret "capacity constraints" versus demand, and where AI monetization may show up first in pricing, usage, and margins.
The tech industry is brilliant at building new things—and terrible at admitting when it gets them wrong. In this video, I break down why our predictions about cloud, AI, big data, blockchain, metaverse, and more so often miss reality by a mile. From wildly optimistic analyst forecasts (including early cloud growth predictions that were way off) to vendor-driven hype cycles, we've built a system that rewards confidence, not accuracy. I walk through concrete examples where the narrative sounded irresistible, the slideware looked perfect, and the pilots seemed promising—but large enterprises never followed at scale. The core problem isn't intelligence or innovation; it's confirmation bias, misaligned incentives, and a complete underestimation of real-world constraints: legacy systems, budgets, regulation, skills, and risk. You'll learn how to recognize the telltale signs of hype, how to separate "cool demo" from "sustainable value," and how to ask the uncomfortable questions that cut through the noise. Whether you're a CIO, architect, engineer, or business leader, my goal is simple: help you stop getting pushed around by tech narratives—and start making grounded, reality-based decisions. If you're tired of being sold "the future" that never quite arrives, this video is for you.






















