
Dave Linthicum Is Not AI

Author: David Linthicum


Description

Welcome to "Dave is not AI." I'm David Linthicum, and I take a skeptical look at the exploding AI marketplace. Forget the hype. We explore the true reality behind AI technology, its capabilities, and its limitations. Discover why enterprises and humans are struggling with AI today, and gain expert insights on how to best navigate a future where AI is everywhere. Join me for grounded, unbiased analysis to master the AI landscape. Because while AI might be the buzzword, clear understanding is your best strategy. Subscribe now for the real AI story.
43 Episodes
Amazon just confirmed plans to cut roughly 16,000 roles, including in AWS—another reminder that even "safe" high-tech jobs can vanish fast. If you're being pushed out (or fear you might be), this video gives you a clear, practical playbook to regain control. First: a layoff is a business event, not a personal verdict. Your job is to stabilize your mindset and your runway so you can make smart decisions quickly. Next: stop "job hunting" and start value positioning. The market doesn't pay for buzzwords; it pays for outcomes—cost removed, risk reduced, reliability improved, delivery speed increased. Then choose one lane and go deep. Panic-skill-spamming with random certs wastes time. Focus on work that survives budget pressure: FinOps and cloud cost optimization, security/governance/compliance, platform engineering and reliability, modernization with real ROI, and production-grade data engineering. Finally, treat your search like operational excellence: a 30–60–90 day plan, shipped artifacts, targeted outreach, and interview stories tied to measurable impact. If you're navigating a layoff right now, you're not alone—and you're not powerless. Subscribe for balanced AI + cloud takes.
The U.S. Federal Trade Commission (FTC) is intensifying its investigation into Microsoft's dominance in the world of business software, cloud computing, and artificial intelligence. With a sharp focus on Microsoft's widely used products like Windows, Office, and its AI Copilot, regulators are probing claims that Microsoft may be leveraging its market power to unfairly push AI offerings onto its users and lock out competitors. Subpoenas have gone out to at least six rival companies, demanding deep insight into Microsoft's licensing, bundling, and cross-platform compatibility practices.

Central to the probe is whether Microsoft's integration of AI, security, and identity services into its dominant platforms effectively stymies fair competition by making it harder for customers to use competing cloud services or alternative software. Launched under Lina Khan during the Biden Administration and continuing under the Trump Administration's FTC Chairman Andrew Ferguson, this sweeping inquiry marks one of the toughest regulatory challenges to Microsoft's grip on enterprise technology since the 1990s. As the FTC consults with various companies and industry experts, the growing scrutiny signals that regulators are prepared to confront tech giants who may be using their position to reshape user choice and the future of AI and cloud technology.
AI regulation is moving fast—and it's not happening the same way everywhere. In this video, we break down what's underway right now across the EU, UK, US (federal + states), and Canada in a clear, non-hype way. You'll learn how the EU AI Act sets a risk-based framework (from banned practices to strict controls for high-risk systems and new rules for foundation models), while the UK leans on existing regulators and principles instead of one mega "AI law." In the United States, there's still no single national AI statute, so the real action is in agency guidance, enforcement, and government procurement rules—plus a rapidly growing state-by-state patchwork. We'll also zoom into major state momentum, including California's combination of privacy governance and AI transparency/synthetic media rules, and what's developing in Florida through targeted privacy and anti-deception approaches.

Finally, we cover Canada's direction with proposed "high-impact AI" requirements (AIDA) and how privacy law already shapes AI deployment. Whether you build AI products, buy AI tools, or just want to understand where policy is headed, this is your quick map of the emerging rules of the road. Subscribe for updates as these laws evolve.
Microsoft pitched Copilot as the "everyday AI companion" that would turbocharge Windows. Then it landed like bloatware with a badge. This video breaks down how a hype-first launch, a forced Windows update rollout, and messy "everywhere at once" integration turned curiosity into backlash. When an assistant shows up in your taskbar, your search box, and your apps without asking, the question isn't "Is it cool?"—it's "Who's in control?" Add uneven answers, hallucinations, and the constant need to double-check, and the productivity promise collapses into friction. We'll look at why the average user couldn't see a clear win, why power users felt their workflow was being hijacked, and why trust debates (privacy, data, OS-level presence) hit harder than any feature list.

Finally, we'll cover the collateral damage: PC makers betting on "AI PC" narratives while Microsoft keeps shifting the goalposts. Copilot didn't just need to be smart. It needed to be optional, predictable, and genuinely useful. Microsoft shipped loud. Users demanded value. We'll replay the marketing, compare it to real-world tasks, and explain why "integrated" became "intrusive." If you've ever wondered how a feature can be both everywhere and nowhere at the same time, this is the autopsy. No sugarcoating—just receipts.
AI is rapidly turning modern marketing into a surveillance-and-optimization machine. What started with loyalty cards and basic customer databases has evolved into always-on tracking across apps, websites, and devices—feeding models that learn what people want, when they're vulnerable to buying, and how to push them toward a decision. In this video, we break down how "surveillance marketing" works in plain language: companies collect massive amounts of behavioral data, stitch it together with identity graphs and third-party sources, and use AI to target messages in real time.

Then comes the next step: dynamic pricing. Instead of one price for everyone, algorithms can adjust prices on the fly based on demand, timing, channel, device signals, and past behavior—essentially guessing what you're willing to pay. That may boost revenue, but it also creates real risks: bias, unfair outcomes, privacy exposure, and a growing "trust debt" when customers realize the system is opaque. We'll also cover why the vendor ecosystem matters—data brokers, ad platforms, CDPs, and personalization engines—and why governance is lagging behind. The takeaway: this isn't going away, but it must be architected responsibly, with limits, audits, fairness testing, and transparency.
The launch of the Amazon McKinsey Group (AMG), a high-profile partnership between AWS and McKinsey, is being presented as a game-changing initiative for enterprise-scale digital transformation. The partners tout end-to-end value, integrated teams, and billion-dollar business impact. But let's cut through the advertising: AMG's very existence highlights the growing desperation among cloud providers and consulting giants faced with the slow, challenging rollout of artificial intelligence across enterprise landscapes. These firms, each with their own vested interests, are combining efforts not to offer truly objective solutions, but to solidify their own revenue streams by locking clients into their own infrastructures and consulting engagements. This approach severely undermines impartiality, as AWS's core focus is expanding its cloud footprint, and McKinsey is monetizing its transformation playbooks. What gets sold as an "airtight" solution is, in reality, a packaged commercial bundle—engineered to deliver outcomes that benefit the sellers as much as, if not more than, their customers. Before buying into the AMG hype, business leaders need to recognize these partnerships as clever marketing moves rather than unbiased, best-of-breed solutions. Falling for these vendor-driven alliances can cost organizations flexibility, objectivity, and, ultimately, the agility required to thrive in an era of rapid technological change.
I've spent decades watching enterprises adopt technology, and the pattern is always the same: innovation only creates growth when it reduces friction and increases trust. The automotive industry is pushing AI into the cabin as if "more intelligence" automatically means "more demand." But buyers don't purchase abstractions—they purchase outcomes. Right now, much of in-car AI adds complexity to routine tasks, introduces unpredictable behavior, and shifts capabilities behind subscriptions and post-sale updates. That's not a value story; it's a risk story.

What's worse is that automakers are treating the car like a software platform while customers still expect a durable product. If a UI changes every few months, or a feature degrades when connectivity is weak, the car feels less reliable—even if the drivetrain is excellent. And when the AI fails in basic moments—navigation, calling, climate control—people don't think "early adopter." They think "I overpaid." So instead of creating incremental sales, today's AI often inflates cost, increases buyer hesitation, and drives shoppers toward simpler alternatives. The industry needs fewer demos and more dependable, measurable utility.
In this video, I'm going to take you back to 1985, when building AI meant rolling up your sleeves and encoding expertise by hand. I'll tell a personal story from that era—using Prolog, Lisp, and Borland M1 to create rule-based systems that could make decisions in the real world, long before the cloud and GPUs made "intelligence" feel instant. Then we'll jump to 2026, where AI is defined by foundation models, tool-using agents, and systems that learn from enormous datasets rather than just following explicit rules. You'll see what we gained—speed, scale, and the ability to work with messy language and unstructured information—and what we gave up, including some of the determinism and straightforward explainability of classic expert systems. Finally, I'll lay out a practical view of where the industry is headed: the most valuable architectures don't pick sides, they combine modern models with governance, evaluation, and good old-fashioned business logic to deliver outcomes you can trust.  
Windows 11 can run local AI, but in real day-to-day use it often feels like it's working against you—especially once you start stacking multiple AI tools, projects, and installs. What I found is that Linux generally delivers a smoother "AI desktop" experience: setups are more straightforward, common AI instructions match what you're actually running, and GPU-accelerated apps tend to behave more consistently. The result is less time spent troubleshooting and more time getting outputs.

On Windows 11, the biggest pain points showed up around friction and interruptions—extra steps during installs, more chances for version mismatches, and occasional driver or update moments that break a working setup. Even when performance is similar, the overall workflow can feel slower because you're dealing with more overhead and more "little problems" that add up. Linux stood out for predictability: once things were working, they stayed working. Tools were easier to manage, projects were easier to separate, and the system felt more responsive while running AI tasks alongside normal desktop work. If you're building a desktop or laptop mainly to run AI locally, this video explains why Linux often ends up being the more reliable, less stressful choice—and how to decide if it makes sense for you.
AI agents are the new buzzword in enterprise tech, but the real question isn't "can we build them?"—it's "can we actually sell them in a way enterprises will trust and fund?" This video, from a Dave Linthicum-style vantage point, cuts through the marketing gloss and treats agents as what they really are: autonomous software systems wired into messy, mission-critical environments. We unpack what an AI agent actually is, how it differs from a simple chatbot or workflow, and what it takes architecturally to move from a cool demo to a production-grade capability. From there, we tackle the harder part: the business model. Who's really buying agents, and what are they expecting to get—labor replacement, outcome guarantees, or just experimental toys? We look at why agent marketplaces are overhyped, why domain specificity and deep integration matter more than model choice, and why trust, governance, and accountability will determine who makes money in this space. If you're wondering whether "AI agents" are the next big product category or just another repackaged services play, this video gives you a brutally honest, enterprise-centric take.
Windows 11 wasn't "the future"—it was a forced pivot. Microsoft took an OS people relied on for speed, flexibility, and control, then locked the door behind TPM 2.0, Secure Boot, and arbitrary CPU lists that stranded millions of perfectly good PCs. And for what? A redesigned UI that's less customizable, a Start Menu that feels like a billboard, and a setup flow that tries to drag Home users into an always-online Microsoft account whether they want it or not. Then comes the real point: Windows 11 increasingly feels like a platform for Microsoft's priorities—cloud services, Edge, Copilot, and AI-first features—instead of a tool built around the user. Privacy concerns haven't eased either, with telemetry worries and controversial features like Recall raising the question: how much of your desktop is yours, and how much is being watched, indexed, and monetized? Meanwhile, everyday annoyances stack up: File Explorer weirdness, inconsistent performance, and "updates" that often add friction instead of fixing fundamentals. In this video, we break down why Windows 11's biggest failure isn't one bug or one design choice—it's the message behind it: comply, subscribe, and get out of the way.  
Everyone's hyped about AI breakthroughs—but almost nobody is talking about the bill that's being handed to normal people. In this video, we break down how the AI gold rush is quietly driving up the price of basic hardware: GPUs, RAM, SSDs, and even CPUs. Hyperscalers are signing multi‑billion‑dollar contracts and buying entire foundry runs of chips, and that doesn't just drain supply—it resets the global price floor for everyone else. The result? Gamers, PC builders, small IT shops, and indie ML labs are all forced to pay more for the same components they used to buy a few years ago. We'll look at how this "micro‑inflation" shows up as an extra 10–20% on a part here and there, and how that snowballs into a serious hidden tax across a full build or refresh cycle. This is the new two‑tier hardware market: trillion‑dollar AI giants at the top, and everyone else fighting over overpriced scraps. If you care about hardware, open ecosystems, or just not getting screwed by invisible market dynamics, this one's for you.  
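The "snowball" effect described above is just compounding percentages across a parts list. Here's a back-of-the-envelope sketch of how it adds up; the part names, prices, and uplift rates are illustrative assumptions, not quoted market data:

```python
# Hypothetical PC build: baseline part prices (USD) vs. AI-demand-inflated prices.
# All figures below are made-up examples to show the arithmetic, nothing more.
baseline = {"GPU": 600, "CPU": 350, "RAM": 120, "SSD": 100, "PSU/board/etc.": 330}
uplift = {"GPU": 0.20, "CPU": 0.10, "RAM": 0.18, "SSD": 0.15, "PSU/board/etc.": 0.05}

# Apply each part's price bump individually.
inflated = {part: price * (1 + uplift[part]) for part, price in baseline.items()}

base_total = sum(baseline.values())
new_total = sum(inflated.values())
extra = new_total - base_total  # the hidden "AI tax" on the whole build

print(f"Baseline build: ${base_total:,.0f}")
print(f"Inflated build: ${new_total:,.0f}")
print(f"Hidden AI tax:  ${extra:,.0f} ({extra / base_total:.1%})")
```

With these sample numbers, 5–20% bumps on individual parts turn into roughly a 14% premium on the whole build—which is the "micro-inflation" point: no single line item looks outrageous, but the total does.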
Is AI coming for your job… or not yet? In this video, I break down a simple test to understand how exposed your role really is, based on the patterns of work AI is best at replacing. Instead of vague hype, we'll look at concrete signals that your job might be in the danger zone. You'll see the core pattern behind high‑risk roles: predictable, rules-based tasks, high repetition, and "good enough" outputs where speed and cost matter more than originality. I'll walk through real examples like customer support, basic content production, document processing, and reporting roles—and explain why they're so easy for AI to swallow. Then we'll zoom out: how many hours of your week look like this? How much of your workflow could AI already do end-to-end if your company really pushed it? By the end, you'll know whether AI is likely to replace big chunks of your job, merely assist you, or force you to move up the value chain. Use this as a wake-up call—not to panic, but to start deliberately shifting your skills toward judgment, relationships, and work that doesn't look like a template.  
I developed this cost comparison to ground the AI discussion in economic reality instead of assumptions and marketing slides. Too often, generative and agentic AI are framed as inevitable next steps—something you add "on top" of existing systems—as if the only risk is moving too slowly. In truth, these approaches introduce substantial new costs: specialized skills, LLM usage, vector infrastructure, orchestration platforms, and ongoing governance. By putting three approaches—traditional development, generative AI–enhanced systems, and agentic AI solutions—side by side with approximate Year‑1 costs, I wanted to make that premium obvious and impossible to ignore.

The calculation is intentionally simple: if AI costs more, it must do more, and that "more" has to be expressed in concrete business terms. It's designed to prove that the right question is not, "Can we use AI in inventory control?" but, "When does AI outperform a well‑engineered traditional system on measurable outcomes such as labor savings, error reduction, margin improvement, or resilience?" This framework forces enterprises to build a defensible business case, set clear KPIs, and justify AI as one investment among many—not as a foregone conclusion.
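A side-by-side framing like this can be sketched in a few lines. The cost categories and dollar figures below are placeholder assumptions for illustration, not the episode's actual numbers:

```python
# Toy Year-1 cost model for three delivery approaches.
# Every line item and dollar amount is an illustrative assumption.
approaches = {
    "traditional":   {"build": 250_000, "run": 50_000,  "skills": 0,       "governance": 10_000},
    "generative_ai": {"build": 300_000, "run": 120_000, "skills": 60_000,  "governance": 40_000},
    "agentic_ai":    {"build": 400_000, "run": 200_000, "skills": 100_000, "governance": 80_000},
}

def year_one_cost(items: dict) -> int:
    """Sum all Year-1 cost categories for one approach."""
    return sum(items.values())

baseline = year_one_cost(approaches["traditional"])
for name, items in approaches.items():
    total = year_one_cost(items)
    premium = total - baseline  # extra spend vs. a traditional build
    print(f"{name:>13}: ${total:>9,}  (premium over traditional: ${premium:,})")
```

The premium column is the number the framework says must be matched by measurable outcomes—labor savings, error reduction, margin improvement—before the AI option is defensible.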
In this video, I break down why the metaverse never truly crossed the chasm from early adopters to the mainstream, despite billions in investment and nonstop hype. Looking at it through the lens of adoption curves, I explain how the core experience failed to solve any urgent, everyday problem for most people. Clunky headsets, motion sickness, empty social spaces, and awkward onboarding created huge friction, while the payoff was vague: low‑res meetings as cartoon torsos and gimmicky "future of work" demos.

I argue that Big Tech tried to decree a paradigm shift from the top down instead of letting real use cases and behaviors emerge organically. Meanwhile, mobile, short‑form content, AI, and lightweight creator tools quietly ate the world by delivering obvious gains in productivity, creativity, and convenience. Capital and talent simply followed the real value. If you're interested in product–market fit, adoption curves, and why some "inevitable" technologies stall out, this is a deep dive into what actually went wrong—and what it teaches us about the next wave of platforms. Perfect for founders, product managers, investors, and anyone trying to separate signal from hype when evaluating new technological frontiers and platform bets.
AI is suddenly everywhere. Not just in your phone or laptop, but in pins on your shirt, glasses on your face, gadgets in your pocket, and appliances in your kitchen. Every launch promises a sci‑fi future: your phone replaced, your life automated, your routines "optimized" by artificial intelligence.

But once the hype dies down and people actually live with these devices, a different story shows up: laggy experiences, short battery life, awkward interactions in public, and features that quietly stop getting used after a few weeks. In many cases, you're paying a premium—often with a subscription on top—for something your existing phone and a couple of good apps already do better, more privately, and far more reliably.

This isn't an anti‑AI rant. AI can be genuinely useful when it disappears into tools you already use and actually saves time or unlocks something new. The problem is with "AI gadgets" that exist mainly to sell you new hardware. In this piece, I'll break down the most overrated AI devices, why they don't live up to their promises, and how to spot the difference between meaningful innovation and expensive, overhyped tech.
Artificial intelligence hasn't just changed how we work—it's rewriting the rules for how we build careers. For decades, the formula was simple: get a bachelor's degree, land a "good job," and enjoy a steady income premium over those who didn't go to college. That degree was your ticket in the door. But in an AI-first economy, that automatic advantage is disappearing. Today, AI systems can write code, generate marketing campaigns, analyze complex datasets, and handle research tasks that used to justify hiring entire layers of junior, degree-holding employees. At the same time, AI-accelerated learning platforms are making it possible to build real, marketable skills in months instead of years. Employers are responding by caring less about what's on your diploma and more about what's in your portfolio. In this video, we're going to unpack how AI is quietly devaluing the traditional four-year degree—not by making education irrelevant, but by exposing how slow, expensive, and static it often is. We'll look at what companies now prioritize, how wage dynamics are shifting, and what this all means for your next career move. Because in this new landscape, your real edge isn't the degree you earned—it's the value you can create with intelligent tools.  
In a world where productivity and information management are more critical than ever, AI-powered note-taking devices have quickly become essential tools for busy professionals. Plaud, the world's leading AI note-taking brand, is at the forefront of this transformation with its innovative lineup of devices—Plaud Note Pro, Plaud Note, and Plaud NotePin. Each product is designed to cater to unique user preferences, from studio-grade audio capture and intelligent noise isolation in the Plaud Note Pro, to balanced portability in the Plaud Note, and ultimate wearable convenience with the Plaud NotePin. Key features such as multilingual transcription in 112 languages, enterprise-grade data security, AI-driven smart summaries, and seamless workflow integration set Plaud devices apart, making them indispensable for executives, educators, clinicians, and content creators alike. Plaud also offers flexible subscription options including a feature-rich free tier and expansive paid plans to suit diverse needs. As AI note-taking rapidly redefines the way we record, distill, and share knowledge, Plaud's devices enable users to save time, stay organized, and turn every conversation into a strategic asset. Join us as we explore which Plaud product aligns best with your workflow and why this brand is making waves in the productivity space.
Big Tech's race to dominate AI is starting to look less like visionary innovation and more like a dangerous addiction. Since 2023, tech giants have poured hundreds of billions into AI infrastructure, models, and moonshot products that may not reach meaningful enterprise adoption for years. Meanwhile, the technologies that actually run businesses today—cloud platforms, core SaaS tools, security, analytics, and integrations—are being quietly deprioritized. Roadmaps slip, support thins out, and customers are nudged toward immature AI features instead of getting the reliability and improvements they actually need.

This imbalance isn't just a product strategy mistake; it's a looming revenue and trust crisis. Enterprise buyers are already feeling neglected as "legacy" products stagnate while marketing and engineering obsess over AI. In regulated and risk-averse industries, where AI adoption is inherently slow, the gap between investment and return is growing wider. That gap is where churn, budget cuts, and new competitors thrive. If Big Tech doesn't rebalance—protecting the core while building the future—it risks funding the AI revolution by eroding the very customer relationships that make it possible.
In recent years, the way we write code has transformed—with AI assistants entering our text editors, autocomplete becoming disturbingly insightful, and "vibes" taking precedence over documentation. This explosive trend, now coined "vibe coding," invites developers to follow their instincts and embrace what feels right, often with a nudge from AI, rather than adhering to time-tested engineering best practices. The allure is obvious: creativity, speed, and the thrill of letting generative models suggest the next big thing in your codebase. But as this method spreads like wildfire through tech communities and social media, serious questions arise: What happens to software quality when the pursuit of innovation means tossing out standards? Are teams sacrificing long-term reliability and maintainability for a quick hit of inspiration and AI-driven dopamine? In this video, we'll cut through the hype and examine why vibe coding, despite all its trendiness, might just be one of the riskiest impulses in our tech zeitgeist—a shortcut that could undermine collaboration, introduce hidden technical debt, and ultimately doom projects to chaos. Get ready for a brutally honest breakdown of why vibe coding is more than just a bad habit—it's a ticking time bomb for professional software development.