The Miseducation of Technology
Author: Attorney Danielle
© Attorney Danielle
Description
Technology isn’t neutral—it carries the biases of the past. Hosted by Danielle A. Davis, an attorney & tech policy expert, this podcast uncovers how technology reinforces systemic inequities while empowering you to navigate and reshape the digital world.
Each episode connects historical injustices to today’s digital landscape—from biased hiring algorithms to content suppression. More than critique, this podcast delivers expert insights & real-world strategies to help you reclaim agency in tech.
Because while "technology may have been miseducated, you do not have to be."
6 Episodes
AI isn’t just digital. It’s land. Water. Electricity. And Black neighborhoods are paying the price.

In this episode of The Miseducation of Technology, Attorney Danielle A. Davis breaks down what’s really behind Microsoft’s new “community-first” promise on AI data centers—and why that announcement didn’t come out of nowhere.

The conversation starts where most tech policy discussions don’t: with culture. In 2025, R&B singer SZA publicly questioned the environmental cost of AI—calling out energy use, pollution, and why Black cities like Memphis keep ending up on the receiving end. What sounded like a celebrity tweet was actually a warning rooted in lived experience.

Because while AI is often sold as “cloud-based” and abstract, for many Black communities it is physical, loud, and permanent—arriving in the form of massive data centers that consume enormous amounts of power and water, strain local grids, and reshape land use with little community input.

So why did Microsoft suddenly promise to:
• Cover electricity costs
• Reduce and replenish water use
• Stop asking for tax breaks
• Invest in local training and education

And more importantly—what does that actually solve… and what does it leave untouched?

In this episode, we break it all down in four parts:
1️⃣ What AI data centers really are—and why AI requires this scale of physical infrastructure
2️⃣ How these facilities impact Black communities environmentally, economically, and politically
3️⃣ Why Microsoft made its “community-first” announcement now—and what pressure forced it
4️⃣ What Black communities can do before harm occurs to demand transparency and accountability

This isn’t an anti-technology conversation. It’s a power, infrastructure, and accountability conversation. Because data centers don’t just power AI—they extract value from real communities to fuel digital convenience elsewhere. And once you understand that, you start asking better questions: Who benefits? Who pays? And who gets to decide?

🎧 If you care about AI, environmental justice, infrastructure, or the future of Black communities in the digital economy—this episode is for you.

Technology may have been miseducated. But you do not have to be.

Follow, rate, and share The Miseducation of Technology to stay in the conversation.
YouTube: https://www.youtube.com/@attorneydanielle_
Instagram: @AttorneyDanielle_
LinkedIn: https://www.linkedin.com/in/danielleadriannadavis/
Website: AttorneyDanielle.AI
Newsletter: https://attorneydanielle.kit.com/
Support my work: https://buymeacoffee.com/attorneydanielle
Microsoft's announcement: https://blogs.microsoft.com/on-the-issues/2026/01/13/community-first-ai-infrastructure/
What does a high-profile celebrity documentary have to do with federal AI policy? More than you might think.

In this episode of The Miseducation of Technology, attorney and tech policy expert Danielle A. Davis, Esq. examines an uncomfortable but necessary comparison: the system of silence exposed in the Diddy documentary and the Trump Administration’s emerging approach to artificial intelligence governance.

This episode is not about celebrity misconduct. It is about how institutions respond when harm is widely understood but inconvenient to address—and how power restructures systems to keep that knowledge from producing accountability.

The Diddy documentary did not reveal new facts so much as it revealed a pattern: an industry in which assistants, executives, and gatekeepers operated within an informal but well-understood structure that discouraged intervention, absorbed harm, and protected access to power. Knowledge existed. The failure was institutional.

Danielle traces that same structural logic into current federal AI policy. The United States still lacks a comprehensive federal AI statute governing transparency, discrimination, or accountability. In that vacuum, executive actions matter. Under the Trump Administration, those actions have followed a consistent trajectory:
• narrowing the scope of legitimate oversight,
• reframing civil-rights and bias frameworks as ideological interference, and
• prioritizing speed, deployment, and industry flexibility over enforceable safeguards.

This trajectory is visible across three developments:
• The AI Action Plan, which elevates innovation and competitiveness while treating accountability mechanisms as regulatory friction
• The Executive Order on “Preventing Woke AI,” which redefines bias-mitigation and equity frameworks as threats to “neutrality,” effectively limiting what forms of harm may be acknowledged in federal AI governance
• Efforts to preempt state AI laws, despite the absence of a federal standard—seeking to block the only existing mechanisms addressing real-world algorithmic harm

Danielle situates these moves within a broader pattern of governance: harm is not denied, but it is managed out of the regulatory record. Just as the entertainment industry reorganized itself around Diddy’s power—adjusting norms, incentives, and consequences—the technology sector has recalibrated its rhetoric and priorities to align with federal signals. Commitments to fairness and accountability that were prominent under the previous administration have quietly given way to language centered on innovation, competitiveness, and avoiding “patchworks of regulation.” Not because the evidence of harm changed—but because the governing environment did.

This is not a claim of conspiracy. It is a description of institutional behavior. Grounded in concrete policy analysis, this episode connects executive actions, failed congressional moratoriums, state preemption efforts, and industry alignment into a single governing pattern: silence as a form of power—where knowledge exists, but structures prevent it from producing intervention.

The episode closes by returning to Hosea 4:6—“My people are destroyed for lack of knowledge”—not as a warning about ignorance, but as a critique of leadership that possesses knowledge and chooses not to act on it. In both the entertainment industry and AI governance, the danger is not that harm is unknown—it is that truth is systematically constrained.

If you’re interested in:
• AI policy and federal governance
• Civil rights and technology regulation
• Algorithmic accountability
• State versus federal authority in tech policy
• Why “neutral” AI is a political construction

This episode is for you.

🎧 Listen, reflect, and remember: Technology may have been miseducated—but you do not have to be.
Do you ever feel like you’re being watched? Well, you’re not imagining it. From loyalty cards to smart devices, we live in a surveillance economy where nearly every click, tap, and purchase is tracked, stored, and sold.

In this episode of The Miseducation of Technology, attorney and tech policy expert Danielle A. Davis, Esq. pulls back the curtain on how digital surveillance works, who profits from your data, and why these systems often hit Black communities hardest.

You’ll learn:
• What “ambient surveillance” really is and how it hides in plain sight
• How your data is bought, sold, and used to influence your choices
• Why bias in these systems creates unequal outcomes for Black communities
• Practical steps you can take today to protect your privacy and reclaim control

This isn’t about fear—it’s about awareness. Because once you understand how the system works, you can make smarter choices that protect your data, your future, and your community.

Resources: Visit AttorneyDanielle.ai for the companion blog post that provides all the links to the data broker opt-outs, privacy tools, VPNs, and more discussed in this episode.
What happens when the jobs Black communities have long relied on are the first to disappear in the age of automation?

Automation is transforming the labor market, but not everyone is being affected equally. In this episode, Attorney Danielle unpacks how AI, robotics, and digital systems are rapidly reshaping industries where Black workers are overrepresented—and what that means for economic access, stability, and long-term equity in the Black community.

We cover:
• The industries most at risk and why Black workers are being hit hardest
• How this wave of automation echoes past patterns of exclusion and displacement
• The structural consequences beyond job loss—including weakened access to healthcare, housing, and education
• Why this moment calls for more than awareness—and what preparation actually looks like

Danielle also walks through practical strategies for reskilling, policy change, and community investment, so Black workers aren’t left out of the future of work, but positioned to lead it.

📍 For training programs and scholarship links, read the companion blog post: “Your Guide to Reskilling in the Age of Automation.”
What is The New Jim Code, and how are AI systems automating discrimination in policing, hiring, banking, and healthcare?

In this episode of The Miseducation of Technology, Attorney Danielle takes a deep dive into The New Jim Code, a term coined by Ruha Benjamin to describe how artificial intelligence doesn’t just reflect racial bias—it automates and amplifies it, embedding discrimination into the very systems that shape our daily lives. AI is often framed as objective, but when it learns from biased data, it doesn’t correct historical injustices—it makes them worse.

Inside this episode, Danielle explores:
• What is an algorithm? Understanding how AI systems make decisions and why their neutrality is a myth.
• Facial recognition bias: AI-powered surveillance tools misidentify Black faces at alarming rates, leading to wrongful arrests like those of Robert Williams, Nijeer Parks, Michael Oliver, and Porcha Woodruff—real people whose lives were disrupted by flawed technology.
• Algorithmic redlining in banking: AI-driven lending models continue the legacy of housing discrimination, disproportionately denying loans to Black and Hispanic borrowers, even when they meet financial qualifications.
• Predictive policing: AI models trained on biased crime data fuel over-policing of Black and Brown communities, keeping them under constant surveillance while reinforcing dangerous racial stereotypes.
• AI failures in hiring and image recognition: From tools that penalize women applicants to systems that have misclassified Black faces in harmful ways, Danielle unpacks how bias seeps into technologies designed to streamline decision-making.
• AI in healthcare: A 2019 study exposed how AI used in hospitals systematically deprioritized Black patients for critical medical care, reinforcing existing disparities in the healthcare system.

But it’s not just about uncovering the harm—it’s about pushing for solutions. Attorney Danielle discusses:
• The role of diversity in AI development—why representation matters in designing fairer systems.
• The need for transparency and accountability—how tech companies must audit their AI for bias before deploying it.
• How we can take action—from advocating for ethical AI policies to staying informed about the ways technology impacts our lives.
What if everything you thought you knew about technology was wrong?

In this debut episode, Attorney Danielle A. Davis introduces The Miseducation of Technology Podcast by unpacking how digital tools reflect the same systemic biases that have shaped society for generations. She explores how AI, social media, and internet infrastructure aren’t just flawed—they’re built on historical inequities that continue to shape opportunity, visibility, and access.

Guided by Hosea 4:6—“My people are destroyed for lack of knowledge”—Danielle reveals how biased algorithms, facial recognition failures, and surveillance-driven platforms reinforce existing disparities, particularly for Black communities. She draws from Carter G. Woodson’s The Mis-Education of the Negro and Lauryn Hill’s The Miseducation of Lauryn Hill, connecting exclusion from education and knowledge in the past to today’s digital world, where flawed data and human biases are embedded in the technology we rely on daily.

This episode challenges the myth of tech neutrality, exposing how these systems don’t just mirror society—they actively shape it. But it’s not just about critique—Danielle outlines what’s ahead for the podcast, from unpacking social media regulation and algorithmic bias to practical strategies for making technology work for us—not against us.