President-elect Trump recently announced that entrepreneurs Elon Musk and Vivek Ramaswamy will lead the Department of Government Efficiency. Musk had floated the idea toward the tail end of the presidential campaign, championing a commission focused on cutting government spending and regulation. In a statement posted to Truth Social, the president-elect said DOGE would “pave the way for my administration to dismantle government bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure federal agencies.” For his part, Musk said “this will send shockwaves through the system, and anyone involved in government waste, which is a lot of people.”

Government waste has long been a focus for Republicans in Washington. The phrase “waste, fraud, and abuse” often generates a chuckle in DC circles, given how much the federal bureaucracy, government spending, and the national debt have grown despite decades of professed fiscal hawkishness. While critics of Trump and Musk are rolling their eyes at what they perceive as a toothless commission, proponents welcome the focus on government efficiency from the president-elect and the world’s richest man, and are optimistic that Musk and Ramaswamy’s business-world expertise will bring much-needed outside perspectives on how to optimize the federal government.

The Foundation for American Innovation has operated a project on government efficiency and tech modernization since 2019, and FAI fellows just published a new paper on the topic, “An Efficiency Agenda for the Executive Branch.” To discuss DOGE, the challenges of streamlining bureaucracy, how AI might play a role in the effort, and what Congress can do to help make DOGE a success, Evan is joined by Sam Hammond, Senior Economist at FAI, and Dan Lips, Head of Policy at FAI. For a quick take on FAI’s recommendations, check out Dan’s op-ed in The Hill linked here.
Donald Trump won the 2024 presidential election, Republicans won control of the Senate, and the GOP is slated to maintain control of the House. Turn on cable news and you will see pundits playing Monday-morning quarterback in the wake of this Republican trifecta: arguing about the merits of how people voted, speculating on cabinet secretaries, and pointing fingers over who deserves blame, or credit, for the results. But this is The Dynamist, not CNN. In today’s show, we focus on what the results mean for tech policy and tech politics.

There are ongoing antitrust cases against Meta, Google, Apple, and Amazon, and investigations into Microsoft, OpenAI, and Nvidia. How might the new president affect those cases? Congress is considering legislation to protect children from the harms of social media. Will we see action in the lame-duck session, or will the issue get kicked to January when the new Congress settles in? What about AI? Trump has vowed to repeal Biden’s Executive Order on artificial intelligence. What, if anything, might replace it? And for those in Silicon Valley who supported Trump, from Elon Musk to Peter Thiel, how might they wield influence in the new administration?

Evan is joined by Nathan Leamer, CEO of Fixed Gear Strategies and Executive Director of Digital First Project, and Ellen Satterwhite, Senior Director at Invariant, a government relations and strategic communications firm in DC. Both Nathan and Ellen previously served in government at the Federal Communications Commission—Nathan under President Trump and Ellen under President Obama.
When people hear “quantum physics,” they often think of sci-fi movies using terms like “quantum realm” to explain away the impossible. But today we’re talking about quantum computing, which has moved beyond science fiction into reality. Companies like IBM and Google are racing to build machines that could transform medicine, energy storage, and our understanding of the universe.

But there’s a catch: these same computers could potentially break most of the security protecting our digital lives, from WhatsApp messages to bank transfers to military secrets. To address this threat, the National Institute of Standards and Technology recently released quantum-safe cryptography standards, while new government mandates are pushing federal agencies to upgrade their security before quantum systems become cryptographically relevant—in other words, powerful enough to break today’s encryption.

To help us understand both the promise and peril of quantum computing, we’re joined by Travis Scholten, Technical Lead in the Public Sector at IBM and a former quantum computing researcher at the company. He’s also a former policy hacker at FAI, the author of the Quantum Stack newsletter, and co-author of a white paper on the benefits and risks of quantum computers.
When voters head to the polls next week, tech policy won’t be top of mind—polling shows immigration, the economy, abortion, and democracy are the primary concerns. Yet Silicon Valley’s billionaire class is playing an outsized role in this election, throwing millions at candidates and super PACs while offering competing visions for America’s technological future.

The tech industry is in a much different place in 2024 than in past elections. Big Tech firms, which once enjoyed minimal government oversight, now face a gauntlet of regulatory challenges—from data privacy laws to antitrust lawsuits. While some tech leaders are hedging their bets between candidates, others are going all in for Harris or Trump—candidates who offer different, if not fully developed, approaches to regulation and innovation.

Trump’s vision emphasizes a return to American technological greatness with minimal government interference, attracting support from figures like Elon Musk and Marc Andreessen despite Silicon Valley’s traditionally Democratic lean. Harris presents a more managed approach, a generally pro-innovation stance tempered by a desire for government to help shape AI and other tech outcomes. Democratic donors like Mark Cuban and Reid Hoffman are backing Harris while hoping she’ll soften Biden’s tough antitrust stance. Meanwhile, crypto billionaires are flexing their political muscle, working to unseat skeptics in Congress after years of scrutiny under Biden’s financial regulators.

What are these competing visions for technology, and how would each candidate approach tech policy if elected? Will 2024 reshape the relationship between Silicon Valley and Washington? Evan is joined by Derek Robertson, a veteran tech policy writer who authors the Digital Future Daily newsletter for Politico.

*Correction: The audio clip of Trump was incorrectly attributed to his appearance on the Joe Rogan Experience. The audio is from Trump’s appearance on the Hugh Hewitt Show.
Over the past few years, Elon Musk’s political evolution has been arguably as rapid and disruptive as one of his tech ventures. He has transformed from a political moderate into a vocal proponent of Donald Trump and the MAGA movement, and his outspokenness on issues like illegal immigration makes him an outlier among tech entrepreneurs and CEOs.

Musk’s increasing political involvement has added a layer of scrutiny to his businesses, particularly as SpaceX aims to secure more contracts and regulatory permissions. Labor tensions also loom, with Tesla facing unionization efforts and accusations of unfair labor practices, adding a wrinkle to an election in which both presidential candidates are vying for the labor vote amid several high-profile strikes this year.

Through all this, Musk’s companies—SpaceX, Tesla, and X—are pressing forward, but the stakes have arguably never been higher, with regulatory bodies and the court of public opinion keeping close watch. Many conservatives have embraced Musk as a Randian hero of sorts, a champion of free speech and innovation. Others sound a note of caution, warning that his emphasis on “efficiency” could undermine certain conservative values, and question whether his record on labor and China is worth celebrating. So, should conservatives embrace, or resist, Musk-ification?

Evan is joined by Chris Griswold, Policy Director at American Compass, a New Right think tank based in DC. Check out his recent piece, “Conservatives Must Resist Musk-ification.” Previously, he served as an advisor to U.S. Senator Marco Rubio, where he focused on innovation, small business, and entrepreneurship.
Have tech companies become more powerful than governments? As the size and reach of firms like Google and Apple have grown, there is mounting concern that these multi-trillion-dollar companies are too powerful and have started replacing important government functions.

The products and services of these tech giants are ubiquitous pillars of modern life. Governments and businesses increasingly rely on cloud services like Microsoft Azure and Amazon Web Services to function. Elon Musk’s Starlink has provided internet access in the flood zones of North Carolina and on the battlefields of Ukraine. Firms like Palantir are integrating cutting-edge AI into national defense systems.

In response to these rapid changes, and the concerns they raise, regulators in Europe and the U.S. have proposed various measures, from antitrust actions to new legislation like the EU’s AI Act. Critics warn that overzealous regulation could stifle the very innovation that has driven economic growth and technological advancement, potentially ceding Western tech leadership to China. Others, like our guest, argue that these actions to rein in tech don’t go nearly far enough, and that governments must do more to take back the power that, she argues, tech companies have seized from nation-states.

Evan and Luke are joined by Marietje Schaake, a former Member of the European Parliament and current fellow at Stanford’s Cyber Policy Center. She is the author of The Tech Coup: How to Save Democracy from Silicon Valley. You can read her op-ed in Foreign Affairs summarizing the book.
On September 29th, Governor Newsom vetoed SB 1047, a controversial bill aimed at heading off catastrophic risks from large AI models. We previously covered the bill on The Dynamist in episode 64. In a statement, Newsom said the bill applies “stringent standards to even the most basic functions” and that he does “not believe this is the best approach to protecting the public from real threats posed by the technology.” Senator Scott Wiener, the bill’s author, responded: “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers[.]”

The bill had passed the California Senate back in August by a vote of 30-9, after fierce debate between AI companies big and small and the researchers and advocates who fear a catastrophic AI event. Proponents want to get ahead of AI-enabled cyberattacks, AI weapons development, and doomsday scenarios by requiring developers to implement safety protocols and holding them liable if they fail to do so. Opponents argue that the bill would stifle innovation in California, calling it an “assault on open source” and a “harm to the budding AI ecosystem.”

Setting aside the merits of the legislation, this was arguably the first major political fight over AI in the U.S. in which competing interests fought all the way to the governor’s desk, each attempting to sway the pen of Governor Newsom. The story featured a cast of characters from California Democrats like Nancy Pelosi, to billionaires like Elon Musk, to major companies like Google and OpenAI. What does this battle say about who holds sway in emerging AI politics? What are the factions and alignments? And what does it all mean for next year in California and beyond?

Evan is joined by Sam Hammond, Senior Economist at FAI and author of the Substack Second Best, and Dean Ball, a research fellow at the Mercatus Center, author of the Substack Hyperdimensional, and a non-resident fellow at FAI.
Since the advent of platforms like Uber, Instacart, and DoorDash, the so-called gig economy has been intertwined with technology. While the apps have no doubt created loads of opportunity for people seeking flexible work on their own schedules, they have been lambasted by critics who say they don’t provide drivers and grocery shoppers with a minimum wage and health benefits.

This tech-labor debate has largely played out in state legislatures and in the courts. Voters have weighed in as well: gig companies like DoorDash and Lyft spent some $200 million to win the Prop 22 ballot initiative in California, which exempted their workers from new labor laws. Should Uber be forced to provide benefits to employees? Or should government stay out and let these markets continue to operate?

As labor leaders and progressive lawmakers continue to battle with the companies, and as governments, companies, and unions struggle to apply old principles to an increasingly digital economy, some argue for a third way, including our guest today. Wingham Rowan is the founder and managing director of Modern Markets for All, a non-profit that develops market infrastructure for people working outside of traditional 9-to-5 jobs. Prior to that, he was a TV host and producer at the BBC. Read more about his work at PeoplesCapitalism.org.
When the average person thinks of nuclear energy, there’s a good chance they’re thinking in terms influenced by pop culture—Homer Simpson’s union job at the Springfield plant, or the HBO miniseries Chernobyl, which dramatized the world’s worst meltdown.

For all its promise in the mid-20th century, U.S. nuclear energy largely stalled in the 1970s and 80s. While public anxiety over safety played a role, experts have pointed to the hefty cost of building plants and to poor regulatory and policy decisions as having had more impact. But in recent years, as demand for low-carbon energy surges and companies like OpenAI, Microsoft, and Google burn through energy to train artificial intelligence, there is renewed interest in making nuclear work in this century.

Still, concerns over cost and safety remain, and even among proponents of nuclear energy there is a robust debate about exactly how to approach future builds: whether to rely on conventional designs or to hold off until new research potentially yields smaller, more cost-effective reactors. What is the state of nuclear power in the U.S. and around the world today? What policies could shape its future? And how might AI, other market dynamics, geopolitics, and national security concerns affect the debate and its outcomes?

Evan is joined by Emmet Penney, the creator of Nuclear Barbarians, a newsletter and podcast about industrial history and energy politics, and a contributing editor at COMPACT magazine. Thomas Hochman, Policy Manager at FAI, also joins. You can read Emmet’s recent piece on why nuclear energy is a winning issue for the populist GOP here. You can read Thomas’s piece for The New Atlantis on the “nuclear renaissance” here, and his writeup of the ADVANCE Act here.
The recent riots in the United Kingdom raise new questions about online free speech and misinformation. Following the murder of three children in Southport, England, false rumors about the killer’s identity and religion spread across social media, igniting simmering resentment over the British government’s handling of immigration in recent years. X, formerly Twitter, has come under fire for allowing the rumors to spread, and the company’s owner, Elon Musk, has publicly sparred with British politicians and European Union regulators over the issue.

The incident is the latest flashpoint in an ongoing debate, abroad and in the U.S., about free speech and the real-world impact of online misinformation. In the U.S., politicians have griped for years about the content policies of major platforms like YouTube and Facebook—generally with conservatives complaining that the companies are too censorious and liberals bemoaning that they don’t take down enough misinformation and hate speech. Where should the line be? Is it possible for platforms to respect free expression while removing “harmful content” and misinformation? Who gets to decide what is true and false, and what role, if any, should the government play?

Evan is joined by Renée DiResta, who studies and writes about adversarial abuse online. Previously, she was a research manager at the Stanford Internet Observatory, where she researched and investigated online political speech and foreign influence campaigns. She is the author of Invisible Rulers: The People Who Turn Lies into Reality. Read her recent op-ed in the New York Times here.
Minnesota Governor Tim Walz has made headlines for being picked as Vice President Kamala Harris’s running mate. One underreported aspect of his record is signing Minnesota’s first “right to repair” law last year; the bill took effect last month.

The concept sounds simple enough: if you buy something like a phone or a car, you should have the right to fix it. But as our world becomes more digitized, doing it yourself, or having your devices repaired by third-party mechanics or cell phone shops, can be complicated. Everything from opening a car door to adjusting your refrigerator can now involve complex computer code, giving manufacturers more control over whether, and how, devices can be repaired. Frustrations over this dynamic sparked the “right to repair” movement, which advocates for legislation requiring manufacturers to provide parts, tools, and guides to consumers and third parties. While powerful companies like John Deere and Apple have cited cybersecurity and safety concerns about farmers and iPhone users tinkering with their devices, right-to-repair advocates say unrepairable products undermine consumer rights, lead to higher prices and worse quality, and harm the small businesses that provide third-party repair services.

As more states adopt and debate these laws, which industries will be affected? And will the federal government consider imposing the policy nationwide? Evan and Luke are joined by Kyle Wiens, perhaps the most vocal proponent of the right to repair in the U.S. Wiens is the co-founder and CEO of iFixit, which sells repair parts and tools and provides free how-to guides online. Read Kyle’s writing on repair rights and copyright in Wired, and his article in The Atlantic on how his grandfather helped influence his thinking. See Luke’s piece in Reason on how the debate affects agriculture.
OpenAI unleashed a controversy when the famed maker of ChatGPT debuted its new voice assistant, Sky. The problem? To many, the voice sounded eerily similar to that of Scarlett Johansson, who, ironically, had voiced the virtual assistant in the dystopian movie Her, with which a man played by Joaquin Phoenix develops a romantic relationship. While OpenAI claimed that Sky’s voice belonged to a different actress, the company pulled it shortly after launch amid the furor from Johansson and the creative community. But a flame had already been lit in the halls of Congress, as the controversy inspired multiple pieces of legislation addressing serious questions raised by generative AI.

Should AI companies be allowed to train their models without compensating artists? What exactly is “fair use” when it comes to AI training and copyright? What are the moral and ethical implications of training AI products on human-created works when those products could compete with, or replace, those same humans? And what are the potential consequences of regulation in this area, especially as the U.S. government wants to beat China in the race for global AI supremacy?

Evan is joined by Josh Levine, Tech Policy Manager at FAI, and Luke Hogg, Director of Policy and Outreach at FAI. Read Josh’s piece on the COPIED Act here, and Luke’s piece on the NO AI FRAUD Act here.
Trump’s pick of J.D. Vance as his running mate is seen by many as the culmination of a years-long realignment of Republican and conservative politics—away from trickle-down economics and toward a more populist, worker-oriented direction. While the pick ushered in a flood of reactions and think pieces, it’s unclear at this stage what Vance’s impact would truly be in a second Trump term. Will Vance be able to overcome some of Trump’s more establishment-friendly positions on taxes and regulation? Will he advocate that Trump continue some of Biden’s tech policies, particularly the administration’s actions against companies like Google, Amazon, and Apple? And how might Vance influence policies on high-tech manufacturing, defense technology, and artificial intelligence?

Evan is joined by Oren Cass, Chief Economist and Founder of American Compass and the author of The Once and Future Worker: A Vision for the Renewal of Work in America. Read his recent op-ed in the New York Times on populism and his recent piece in the Financial Times on Vance. Subscribe to his Substack, “Understanding America.”

Evan is also joined by Marshall Kosloff, co-host of The Realignment, an FAI-sponsored podcast that has been chronicling the shifting politics of the U.S. for several years, and by Jon Askonas, professor of politics at Catholic University and senior fellow at FAI.
On July 1, the Supreme Court issued a 9-0 ruling in NetChoice v. Moody, a case concerning Florida’s and Texas’s social media laws, which aim to prevent companies like Facebook and YouTube from discriminating against users based on their political beliefs. The Court essentially kicked the cases back down to the lower courts, the Fifth and Eleventh Circuits, because they hadn’t fully explored the First Amendment implications of the laws, including how the laws might affect direct messages or services like Venmo and Uber. While both sides declared victory, the laws remain enjoined until the lower courts complete their work on remand, and a majority of justices seemed to doubt that states could regulate the news feeds and content algorithms of social media companies without violating the firms’ First Amendment rights. Other justices, like Samuel Alito, argued that the ruling is narrow and leaves the door open for states to try to regulate content moderation.

So how will the lower courts proceed? Will any parts of the Florida and Texas laws stand? What will the ruling mean for the future of social media regulation? And could it have spillover effects in other areas of tech regulation, such as efforts to restrict social media for children or to impose privacy rules?

Evan and Luke are joined by Daphne Keller, Platform Regulation Director at Stanford’s Cyber Policy Center. Previously, she was Associate General Counsel at Google, where she led work on web search and other products. You can read her Wall Street Journal op-ed on the case here and her Lawfare piece here.
It’s time for American industry’s Lazarus moment. At least, that’s what a growing coalition of contrarian builders, investors, technologists, and policymakers has asserted over the past several years. American might was built on our industrial base. As scholars like Arthur Herman detail in Freedom’s Forge, the United States won World War II with industrial acumen and might. We built the broadest middle class in the history of the world, put men on the moon, and midwifed the jet age, the Internet, semiconductors, green energy, revolutionary medical treatments, and more in less than a century. But the optimism that powered this growth is fading, and our public policy ecosystem has systematically deprioritized American industry in favor of quick returns and cheap goods from our strategic competitors.

Is there a way to restore our domestic industry? What does movement-building in this space look like? We’re joined by Austin Bishop, a partner at Tamarack Global, co-founder of Atomic Industries, and co-organizer of REINDUSTRIALIZE, and Jon Askonas, Senior Fellow at FAI and Professor of Politics at the Catholic University of America. You can follow Austin on X here and Jon here. Read more about REINDUSTRIALIZE and the New American Industrial Alliance here, and check out some of Jon’s research on technological stagnation for American Affairs here.
For this special-edition episode, FAI Senior Fellow Jon Askonas flew down to Palm Bay, FL, to mix and mingle with the brightest minds in aerospace, manufacturing, and defense at the Space Coast Hard Tech Hackathon, organized by stealth founder Spencer Macdonald (also an FAI advisor). Jon sits down with friend of the show and Hyperstition founder Andrew Côté for a wide-ranging conversation on the space tech revolution, the “vibe shift” toward open dialogue, AI’s role in shaping reality, and the challenges Silicon Valley faces in fostering new innovation. They critique the regulatory moats that hamper entrepreneurship and the risks safetyism poses to progress, and they explore the concept of “neural capitalism,” in which AI enhances decentralized decision-making. You can follow Jon at @jonaskonas and Andrew at @andercot. Andrew recently hosted Deep Tech Week in San Francisco, and he’s gearing up to host the next one in New York City.
Silicon Valley was once idolized for creating innovations that seemed like modern miracles. But the reputations of tech entrepreneurs have been trending downward of late, as Big Tech companies are blamed for any number of societal ills, from violating users’ privacy and eroding teenagers’ mental health to spreading misinformation and undermining democracy.

As the media and lawmakers focus on modern gripes with Big Tech, the origin stories of companies like Meta and Google feel like ancient history, if not forgotten altogether. Our guest today argues that these stories, filled with youthful ambitions and moral tradeoffs—even “original sins”—help explain how the companies came to be, amass profits, and wield power. The lessons they hold could chart a path toward more responsible innovation, especially as the gold rush for artificial intelligence heats up.

Evan is joined by Rob Lalka, Professor at Tulane University’s Freeman School of Business and Executive Director of the Albert Lepage Center for Entrepreneurship and Innovation. He is the author of a new book, The Venture Alchemists: How Big Tech Turned Profits Into Power. Previously, he served at the U.S. State Department.
A scrappy startup outmaneuvers a complacent incumbent: that is how many assume the tech economy is supposed to work. Big, established companies risk being disrupted as they get set in their ways; their internal bureaucracies grow too large, and they lose their nimbleness and take fewer risks. The pressure from upstarts forces larger firms to innovate; otherwise, they lose market share and may even fold. But is that how it works in practice?

An increasing share of policymakers believe Big Tech giants don’t face meaningful competition because their would-be competitors get bought, copied, or co-opted by essentially the same five companies: Google, Amazon, Apple, Meta, and Microsoft. While antitrust regulators have focused heavily on what they call “killer acquisitions,” such as then-Facebook buying Instagram, there has been less attention on what some experts call “co-opting disruption,” in which large firms seek to influence startups and steer them away from potentially disruptive innovations. So what does that look like in practice? And is this a fair characterization of how the tech market works?

Evan is joined by Adam Rogers, senior tech correspondent at Business Insider. Prior to that, he was a longtime editor and writer at Wired. You can read his article on co-opting disruption, “Big Tech’s Inside Job,” here. He is also the author of Full Spectrum: How the Science of Color Made Us Modern.
Tornado Cash is a decentralized cryptocurrency mixing service built on Ethereum. Its open-source protocol allows users to obscure the trail of their cryptocurrency transactions by pooling funds together, making it difficult to trace the origin and destination of any given transfer.

In August 2022, the U.S. Treasury Department took the unprecedented step of sanctioning Tornado Cash, effectively criminalizing its use by American citizens and businesses. Authorities accused the service of facilitating money laundering, including processing hundreds of millions of dollars in stolen funds linked to North Korean hackers. In the wake of the sanctions, Tornado Cash’s website was taken down, its GitHub repository was removed, and one of its developers was arrested in Amsterdam.

The crackdown has sent shockwaves through the crypto and privacy advocacy communities. Proponents argue that Tornado Cash is a neutral tool, akin to VPNs or Tor, with many legitimate uses beyond illicit finance. They warn that banning a piece of code sets a dangerous precedent and undermines fundamental rights to privacy and free speech. Regulators, on the other hand, contend that mixers like Tornado Cash have become a haven for cybercriminals and rogue state actors, necessitating more aggressive enforcement.

As the legal and political battle unfolds, Coin Center, a leading crypto policy think tank, has taken up the mantle of defending Tornado Cash and its users. Peter Van Valkenburgh, Coin Center’s Director of Research, who also serves as a board member for Zcash, joins The Dynamist to walk through the crackdown and its implications for decentralized finance and open-source software. Luke Hogg, Director of Policy and Outreach at FAI, guest hosts this episode. You can read more from Peter on this issue here.
Social media undermines democracy. Small businesses are more innovative than big ones. Corporate profits are at all-time highs. America’s secret weapon is laissez-faire capitalism. These are widely held beliefs, but are they true? Our guest today argues that these statements aren’t just wrong, but that they’re holding America back—discouraging talented people from entering the technology field and making companies too cautious and wary of regulators. Is America losing its faith in innovation? If so, what can companies and governments do to turn the tide? Has America’s “free market” really been as free as we think, and what can policymakers learn from Alexander Hamilton when it comes to industrial policy?

Evan is joined by Robert Atkinson, Founder and President of the Information Technology and Innovation Foundation, an independent, nonpartisan research and educational institute often recognized as the world’s leading think tank on science and tech policy. He is also co-author of Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy. Read his article on Hamiltonian industrial policy here.