Machines Like Us
Author: The Globe and Mail
Subscribed: 184 · Played: 3,353
© Copyright 2024 The Globe and Mail Inc. All rights reserved.
Description
Machines Like Us is a technology show about people.
We are living in an age of breakthroughs propelled by advances in artificial intelligence. Technologies that were once the realm of science fiction will become our reality: robot best friends, bespoke gene editing, brain implants that make us smarter.
Every other Tuesday Taylor Owen sits down with the people shaping this rapidly approaching future. He’ll speak with entrepreneurs building world-changing technologies, lawmakers trying to ensure they’re safe, and journalists and scholars working to understand how they’re transforming our lives.
93 Episodes
In this episode of Big Tech, host Taylor Owen speaks with Ephrat Livni, a lawyer and journalist who reports from Washington on the intersection of business and policy for DealBook at The New York Times. One of Livni’s focuses has been how cryptocurrencies have moved from the periphery of the financial world into the mainstream. The cryptocurrency movement originated with a commitment to the decentralization of money and the removal of intermediaries and government to enable person-to-person financial transactions. Early on, governments viewed cryptocurrency as a tool for illicit criminal activity and a threat to institutional power. In the last two years, cryptocurrency has moved into the mainstream, with sporting arenas named after crypto companies and flashy celebrity endorsements and Super Bowl ads. Crypto markets are extremely volatile, attracting great interest from retail investors and venture capitalists. There’s a lot of enthusiasm about crypto, but not a lot of information. With crypto moving into the mainstream, companies that wish to create trust with their customers must be more transparent, accept regulations and act more like the institutions they initially sought to disrupt. As Livni and Owen discuss, this is not a sector that regulators can ignore: it is complicated, fast-changing and multinational, and it demands a great deal of thought about how best to proceed.
The internet is an ever-evolving thing, with new features and services popping up daily. But these innovations are happening in the current internet space, known as Web 2.0. The potential next big leap is to what is being called Web3 or Web 3.0. You have likely heard some of the terms associated with this next age — the token economy, blockchain, NFTs. Our guest this week walks us through what all this “future stuff” means, and how it could impact our daily lives. In this episode of Big Tech, host Taylor Owen speaks with Shermin Voshmgir, founder of Token Kitchen and BlockchainHub Berlin and the author of Token Economy: How the Web3 reinvents the Internet. Her work focuses on making technology accessible to a non-tech audience to ensure everyone can be part of the decision-making process. Early adopters in the Web3 space see this new iteration of the Web as liberating, an innovation that will decentralize power, facilitate peer-to-peer transactions, enable individual data ownership and challenge the dominance of tech giants. There are many questions about the governance of Web3 and its impacts on society that regulators, still stuck on platform content moderation, are not yet looking at. The conversation between Taylor and Shermin provides a foundational understanding of Web3 and a look ahead at areas where regulators should focus their attention.
Humanity has long imagined a future where humans could live for hundreds of years, if not forever. But those ideas have been the stuff of science fiction, up until now. There’s growing interest and investment in the realm of biohacking and de-aging, and leading scientists such as Harvard’s David A. Sinclair are bringing the idea of extended lifespans out of fantasy into a reality we may see within our generation. But a world where more people are living a lot longer than ever thought possible will have sweeping economic and social consequences. In this episode of Big Tech, host Taylor Owen speaks with journalist Matthew D. LaPlante, co-author of Lifespan: Why We Age — And Why We Don’t Have To with David A. Sinclair. LaPlante’s focus is on the impacts longer lifespans will have, rather than on the technology involved in achieving de-aging. For example: When people live longer, where do we set the retirement age? Can the planet support more humans? And how will we deal with our past choices when we live long enough to see their impacts on our great-great-grandchildren? In this wide-ranging conversation, Taylor and Matthew discuss further implications longer lifespans would have for our society. In the justice system, appointing a 50-year-old to the Supreme Court looks very different when that person could live to 110 rather than 80. What about geopolitical stability, if autocrats and dictators can extend their lives to maintain power for much longer periods? And what are the implications for medical privacy when technology companies are using monitoring devices, such as the ubiquitous smart watch, in conjunction with artificial intelligence to predict when someone may develop an illness or have a heart attack?
A fundamental feature of the internet is its ability to transcend borders, connecting people to one another and all forms of information. The World Wide Web was heralded as a global village that would remove the traditional gatekeepers and allow anyone a platform to be heard. But the reality is that access to the internet and online services is very much bound to geography. A benign example is the location lockouts on online streaming platforms, where the available content depends on the country you access them from. But more extreme examples of how location is inherently tied to internet access occur in authoritarian regimes that will limit access during uprisings, filter and block content, and surveil online conversations and then make real-world arrests. In this episode of Big Tech, host Taylor Owen speaks with Nanjala Nyabola, a CIGI fellow, political analyst and author of Digital Democracy, Analogue Politics: How the Internet Era is Transforming Politics in Kenya and Travelling While Black: Essays Inspired by a Life on the Move. Governments have been working on platform governance and content moderation reforms for a few years now, and the need to find solutions and set rules becomes increasingly important – just look at how misinformation and censorship have been playing out in Russia and other authoritarian states over the last few weeks during the war in Ukraine. In her work on internet governance, Nyabola proposes that rather than look for global consensus on regulation, we need to think of the internet as a public good. “Water isn’t administered the same way in Kenya as it is in Uganda, as it is in Ethiopia, as it is in the United States; different municipalities will have different codes. But there is a fundamental agreement that water is necessary for life and should, as far as possible, be administered as a public utility.” Nyabola explains that governing the internet requires first setting out the fundamental aspects that humanity wants to safeguard and then protecting those common principles while allowing jurisdictions to deliver this public good in their own unique ways.
The speed at which the Russia-Ukraine war has played out across the internet has led to some interesting insights about how different groups have been experiencing and responding to information and misinformation about it. The West found unity across political divides, and the big tech platforms, breaking their long-held stance, quickly acted to limit the spread of disinformation by making changes to their algorithms. However, across much of the non-English-language internet, the information ecosystem is very different. Many Russians aren’t even aware that there is a war going on. And technology companies that are discontinuing their operations in Russia as a well-meaning sign of solidarity with Ukraine may be making the problem worse. In this episode of Big Tech, host Taylor Owen speaks with Ben Scott and Frederike Kaltheuner about various aspects of communications technology and the social media platforms that are being used by all sides in the Russia-Ukraine war. We begin with a conversation between Taylor and Ben, the executive director of Reset, on the state of the information ecosystem both inside Russia and around the world. In the second half, Taylor speaks with Frederike, the director of the technology and rights division at Human Rights Watch, about the importance of access to information during wartime in the monitoring and documenting of human rights abuses, as well as the critical role that communications systems play in helping citizens inside conflict zones.
In this episode of Big Tech, host Taylor Owen speaks with Margaret O’Mara, a historian of modern America and author of The Code: Silicon Valley and the Remaking of America. Silicon Valley and the massive wealth it has generated have long symbolized the wonders of free market capitalism, viewed as proof of how innovation can thrive when it is not burdened by government oversight. Silicon Valley is infused with this libertarian ethos, centred on the idea that it was guys in their garages, setting out to create something new and make the world a better place, who built the Valley. But O’Mara looks back into history and says that’s all just a myth. During the Cold War, the United States was looking for ways to bolster its technological advantage over the Soviets. Knowing that state-led projects would appear “Communist” to the American people, the government funnelled federal funding for research and development through universities, research institutions and defence companies. This influx of funds enabled private companies to expand and innovate and universities to subsidize tuition. The Apollo space program offers one such example, where federal funds supported tech companies working in electronic miniaturization and semiconductors. The upshot is that the entire Silicon Valley tech sector was built on government intervention and support, and even the guys in their garages benefited from the access to affordable university education. “To pull yourself up by your bootstraps is an American myth that’s very corrosive — there are very, very few truly self-made people,” explains O’Mara. By demystifying Silicon Valley’s origins we can better approach regulation and oversight of the tech industry.
Do you feel as if you can’t get through a single task without distractions? Perhaps you are watching a movie and stop it to check social media or respond to a message. You aren’t alone; studies show that collectively our attention spans have been shrinking for decades. Many factors contribute to our fractured focus, including the processed foods we eat, which cause energy highs and lows, but the greatest culprit of all is technology. In this episode of Big Tech, host Taylor Owen speaks with Johann Hari, the author of three New York Times bestsellers: Stolen Focus, Lost Connections and Chasing the Scream. Hari has been writing about depression, addiction and drugs for many years. Using that as background, Hari seeks to understand how social media has been changing our ability to deeply focus on important tasks. Hari argues that we must not think of this as a personal failing and charge the individual with finding a way out of this crisis, as we have done with obesity and drug addictions. Instead, society must change its relationship with technology so that we can regain our human ability to focus. Technology has increased the speed at which we work and live; as we try to consume so much information, we begin to focus less and less on the details. Hari compares it to speed reading: “It’s surprisingly effective, but it always comes with a cost, even for professional speed readers, which is the faster you read, the less you understand, the less you remember, and the more you’re drawn to shallow and simplistic documents.” Couple that with the way platforms prioritize certain types of content and you have a recipe for disaster. “Everyone has experienced it. Human beings will stare longer at something that makes them angry and upset than they will at something that makes them feel good,” says Hari. Hari worries that rather than take collective action, society will put the onus on individuals, much as it has with obesity, ignoring the wider food supply network and instead selling fad diets and supplements to individuals. “And if you come to the attention crisis the same way [we responded] to the obesity crisis, we’ll get the same outcome, which is an absolute disaster.”
In the history of computers and the internet, a few names likely come to mind: Alan Turing, Tim Berners-Lee, Bill Gates and Steve Jobs. Undoubtedly, these men’s contributions to computer science have shaped much of our modern life. In the case of Jobs and Gates, their financial success shifted the landscape of software development and the metrics of success in Silicon Valley. Some sectors of the industry, such as programming, hypertext and databases, had been dominated by women in the early days, but once those areas became economic drivers, men flooded in, pushing aside the women. In the process, many of their contributions have been overlooked. In this episode of Big Tech, host Taylor Owen speaks with Claire L. Evans, a musician, internet historian and author of Broad Band: The Untold Story of the Women Who Made the Internet. Evans’s book chronicles the work of women involved in creating the internet but left out of its history. Owen and Evans reflect on several important milestones of the early internet where women were innovating in community building and the moderation of message boards. Evans reveals a little-known history of the early web and the women involved. One aspect that stands out is how the projects that women led focused on building trust with users and the production of knowledge rather than the technical specifications of microprocessors or memory storage. Today, in the face of online harms, misinformation, failing institutional trust and content moderation challenges, there is a great deal we can learn from the work women were already doing decades ago in this space.
Nicholas Carr has been a prolific blogger, author and critic of technology since the early days of the social web. Carr began his blog Rough Type in 2005, at a time when some of today’s biggest companies were still start-ups operating out of college dorms. In 2010, he wrote The Shallows, a finalist for the Pulitzer Prize for Nonfiction, in which he discussed how technology was changing the human brain. At the time, many were skeptical about Carr’s argument, but in just over a decade many of his predictions have come true. In this episode of Big Tech, host Taylor Owen and guest Nicholas Carr reflect on how he was able to identify these societal shifts long before others. The social web, known as Web 2.0, was billed as a democratizing tool for breaking down barriers so that anyone could share information and have their voices heard. Carr had concerns; while others saw college kids making toys, he saw the potential for major shifts in society. “As someone who had studied the history of media, I knew that when you get these kinds of big systems, particularly big communication systems, the unexpected, unanticipated consequences are often bigger than what everybody thinks is going to happen,” Carr explains. We are again on the verge of the next online shift, called Web3, and as new online technologies like non-fungible tokens, cryptocurrencies and the metaverse are being built, we can learn from Web 2.0 in hopes of mitigating future unanticipated consequences. As Carr sees it, we missed the opportunity to become involved early on with social platforms, before they became entrenched in our lives. “Twitter was seen as a place where people, you know, describe what they had for breakfast, and so society didn’t get involved in thinking about what are the long-term consequences here and how it’s going to play out. So I think if we take a lesson from that, even if you’re skeptical about virtual reality and augmented reality, now is the time that society has to engage with these visions of the future.”
People are divided: you are either pro-vaccination or against it, and there seems to be no middle ground. Whether around the dinner table or on social media, people are entrenched in their positions. A deep-seated mistrust in science, despite its contributions to the flourishing of human life, is being fuelled by online misinformation. For the first time in history, humanity is in the midst of a pandemic with communication tools of almost unlimited reach and potential benefit, yet social media and the information economy appear structured to promote polarization. Take the case of The Joe Rogan Experience podcast on Spotify: Rogan, a comedian, is able to engage millions of listeners and spread, unchecked, misinformation about COVID-19 “cures” and “treatments” that have no basis in evidence. What responsibility does Spotify have as the platform enabling Rogan to spread this misinformation, and is it possible for the scientific community to break through to skeptics? In this episode of Big Tech, host Taylor Owen speaks with Timothy Caulfield, the author of bestselling books such as Is Gwyneth Paltrow Wrong About Everything? and The Vaccination Picture. He is also the Canada Research Chair in Health Law and Policy at the University of Alberta. Throughout the COVID-19 pandemic, Caulfield has been outspoken on Twitter about medical misinformation with the #ScienceUpFirst campaign. What we have learned through the pandemic is how critical it is to have clear public health communication, and that it is remarkably difficult to share information with the public. As everyone rushed to provide medical advice, people were looking for absolutes. But in science, one needs to remain open to new discoveries, so, as the pandemic evolved, guidelines were updated. As Caulfield explains, “I think it’s also a recognition of how important it is to bring the public along on that sort of scientific ride, saying, Look, this is the best advice we can give right now based on the science available.” When health guidelines are presented in a dogmatic way, it becomes difficult to share new emerging research; misunderstood or outdated facts become weaponized by those trying to discredit the public health sector, who point to what was previously known and attempt to muddy the discourse and sow doubt. And that doubt leads to mistrust in institutions, the rise of “alternative facts,” the sharing of untested therapeutics on popular podcasts — and a convoy of truckers camped out in the Canadian capital to protest COVID lockdowns and vaccine mandates.
Time and time again, we see the billionaire tech founder or CEO take the stage to present the latest innovation meant to make people’s lives better, revolutionize industries and glorify the power of technology to save the world. While these promises are dressed up in fancy new clothes, in reality, the tech sector is no different than other expansionist enterprises from the past. Its core foundation of growth and expansion is deeply rooted in the European and American colonization and Manifest Destiny doctrines. And just as in the past, the tech sector is engaging in extraction, exploitation and expansion. In this episode of Big Tech, host Taylor Owen speaks with Jeff Doctor, who is Cayuga from Six Nations of the Grand River Territory. He is an impact strategist for Animikii, an Indigenous-owned technology company. Doctor isn’t surprised that technology is continuing to evolve in the same colonial way that he saw growing up and was built into television shows, movies and video games, such as the popular Civilization franchise, which applies the same European expand-and-conquer strategy to winning, regardless of the society a player represents in the game. “You see this manifested in the tech billionaire class, like all of them are literally trying to colonize space right now. It’s not even a joke any more. They grew up watching the same crap,” Doctor says. Colonialism and technology have always been entwined. European expansionism depended on modern technology to dominate, whether it be through deadlier weapons, faster ships or the laying of telegraph and railway lines across the west. Colonization continues through, for example, English-only development tools, and country selection dropdown options limited to “Canada” or the “United States” that ignore Indigenous peoples’ communities and nations. And, as governments grapple with how to protect people’s personal data from the tech sector, there is little attention paid to Indigenous data sovereignty, to ensure that every nation and community has the ability to govern and benefit from its own data.
Governments around the world are looking at their legal frameworks and how they apply to the digital technologies and platforms that have brought widespread disruptive change to their economies, societies and politics. Most governments are aware that their regulations are inadequate to address the challenges of an industry that crosses borders and pervades all aspects of daily life. Three regulatory approaches are emerging: the restrictive regime of the Chinese state; the lax, free-market approach of the United States; and the regulatory frameworks of the European Union, which are miles ahead of those of any other Western democratic country. In this episode of Big Tech, host Taylor Owen speaks with Mark Scott, the chief technology correspondent at Politico, about the state of digital technology and platform regulations in Europe. Following the success of implementing the General Data Protection Regulation, which went into effect in 2018, the European Parliament currently has three big policy proposals in the works: the Digital Services Act, the Digital Markets Act and the Artificial Intelligence Act. Taylor and Mark discuss how each of these proposals will impact the tech sector and their potential for adoption across Europe — and how many other nations, including Canada, are modelling similar regulations within their own countries.
Many unsolved mysteries remain about the workings of the human brain. Neuroscientists are making discoveries that are helping us to better understand the brain and correct preconceived notions about how it works. With the dawn of the information age, the brain’s processing was often compared to that of a computer. But the problem with this analogy is that it suggested the human brain was hard-wired, able to work in one particular way only, much like a computer chip, and, if damaged, unable to reroute itself or restore function to a damaged pathway. Taylor Owen’s guest this week on the Big Tech podcast is a leading scholar of neuroplasticity, which is the ability of the brain to change its neural networks through growth and reorganization. Dr. Norman Doidge is a psychiatrist and author of The Brain That Changes Itself and The Brain’s Way of Healing. His work points to just how malleable the brain can be. Dr. Doidge talks about the brain’s potential to heal but also warns of the darker side of neuroplasticity, which is that our brains adapt to negative influences just as they do to positive ones. Today, our time spent in front of a screen and how we interact with technology are having significant impacts on our brains, and those of our children, affecting attention span, memory and recall, and behaviour. And all of these changes have societal implications.
Democracy is in decline globally. It’s one year since the Capitol Hill insurrection, and many worry that the United States’ democratic system is continuing to crumble. Freedom House, an American think tank, says that nearly three-quarters of the world’s population lives in a country that experienced democratic deterioration last year. The rise of illiberalism is one reason for this, but another may be that democratic governments simply haven’t been performing all that well in recent years. In this episode of Big Tech, host Taylor Owen speaks with Hélène Landemore, author of Open Democracy and Debating Democracy and professor of political science at Yale University. Landemore’s work explores the limitations of casting a vote every few years for a candidate or political party and how in practice that isn’t a very democratic process. “Electoral democracy is a closed democracy where power is restricted to people who can win elections,” she says. Positions on issues become entrenched within party lines; powerful lobbyists exert influence; and representatives, looking ahead to the next election, lack political will to lead in the here and now. In an open democracy, citizens would be called on to debate issues and create policy solutions for problems. “If you include more people in the conversation, in the deliberation, you get the benefits of cognitive diversity, the different ways of looking at problems and coming up with solutions, which benefits the group ultimately,” Landemore explains. In response to the yellow vest movement in France, the government asked 150 citizens to come up with climate policies. Over seven weekend meetings, that group came up with 149 proposals on how to reduce France’s greenhouse gas emissions. In Ireland, a group of citizens was tasked with deliberating the abortion topic, a sensitive issue that was deadlocked in the political arena. The group included pro-life and pro-choice individuals and, rather than descending into partisan mud-slinging, was able to come to the recommendation, after much civil deliberation, that abortion be decriminalized. Landemore sees the French and Irish examples as precedents for further exploration and experimentation, noting that “it means potentially going through constitutional reforms to create a fourth or so chamber called the House of the People or something else, where it would be like a parliament but just made up of randomly selected citizens.”
On the first anniversary of the January 6 insurrection at the United States Capitol, Big Tech host Taylor Owen sits down with Craig Silverman to discuss how the rise of false facts led us to that moment. Silverman is a journalist for ProPublica who previously worked at BuzzFeed News, and is the editor of the Verification Handbook series. Before Donald Trump popularized “fake news” as a blanket term to attack mainstream news outlets, Silverman had been using it to mean something different and very specific. Fake news, also known as misinformation, disinformation or false facts, is online content that has been intentionally created to be shared on social media platforms. Before it was weaponized as a tool for election interference, fake news was simply a lucrative clickbait market that saw higher engagement than traditional media. And social media platforms’ algorithms amplified it because that higher engagement meant people spent more time on the platforms and boosted their ad revenue. After establishing the origins of misinformation and how it was used to manipulate the 2016 US presidential election, Owen and Silverman discuss how Facebook, in particular, responded to the 2020 US presidential election. Starting in September 2020, the company established a civic integrity team focusing on, among other issues, its role in elections globally and removed posts, groups and users that were promoting misinformation. Silverman describes what happens next. “After the election, what does Facebook do? Well, it gets rid of the whole civic integrity team, including the Groups task force. And so, as things get worse and worse leading up to January 6, nobody is on the job in a very focused way.” Before long, Facebook groups had “become an absolute hotbed and cesspool of delegitimization, death threats, all this kind of stuff,” explains Silverman. The lie that the election had been rigged was spreading unchecked via organized efforts on Facebook. Within a few weeks of the civic integrity team’s dismantling, Trump’s supporters arrived on Capitol Hill to “stop the steal.” It was then, as Silverman puts it, that “the real world consequences came home to roost.”
In the early days of the internet, information technology could be viewed as morally neutral. It was simply a means of passing data from one point to another. But, as communications technology has advanced by using algorithms, tracking and identifiers to shape the flow of information, we are being presented with moral and ethical questions about how the internet is being used, and even about how it is reshaping what it means to be human. In this episode of Big Tech, Taylor Owen speaks with the Right Reverend Dr. Steven Croft, the Bishop of Oxford, Church of England. Bishop Steven, as he is known to his own podcast audience, is a board member of the Centre for Data Ethics and Innovation and has been part of other committees such as the House of Lords’ Select Committee on Artificial Intelligence. Bishop Steven approaches the discussions around tech from a very different viewpoint, not as an academic or technologist but as a theologian in the Anglican church: “I think technology changes the way we relate to one another, and that relationship is at the heart of our humanity.” He compares what is happening now in society with the internet to the advent of the printing press in the fifteenth century, which democratized knowledge and changed the world in profound ways. The full impacts of this current technological shift in our society are yet to be known. But, he cautions, we must not lose sight of our core human principles when developing technology and ensure that we deploy it for “the common good of humankind.” “I don’t think morals and ethics can be manufactured out of nothing or rediscovered. And if we don’t have morality and ethics as the heart of the algorithms, when they’re being crafted, then the unfairness will be even greater than they otherwise have been.”
Social media has become an essential tool for sharing information and reaching audiences. In the political realm, it provides access to constituents in a way that going door to door can’t. It also provides a platform for direct access to citizens without paying for advertising or relying on news articles. We’ve seen how Donald Trump used social media to his advantage, but what happens when social media turns on the politician? In this episode of Big Tech, Taylor Owen speaks with Catherine McKenna, Canada’s minister of environment and climate change from 2015 to 2019. McKenna’s experience with online hate is not unique; many people and groups face online harassment and, in some cases, real-world actions against them. What does make McKenna’s case interesting is the convergence of online harassment on social media and the climate change file. In her role as minister, McKenna was responsible for implementing the federal government’s environmental policy, including the Paris Agreement commitments, carbon pricing and pipeline divestment. No matter what she said in her social posts, they were immediately met with negative comments from climate change deniers. Attacks against her escalated to the point where her constituency office was vandalized and a personal security detail was assigned to her. Finding solutions to climate change is complicated, cross-cutting work that involves many stakeholders and relies on dialogue and engagement with government, industry and citizens. McKenna found that the online expression of extremism, amplified by social media algorithms, made meaningful dialogue all but impossible. McKenna, no longer in politics, is concerned that the online social space is having negative impacts on future youth who may want to participate in finding climate solutions. “I’ve left public life not because of the haters, but because I just want to focus on climate change. But…I want more women to get into politics. I want broader diversity. Whether you’re Indigenous, part of the LGBTQ+ community, or a new immigrant, whatever it is, I want you to be there, but it needs to be safe.” Which raises the question: To find climate solutions, must we first address misinformation and online hate?
Humans need privacy — the United Nations long ago declared it an inalienable and universal human right. Yet technology is making privacy increasingly difficult to preserve, as we spend fewer and fewer moments disconnected from our computers, smartphones and wearable tech. Edward Snowden’s revelations about the scope of surveillance by the National Security Agency and journalists’ investigations into Cambridge Analytica showed us how the tech products and platforms we use daily make incursions on our privacy. But we continue to use these services and allow our personal data to be collected and sold and, essentially, used against us — through advertising, political advertising and other forms of targeting, sometimes even surveillance or censorship — all because many feel that the benefits these services provide outweigh their negative impacts on our privacy. This week’s guest, Carissa Véliz, believes that our current relationship with online privacy needs to change, and there are ways to go about it. Véliz is the author of Privacy Is Power and associate professor in the Faculty of Philosophy at the University of Oxford. Véliz speaks with host Taylor Owen about how sharing personal information is rarely just an individual act. “Whenever I share my personal data, I’m generally sharing personal data about others as well. So, if I share my genetic data, I’m sharing data about my parents, about my siblings, about my cousins,” which, she explains, can lead to unintended consequences for others, such as being denied medical insurance or being deported. As she sees it, users have the power to demand better controls over their personal data, because it is so valuable to the big tech companies that collect, sell and use it for advertising. “The most valuable kind of data is the most recent one, because personal data expires pretty quickly. People change their tastes. They move houses. They lose weight or gain weight. And so companies always want the most updated data.” Véliz wants people to know that even if they believe their data is already out there on the internet, it’s not too late to improve their privacy practices or demand change from technology companies. “Because you’re creating new data all the time, you can make a really big difference by protecting your data as of today,” she says. The battle is not lost — there is always an opportunity to change the way our data is used. But Véliz warns that we must act now to establish those guardrails, because technology will continue to invade ever more of our private spaces if left unchecked.
Tech billionaire Peter Thiel is an enigmatic, controversial and hugely influential power broker in both Silicon Valley and the political arena. He is often seen as a libertarian, who at one point was exploring the idea of building floating stateless cities in international waters. But at the same time Thiel is very much an insider. He is actively involved in American politics, through funding political candidates, and in tech, through co-founding PayPal and Palantir, as well as supporting other venture capital projects, and is even funding an “anti-woke” university. In this episode of Big Tech, host Taylor Owen speaks with Max Chafkin, author of The Contrarian: Peter Thiel and Silicon Valley’s Pursuit of Power. Chafkin’s study of Thiel seeks to understand how he has built a dual persona as a heroic Ayn Randian libertarian entrepreneurial superhero, on the one hand, and a vampiric supervillain, on the other. What has confused many about Thiel is how he seems to play on both sides of the political divide. When Thiel spoke at the Republican National Convention in support of Donald Trump, many on the left couldn’t square the contradiction of how, in Chafkin’s words, “a futurist who happens to be gay, who happens to be an immigrant, who happens to have two Stanford degrees, you know, support this, like, reactionary, anti-tech, you know, crazy guy from New York?” By seeking to understand what one of the most influential men in both tech and politics is about, as well as his beliefs and goals, perhaps we can better understand how our societies are being reshaped. And perhaps that understanding will make us better prepared to counteract those shifts in ways that serve the best interests of society rather than those of the powerful few.
Science fiction has long been a medium for bringing light to societal issues, including religion, culture and race. It helps us imagine futures of hope and prosperity or warns of dystopian nightmares. And our experience of race plays a central role in our understanding of science fiction. “There’s this amazing quote from Junot Díaz, the Pulitzer Prize-winning writer, where he basically says that if it wasn’t for race, X-Men doesn’t make sense. If it wasn’t for the history of breeding human beings through chattel slavery, Dune doesn’t make sense. If it wasn’t for the history of colonialism and imperialism, Star Wars doesn’t make sense, right? If it wasn’t for the extermination of so many Indigenous First Nations, most of science fiction first-contact stories don’t make sense,” says C. Brandon Ogbunu. What Afrofuturism seeks to do is reimagine the future by putting the Black diaspora community at the centre. In this episode of Big Tech, host Taylor Owen speaks to C. Brandon Ogbunu, a computational biologist and technologist and assistant professor of ecology and evolutionary biology at Yale University. Taylor and Brandon discuss how Afrofuturism can bring about a more diverse, inclusive community of tech start-ups and tech platforms. Technology platforms are riddled with algorithmic bias, resulting from blind spots that exist within each development team. As many platforms span geographies, cultures and languages, it is increasingly important to be aware of the many potential harms that can result from the way the software is developed and deployed. Traditionally, Silicon Valley companies are managed by white males with a specific world view. “Anybody who is familiar with the technology and has ever been racially profiled would immediately see the problems there. I think the problem, the reason why this stuff was not a part of the conversation up front, is because the people designing the technology have never been affected by it,” says Ogbunu. He sees works of Afrofuturism, such as art, music and film, as vehicles to lift the sense of impossibility and constraint and to inspire Black and other under-represented communities to create and build new technologies and businesses.
Since the last season concluded this past August, a lot has happened in the big tech governance and regulation space. Whistle-blower Frances Haugen and the Facebook Papers shone light on social media’s harmful impacts on our society and reignited the debate over how we regulate platforms. There have been employee-led labour movements at Amazon, Uber and Netflix. And Mark Zuckerberg unveiled his vision for the metaverse and a new company name, Meta. Join host Taylor Owen in conversation with leading experts as they make sense of how technology is transforming our world.
Big Tech returns on Thursday, November 25, 2021. New episodes will be released weekly, every Thursday morning. Subscribe now on your favourite podcasting application.
This season of Big Tech has featured conversations with experts across many fields — lawmakers, academics, journalists, authors, activists and a bishop — who are working to address technology’s impact on our lives. In this episode of Big Tech, Taylor Owen looks back on those conversations and highlights six themes that have emerged across the season. Taylor begins with the topic of how the debate about tech and society is maturing, reflecting that we have moved past the superficial “tech issues” and entered into some very challenging, complicated questions around speech, encryption and anonymity. One reason why these issues are becoming more complex is that technology’s global reach often brings it into conflict with national laws. This is the second theme that emerged in conversations: the dynamics between, on the one hand, the layers of global interconnectedness and, on the other, a fractured system of regulations between the United States, Europe and China. Many guests of the show highlighted a third theme, the materiality of technology and its impact on our planet. Kate Crawford, for example, discussed how rare earth minerals and server farms are having a lasting impact on climate. In our rush to connect the world, we have created many vulnerabilities, both digital and physical. The fourth major theme this season was around surveillance — either by ad tech companies, policing or repressive regimes — and cyberattacks on our critical infrastructures. The fifth theme is recognizing that capitalism and the tech giants’ business models are at the root of many of the tech-related harms and risks users face today. Those financial incentive structures have created environments that prioritize users’ engagement with content above all, even if that content is false or misleading, harmful or inciting violence. Finally, our experts presented solutions and optimism in their conversations. While grappling with these problems can be a daunting task, the sixth theme this season is that there are paths forward, and many people committed to finding ways to make technology work for the betterment of society.
Liberal democracies around the world have protections for free speech, such as the Canadian Charter of Rights and Freedoms or, more famously, the US First Amendment. Many of the free speech activities that are protected by law, such as the right to organize and protest, have moved onto social media platforms. But, as we have seen, the power of social media to amplify content can have disastrous impacts. Nations looking to reform and enact online protection regulations to address issues of terrorism, human trafficking and hate speech, among others, are experiencing pushback from those who fear such rules will infringe on civil liberties. In this episode of Big Tech, Taylor Owen speaks with Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University. Previously, Jaffer served as deputy legal director at the American Civil Liberties Union, where he was involved in several landmark cases, including a challenge to the USA PATRIOT Act, a lawsuit against the National Security Agency and access to information requests on secret torture and drone programs.
In this episode of Big Tech, Taylor Owen speaks with Geoffrey Cain, author of The Perfect Police State, about the technology deployed in China’s Xinjiang region to oppress its Uighur population. Through a network of surveillance systems, social credit scores, algorithm-driven pre-crime computer software and a society where people are now fearful of their neighbours, China has built a chillingly real Orwellian police state. In their conversation, Taylor and Geoffrey discuss how these technologies are used to identify, detain and brainwash the Uighur people in what may be the first genocide in history driven by big data and artificial intelligence (AI). However, one of the most powerful aspects of this surveillance system has nothing to do with the advanced technology. “It’s a crude system that is designed to keep people on their toes, to, you know, turn them against each other. … If my good friend might be snitching on me, well, I’m going to snitch on him first, and hopefully he’ll be taken to a camp. And then my ranking with the government rises and maybe the AI and the computer systems won’t take me away,” Cain explains. With no transparency in the system — no one knows how it functions or decides who’s a threat, or what computer code might determine their fate — an entire population lives in constant uncertainty and fear.
Western democracies and tech companies have long painted the Chinese tech sector as not only a threat to the US sector but also as operating in direct conflict with American companies. They say that China is walled off from the rest of the world, that these tech companies are just an extension of the state, and that they create and promote state surveillance and censorship tools. While China isn’t completely innocent — there are clear examples of state interventions and human rights abuses — this episode’s guest argues that a Western-centric framing of how the sector operates isn’t quite accurate. In this episode of Big Tech, Taylor Owen speaks with Hong Shen, a systems scientist at the Human-Computer Interaction Institute at Carnegie Mellon University and author of Alibaba: Infrastructuring Global China. Shen’s research focuses on the global internet industry and the social implications of algorithmic systems, with an emphasis on China. Shen explains how China’s tech sector is not walled off from the rest of the world but instead highly integrated with it. International venture capital has been flowing into the Chinese tech sector for years. And the artificial intelligence (AI) development that is popularly depicted as an “us” vs “them” arms race is in reality better described as a production chain, with Chinese companies providing the labour to develop American AI systems.
The journalism industry in America has grown and adapted over its 300-year history. Different business models and ownership schemes have been tried, mostly in an attempt to establish an independent free press. Social media platforms have contributed to both the decline in revenue for news outlets and the echo-chamber effect that results when users are only consuming news that fits their political viewpoint. In this episode of Big Tech, Taylor Owen speaks with Victor Pickard, a professor of media policy and political economy at the Annenberg School for Communication and a co-director of the Media, Inequality and Change Center at the University of Pennsylvania. In his work, Pickard explores how the journalism industry could be transformed to meet the needs of society and support a functioning democracy. Pickard has studied the different stages in the American news industry’s history, which we explore in this episode, and concludes that there has never been a time when the industry was properly configured to support democracy. The current debates focus on restoring ad revenue sources that have been diverted to social media platforms but, as Pickard explains, small tweaks to the market will not solve the problem. “I think clearly we’re seeing something that is irredeemable, especially for providing local journalism. We don’t need to shore up these commercial models.” Instead, Pickard says, we need to shift away from large corporations that consolidate all news markets at a national level to new funding models that support local community-based journalism.
Tech platforms from around the world have turned their attention to India as a new area for user growth. These tech giants are keen to see mass adoption of their products and services by the one-billion-strong Indian market. At the same time, politicians in India have leveraged these platforms to entrench their own power. In late May, Prime Minister Narendra Modi’s government announced new “IT rules” that give authorities power to ask platforms and digital news media to trace chats, break encryption and block content. The companies must weigh the importance of the Indian market to their bottom line against their users’ right to privacy. In this episode of Big Tech, Taylor Owen speaks with Pranav Dixit, tech reporter for BuzzFeed News. Dixit has been reporting on tech-related news in India since 2016. An article he wrote for BuzzFeed News, “I Thought My Job Was To Report On Technology In India. Instead, I Got A Front-Row Seat To The Decline Of My Democracy,” explored the current platform governance landscape in India. Pranav and Taylor discuss how India’s approach to platform governance is similar to that of other Western-style democracies, and how the same rules that can be used to target hate speech can also be used to stifle political dissent.
Artificial intelligence (AI) is hailed as a great technological leap forward, one that will enable immense efficiencies and better decision making across countless sectors. But AI has also been met with serious questions about who is using it and how, about the biases baked in and the ethics surrounding new applications. And an ever-growing concern about AI is the environmental toll it takes on our planet. Do the benefits of AI innovations outweigh all these concerns? Is it even worth it to develop AI at all? In this episode of Big Tech, Taylor Owen speaks with Kate Crawford, research professor of communication and science and technology studies at USC Annenberg, a senior principal researcher at Microsoft Research Lab – NYC and author of the new book Atlas of AI. Crawford studies the social and political implications of AI. Crawford’s work gets to the core of AI, looking at it as an extractive industry. AI extracts data, but it also extracts labour, time and untold quantities of natural resources to build and power the vast banks of computers that keep it running. Crawford argues that much of the work in AI is not, in fact, built in some “algorithmic nowhere space” on pure data, objective and neutral, but instead grounded in the ghost work and human labour that trains these systems. The industry mythologizes the internal workings, “our deeply abstract mathematical and immaterial understanding of AI,” as a way to avoid scrutiny and oversight. As Crawford explains, rather than try to govern AI as a whole, we need to take a broader approach addressing the many extractive aspects of AI to effectively tackle this problem and its wider planetary costs.
In this episode of Big Tech, Taylor Owen speaks with Eliot Higgins, the founder of Bellingcat.com, an open-source intelligence and investigative journalism website. Higgins’s site uses publicly accessible online data to investigate and fact-check human rights abuses, war zone atrocities and other criminal activities. Bellingcat’s reporting on the downing of Malaysia Airlines Flight 17 over eastern Ukraine gained the site wide attention, including from the Kremlin, whose public statements on its role in the incident were being refuted by the site. The online open-source intelligence and investigation community operates adjacent to state-run intelligence gathering and journalism. As Higgins explains, Bellingcat can be an interface between the person on the ground and justice and journalism institutions: “The person on the ground who’s filming these things, you know, they film it because they want there to be accountability. It’s not just [about] raising awareness of it.” Bellingcat is now doing mock trials to show how these open-source investigative efforts could lead to justice for victims. Additionally, Higgins sees the work of online investigations as a countermovement to the increasing polarization of media and the distrust in institutions that are leading people into online conspiracy communities and fringe thinking. “What I think we need to do is start teaching people actually to, you know, use things like … open-source investigation in their own lives,” says Higgins. He sees this as an opportunity to teach the younger generation how to engage positively online: “Rather than them going off [and] just being mad about something in the world and then finding some community of people that are going to kind of draw them into conspiracy theories, they actually are able to find communities that can actually help them positively contribute to these issues.”
Americans’ trust in democratic institutions has been strained over the past few years. The stress fractures are expressed in everything from cries of “fake news” against the media, to outrage over violent and racist police actions, to suspicion about COVID-19 precautions and vaccines, to the siege on the US Capitol Building in the aftermath of the presidential election. When trust in institutions falls, the populace can be stirred into action. Depending on the issue and the group, this action can have negative or positive impacts on society. In this episode of Big Tech, Taylor Owen speaks with Ethan Zuckerman, professor of public policy, communications and information at the University of Massachusetts at Amherst, and author of the newly released Mistrust: Why Losing Faith in Institutions Provides the Tools to Transform Them. Zuckerman’s work looks at how mistrust in the United States has led to activism and institutional change, from the civil rights movement of the 1950s and 1960s, to Black Lives Matter, #MeToo and Occupy Wall Street. But he warns that mistrust can also be used in negative ways. “One thing mistrust is very, very good at doing is getting people to disengage. If you feel like you have no control over a political situation, if you feel like you have no way of making your voice heard, if everything is a fait accompli, you stop playing the game,” Zuckerman explains. “The other thing that mistrust seems to do, and this is fascinating, is that mistrust sort of reads as a choppy sea. It reads as a complicated scenario that no one knows how to find their way through. And then whoever it is who manages to thrive in that scenario, that person is destined to lead” — a tactic, Zuckerman says, that both Donald Trump and Vladimir Putin have exploited. Ultimately, Zuckerman says, “mistrust is a perfectly reasonable thing to have”: it is up to society to critically think about its institutions and determine which ones are worth protecting, which ones need reform, and which ones should be torn down altogether.
Global technology companies that power websites and services, like Amazon and Microsoft, and platforms, like Facebook and Google, have created spaces and tools that allow corporations, states and themselves to exert power in many sectors of our lives. In this episode of Big Tech, Taylor Owen speaks with Naomi Klein, author, social activist and filmmaker. Over her two decades of work, books such as No Logo: Taking Aim at the Brand Bullies and The Shock Doctrine: The Rise of Disaster Capitalism have, in many respects, foreshadowed the rise of big tech. For example, No Logo, published in 1999, long before the rise of Instagram and the influencer economy, examined the marketing trend of making citizens an extension of the corporations’ brand. Recently, Klein has been raising concerns about how tech companies are positioning themselves as essential services and capitalizing on this moment to enter new sectors, with little pushback. For example, Klein explains how the education sector rushed to platforms to enable virtual learning at the start of the pandemic: “I wish public universities had been better prepared with our own technologies.” The education space is one example of how the public space is being eroded. “If we are going to be using these platforms, we have to be serious about developing public sector, common-space alternatives to these private platforms. I don’t think we did do that. And so, they moved very quickly. Shock Doctrine–style.” While this situation may seem bleak, Klein does see a silver lining: “We probably would have ended up in this place in 10 years, and instead, we got there in a matter of months,” Klein says, in reference to the pandemic’s shelter-at-home and work-from-home orders around the world. “And we probably would have boiled slowly. And now I feel like we’re jumping around in this pot going, ‘This is terrible.’” This rapid shift has enabled us, as a society, to see more clearly how technology is impacting our lives, presenting an opportunity to course-correct.
In this episode of Big Tech, Taylor Owen speaks with Nicole Perlroth, New York Times cybersecurity journalist and author of This Is How They Tell Me the World Ends: The Cyberweapons Arms Race. Nicole and Taylor discuss how the way in which nation-states go about acquiring cyber weapons through underground online markets creates an incentive structure that enables the entire cyberwarfare complex to thrive while discouraging these exploits from being patched. “So they don’t want to tell anyone about their zero-day exploits, or how they’re using them, because the minute they do, that $3 million investment they just made turns to mud,” Perlroth explains. As Perlroth investigated the world of cyberwarfare, she noticed how each offensive action was met with a response in kind, leaving the United States under constant attack. The challenge with countering cyber-based attacks is the many forms they can take and their many targets, from attacks on infrastructure such as the power grid, to corporate and academic espionage, such as stealing intellectual property or COVID-19 vaccine research, to ransomware. “The core thesis of your book,” Taylor reflects, “is for whatever gain the US government might get from using these vulnerabilities, the blowback is both unknowable and uncontainable.” Early on, Perlroth was concerned about the infrastructure attacks, the ones that could lead to a nuclear power plant meltdown. However, the main focus of cyberattacks is on intelligence and surveillance of mobile phones and internet-connected devices. There is a tension between Silicon Valley’s efforts to encrypt and secure user data and law enforcement’s search for tools to break that encryption. Several jurisdictions are looking to force tech companies to build back doors into their products. Certainly, providing access to devices to aid in stopping terrorist attacks and human trafficking would be beneficial. But back doors, like other vulnerabilities found in code, can be weaponized and used by authoritarian regimes to attack dissidents or ethnic minorities. Cybersecurity is a multi-faceted issue that needs to be addressed at all levels, because the nature of cyberwarfare is that we can no longer protect just our physical borders. “We have no choice but to ask ourselves the hard questions about what is in our network and who’s securing it — and where is this code being built and maintained and tested, and are they investing enough in security?” says Perlroth.
In this episode of Big Tech, Taylor Owen speaks with Mutale Nkonde, founder of AI for the People (AFP). She shares her experiences of discrimination and bias working in journalism and at tech companies in Silicon Valley. Moving into government, academia and activism, Nkonde has been able to shed light on the ways in which biases baked into technology’s design disproportionately affect racialized communities. For instance, during the 2020 US presidential campaign, her communications team was able to detect and counter groups that were weaponizing misinformation to target Black voters in social media groups with the specific message not to vote. In her role with AFP, she works to produce content that empowers people to combat racial bias in tech. One example is the “ban the scan” advocacy campaign with Amnesty International, which seeks to ban the use of facial recognition technology by government agencies. In their conversation, Mutale and Taylor discuss the many ways in which technology reflects and amplifies bias. Many of the issues begin when software tools are designed by development teams that lack diversity or actively practise forms of institutional racism, excluding or discouraging decision-making participation by minority ethnic group members. Another problem is the data sets used in training the systems; as Nkonde explains, “Here in the United States, if you’re a white person, 70 percent of white people don’t actually know a Black person. So, if I were to ask one of those people to bring me a hundred pictures from their social media, it’s going to be a bunch of white people.” When algorithms that are built with this biased data make it into products — for use in, say, law enforcement, health care and financial services — they begin to have serious impacts on people’s lives, most severely when law enforcement misidentifies a suspect. Among the cases coming to light, “in New Jersey, Nijeer Parks was not only misidentified by a facial recognition system [and] arrested, but could prove that he was 30 miles away at the time,” Nkonde recounts. “But, because of poverty, [Parks] ended up spending 10 days in jail, because he couldn’t make bail. And that story really shows how facial recognition kind of reinforces other elements of racialized violence by kind of doubling up these systems.” That is why Nkonde is working to ban facial recognition technology from use, as well as fighting for other legislation in the United States that will go beyond protecting individual rights to improving core systems for the good of all.
The world watched as the Australian government passed a new law in February 2021 requiring Facebook and Google to pay news businesses for linking to their work. In the lead-up to its passing, Facebook followed through on its threat to remove news from its platform. But many viewed Facebook’s move as only reinforcing the government’s position that big tech had market dominance. In this episode of Big Tech, Taylor Owen speaks with Rod Sims, the chairman of the Australian Competition and Consumer Commission (ACCC). The ACCC conducted a market study to determine if there was a market failure in the journalism sector. It found that Facebook and Google were benefiting from the local news industry’s content and that these businesses were unable to seek appropriate compensation. The ACCC’s recommendation, the News Media and Digital Platforms Mandatory Bargaining Code, creates a code of conduct that Australian news businesses can use to bargain with Facebook and Google, using the negotiate-arbitrate model. Taylor and Rod discuss how the ACCC came to the decision that a negotiate-arbitrate model needed to be applied, how the new code will function, why journalism’s role in democratic society is more essential than ever, and what these issues mean for the average citizen and social media user.
In the early days of the internet, information technology could be viewed as morally neutral. It was simply a means of passing data from one point to another. But, as communications technology has advanced by using algorithms, tracking and identifiers to shape the flow of information, we are being presented with moral and ethical questions about how the internet is being used, and even about how it is reshaping what it means to be human. In this episode of Big Tech, Taylor Owen speaks with the Right Reverend Dr. Steven Croft, the Bishop of Oxford, Church of England. Bishop Steven, as he is known to his own podcast audience, is a board member of the Centre for Data Ethics and Innovation and has been part of other committees such as the House of Lords’ Select Committee on Artificial Intelligence. Bishop Steven approaches the discussions around tech from a very different viewpoint, not as an academic or technologist but as a theologian in the Anglican church: “I think technology changes the way we relate to one another, and that relationship is at the heart of our humanity.” He compares what is happening now in society with the internet to the advent of the printing press in the fifteenth century, which democratized knowledge and changed the world in profound ways. The full impacts of this current technological shift in our society are yet to be known. But, he cautions, we must not lose sight of our core human principles when developing technology and ensure that we deploy it for “the common good of humankind.” “I don’t think morals and ethics can be manufactured out of nothing or rediscovered. And if we don’t have morality and ethics as the heart of the algorithms, when they’re being crafted, then the unfairness will be even greater than they otherwise have been.”
Are retail investors and message boards rewriting financial markets? The GameStop shakeup over the past week demonstrated how the market could be manipulated by users of the subreddit group WallStreetBets and robo-investing apps like Robinhood. The activities that happened in cyberspace on social media and financial digital platforms made waves in the real-world financial markets. Our guest Lana Swartz calls this moment infrastructural inversion, “when suddenly something stops working, and you realize all the things that went into making it work in the first place.” Lana Swartz is an assistant professor of media studies at the University of Virginia and author of New Money: How Payment Became Social Media. Swartz has been researching the connection between social movements and money, such as Occupy Wall Street and cryptocurrencies. Fintech has been a democratizing tool in the finance sector just as the early internet was in bringing social change. But payment platforms are acting much like social media platforms as they seek to centralize power and establish moderation rules. Swartz explains that “all of the work that has been done over the last decade or so to understand platforms now has to be applied to platforms like Robinhood and to see if there is something different about money, and if so, what is it? … Do they owe us more of a fiduciary role [and] how can we understand the mechanisms by which they are, in fact, constrained?” Robinhood’s decision to prevent users from purchasing GameStop stock demonstrated its position of power and has led to questions over whether it had the legal ability to do so. While moderating content on social media, especially disinformation and election manipulation, is important, the decisions around moderating the flow of money on payment apps have become a new area of importance to regulators.
During his term, President Donald Trump has vilified the media, spread mis- and disinformation and built a loyal base of followers who believe his message. His refusal to accept the results of the 2020 election, his failed legal challenges and the Stop the Steal campaign further rallied his supporters. The tipping point came on January 6, 2021, just as lawmakers were meeting at the Capitol to certify the election. In this episode of Big Tech, Taylor Owen speaks with Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard University. Donovan studies social movements and their use of media and technology to spread their message. Social media platforms provide tools for individuals and groups to share information and organize, which has been valuable for societal movements such as Black Lives Matter and Standing Rock. But those same tools have been harnessed by bad actors aiming to incite destructive actions. In the case of the Make America Great Again movement, her team could see that those leading the Stop the Steal campaign were setting the stage for the Capitol attack. According to Donovan, “It’s about creating the conditions by which people feel [that] if they don’t do something, nothing will change.” As Donovan sees it, governments need to regulate and oversee the digital space because online activities have real-world effects. “I think everybody knows that people form communities online, and it’s in those bonds and in those spaces that people make decisions about, ‘Do I go to this protest or not?’ ‘Do I spend $400 to take a flight to DC to save Trump from the Democrats and Republicans?’ … because I don’t think people make much of a distinction between ‘this is my online account for Patriot2467150’ and ‘my life.’”
In this episode of Big Tech, Taylor Owen speaks with Baroness Beeban Kidron, OBE, a cross-bench member of the British House of Lords, a filmmaker and the chair of 5Rights Foundation. Kidron is a strong advocate for children’s online rights. She is a member of the UK government’s Democracy and Digital Technologies Committee working to implement the Age Appropriate Design Code, which explains how the General Data Protection Regulation (GDPR) applies in the context of children using digital services and sets 15 flexible standards to ensure that children’s best interests are the primary consideration when designing and developing these services. While the internet is still a relatively new technology, it has rapidly come to play a major role in children’s education, entertainment and play. There is now a generation of children who are growing up with online services that only launched a decade or so ago. Children are using platforms such as YouTube, Instagram and TikTok that were never designed for their age group, with unintended risks and consequences. “We must stop talking about the services that are directed at children, because children spend most of their time in services that are not directed at them. … When you have your discussions about facial recognition, about biometric data, about misinformation, disinformation, all of these things are affecting children. And yet every piece of policy is about YouTube Kids,” explains Kidron. She advocates for the whole of technology to consider younger users when designing and building digital services. But online platforms are failing to meet even minimum standards of protection, which is why Kidron is working on a regulatory approach. “History is littered with things that were not possible until they were mandated. And I am sufficiently old enough to remember when two-factor authentication was going to bring the internet down. And it didn’t. And I’m definitely old enough to remember when GDPR was going to bring the internet down. And I’m not saying all these things are perfect, they are desperately imperfect, but it is amazing how when things are mandated, they are also made possible.”
However you use telecommunications technology — and billions use it for everything from routine daily tasks and entertainment to seeking help, sharing confidential information or organizing civil actions — your communications are all running on decades-old network protocols with gaping vulnerabilities that can enable cybercrime and security breaches. High-risk individuals and organizations, in particular, are vulnerable, not only to surveillance but to targeted retaliation by autocratic states that use these security holes to abuse their power. But democratic countries have also exploited these weaknesses in, for example, law enforcement. In this episode of Big Tech, Taylor Owen speaks with Ronald J. Deibert, founder and director of the Citizen Lab at the Munk School of Global Affairs & Public Policy and the author of Reset: Reclaiming the Internet for Civil Society. Citizen Lab has worked for many years monitoring communication networks for state-run surveillance. Their 2018 report Hide and Seek: Tracking NSO Group’s Pegasus Spyware to Operations in 45 Countries uncovered how mobile phone spyware has been used to target individuals, including Saudi Arabian journalist Jamal Khashoggi. Deibert believes that we need to rethink how telecommunications equipment and protocols are built, to ensure privacy and security. Until we have these safeguards, malicious actors, whether states or private individuals, will continue to hack the vulnerabilities in the communications ecosystem, leaving citizens unsafe, and civil society to suffer.
Vaccine hesitancy and anti-vax sentiment have been around for as long as vaccines themselves have been available. Misinformation about vaccines has, for example, led to a decline in early childhood vaccination, resulting in the worldwide resurgence of the measles virus. In 2020, a vaccine appears to be the only viable path to ending the COVID-19 pandemic and returning to normal. But many distrust this new vaccine (or vaccines) and could refuse to get vaccinated. In this episode of Big Tech, Taylor Owen speaks with Heidi J. Larson, the author of Stuck: How Vaccine Rumors Start — and Why They Don’t Go Away and the director of the Vaccine Confidence Project. Larson’s work in vaccine hesitancy traces the root causes to a lack of trust in and anxieties about our institutions. That distrust is shared and amplified by the social media platforms we have available today. The COVID-19 vaccine is an opportunity to restore trust in governments, big pharma, scientists and the media, but only if it is handled correctly. Larson explains: “To me, it’s the ultimate litmus test at multiple trust levels. Because, as I say in the book, I don’t think we have a misinformation problem, as much as we have a relationship problem, and that the misinformation, in a sense, is kind of symptomatic of not trusting, in any of those domains.”
Where does the tech industry’s power lie? Are the companies “mind-control” platforms, as some have described them, capable of influencing everything from consumer choices to election results, or does their true threat to society lie in market concentration? In this episode of Big Tech, Taylor Owen speaks with Cory Doctorow, a science fiction author, activist and journalist. Doctorow’s latest book, How to Destroy Surveillance Capitalism, argues that big tech’s purported powers of manipulation and control, peddled to advertisers and based on an overcollection of our data, are essentially an illusion. “Being able to target cheerleaders with cheerleading uniform ads does not make you a marketing genius or a mind controller. It just makes you someone who’s found an effective way to address an audience, so that even though your ad may not be very persuasive, you’re not showing an unpersuasive ad to someone who will never buy a cheerleading uniform,” Doctorow explains. Doctorow’s view is that the threats to society that big tech presents are far less sinister than tech critics such as Shoshana Zuboff and Tristan Harris make them out to be. Rather, the big five’s monopolistic practices are the real issues to wrestle with.
Depending on who you ask, big tech is either going to save humanity or destroy us. Taylor Owen thinks it’s a little more complicated than that. Join him in conversation with leading thinkers as they make sense of a world transformed by technology.
In this episode of Big Tech, co-hosts David Skok and Taylor Owen discuss how our understanding of the impacts big tech has on society has shifted over the past year. Among these changes is the public’s greater awareness of the need for regulation in this sector. In their conversation, David and Taylor reflect upon some of the major events that have contributed to this shift. The COVID-19 pandemic highlighted the need for better mechanisms to stop the spread of misinformation, and it has shown that social media platforms are capable of quickly implementing some of those measures. However, the Facebook Oversight Board, which their guest Kate Klonick talked about in season 1, is not yet operational, and won’t be until after the US presidential election; even then, its powers will be limited to appeals rather than content oversight. In July 2020, the big tech CEOs testified in an antitrust hearing before the US Subcommittee on Antitrust, Commercial and Administrative Law. “That moment,” Taylor Owen says, “represented a real turning point in the governance agenda.” This growing big tech antitrust movement is showing that lawmakers, now better prepared and understanding the issues more clearly, are catching up to big tech. The public is starting to recognize the harms alongside the benefits of these companies’ unfettered growth. In season 2, Matt Stoller spoke with David and Taylor about monopoly power, and how these modern giants are starting to look like the railroad barons of old. From diverse perspectives, all the podcast’s guests have made the point that technology is a net good for society but that the positives do not erase the negatives — appreciating the many benefits that platforms and technology bring to our lives does not mean we can give them free rein. As Taylor explains, “When we found out the petrochemical industry was also polluting our environment, we didn’t just ban the petrochemical industry and ignore all the different potential positives that came out of it. We just said you can’t pollute any more.” With the technology sector embedded in all aspects of our democracies, economies and societies, it’s clear we can no longer ignore the need for regulation.
Biotechnology — the use of biological processes for industrial and other purposes, especially through genetic manipulation of micro-organisms — is a field experiencing massive growth worldwide. For many decades, advances in biology have been made in large academic or corporate institutions, with entry to the field restricted by knowledge and financial barriers. Now, through information sharing and new means of accessing lab space and equipment, a whole new community of amateur scientists are entering the molecular biology space. The emergence of this growing do-it-yourself “biohacker” community raises ethical questions of what work should be allowed to proceed. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Ellen Jorgensen, a molecular biologist and Chief Scientific Officer at Aanika Biosciences. She is an advocate for democratizing biotechnology by enabling more individuals to have access to lab space and equipment. Jurisdictions are taking different approaches to biotechnology, with some, such as the European Union and Africa, being more restrictive than others, such as China. What makes the fragmentation of governance surrounding genetic modification different from fragmentation in internet or tech governance is that biotechnology’s raw material is a global interconnected web of life. A biological modification can have unintended, even disastrous, impacts worldwide. For example, says Jorgensen, “What I’m concerned with is things like gene drives, which is a variety of CRISPR gene editing that [is] self-perpetuating. … So, 100 percent of the offspring, where one of the parents has this gene drive, all have the gene drive. So, it can spread through a population, particularly one with a short lifespan, like mosquitoes, within a very short period of time. And here, for the first time, we have the ability to potentially wipe out a species.” As Jorgensen points out, with such high stakes, we have an “inherent motivation to regulate.” Working together on a global set of standards, and setting aside their own ethical or moral understandings to find a solution that works for everyone, will present a challenge for nations.
Online advertising and social media platforms have had a major impact on economies and societies around the globe. Those impacts are happening in retail, with the shift in spending from brick and mortar to online; in advertising, where revenues have moved from print and broadcast to online social platforms; and in society more broadly, through algorithmically amplified extremism and hate speech. The big tech companies at the centre of these shifts have little incentive to change the nature of their operations. It now falls to nations around the globe to find ways to regulate big tech in the face of what many view as a market failure. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak to Damian Collins, a British member of Parliament and former chair of the House of Commons Digital, Culture, Media and Sport (DCMS) select committee. As chair of DCMS, Collins led the investigation into Cambridge Analytica’s role in the Brexit referendum. He was also involved in the creation of the International Grand Committee on Disinformation and “Fake News.” Collins doesn’t blame the tech giants for their inaction, but rather sees the problem as governance policies that have lagged behind. There is a need for policy to catch up and ensure citizens are protected, just as it has in other complex global markets, such as the financial industry. International cooperation and information sharing enable nations to take on the large global tech companies together without each needing to start from scratch.
Journalism has had a storied history with the internet. Early on, the internet was a niche market, something for traditional publishers to experiment with as another medium for sharing news. As it gained popularity as a news source, newsrooms began to change as well, adapting their business models to the digital age. Newspapers had historically generated revenue through a mix of subscriptions, advertising and classifieds. But internet platforms Craigslist and Kijiji soon took over classifieds. Google Ads presented advertisers with more refined marketing tools than the newspapers could offer. And Facebook and Twitter made it possible for readers to consume news for free without visiting a newspaper’s website. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak to Emily Bell, the director of the Tow Center for Digital Journalism at Columbia University’s Graduate School of Journalism. Newsrooms are left with few options to make money now. Unless you are a large outlet with a sizable online subscriber base like The New York Times, your capacity for local reporting will be hampered by the economic need to focus on stories that have the broadest reach. Many media conglomerates have cut back their local reporting, creating news deserts across large regions. Not having local reporters on the case is having negative impacts on democracy, too. As Bell explains, “Where there is no local press…local officials tend to stay in office for longer. They tend to pay themselves more.” Smaller local news outlets that can build a relationship with their readers can see success if their readers are able to pay the subscription fees. But it is often poorer communities, where people can’t afford local news subscriptions, that most need the services of good local journalism. Bell sees an opportunity to rethink the way news is funded: first, by looking to communities to decide what level of reporting they require, and second, by resourcing it accordingly.
Is it possible to access the internet without interacting with the big five American tech companies? Technically, yes, but large swaths of the web would be inaccessible to consumers without the products and platforms created by Apple, Amazon, Facebook, Microsoft and Alphabet, Google’s parent company. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Matt Stoller, the director of research at the American Economic Liberties Project and the author of Goliath: The 100-Year War Between Monopoly Power and Democracy. Stoller looks at how the political landscape has changed from the time of the railroad tycoons to the modern Silicon Valley tech monopolies. Each of these companies has established itself in a market-leading position in different ways. On mobile, Apple’s App Store is the only way for software developers to reach iPhone customers. Google controls search, maps and online advertising. Amazon’s website is the dominant online retail platform. “In some ways, it’s a little bit like saying, well, you know, that railroad that goes through this one narrow valley that you have to take to get to market, well, that’s not a monopoly, because there are other railroads in the country,” Stoller says. “Well, yeah, maybe there are, but it doesn’t matter if you need that particular railroad to get where you’re going…. that’s what Amazon is like in a lot of the sectors that it deals with.” Finally, underpinning much of the internet are Microsoft Azure and Amazon Web Services cloud data centres. Is corporate power a political problem or a market problem? Skok, Owen and Stoller discuss topics ranging from the robber barons of the 1930s and the antitrust reforms that followed, to the current environment, one that evolved over several political generations to become, as Stoller describes it, a crisis of concentration separated from “caretaking,” in which profits can amass through domination rather than through better products or services.
Social media platforms have assumed the role of news distribution sources, but have largely rejected the affiliated gatekeeper role of fact-checking the content they allow on their sites. This abdication has led to the rise of fake news, disinformation and propaganda. In this episode of Big Tech, co-hosts David Skok and Taylor Owen spoke with journalist and Rappler founder Maria Ressa just days before the verdict in a high-profile cyber-libel case against her, her colleague Reynaldo Santos, Jr., and Rappler Inc. as a whole. On Monday, June 15, the Manila Regional Trial Court Branch 46 ruled that Ressa and Santos, Jr. were liable, but that Rappler as a company was not. This case is viewed in the larger context as an attack on journalistic freedoms protected under the Philippine Constitution. Ressa has repeatedly come under fire from the Duterte government for calling out what she sees as its illiberal leanings and propaganda. Facebook was a key component of President Rodrigo Duterte’s election in 2016. Ressa explained, “On Facebook, a lie told a million times becomes a fact.” The disinformation that spreads on social media platforms is having real-world impacts on how citizens view democratic institutions. “If you debate the facts, you can’t have integrity of markets. You can’t have integrity of elections. … This is democracy’s death by a thousand cuts,” said Ressa.
Efforts to contain the COVID-19 pandemic have essentially shut down the economy. Now, as regions look to reopen, the focus is shifting to minimizing further infection by monitoring the virus’s spread and notifying people who may have been exposed to it. Public health authorities around the globe have deployed or are developing contact-tracing or contact-notification mobile apps. Apple and Google have partnered to develop a Bluetooth-based contact-notification application programming interface, now available as an update for their mobile operating systems. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Carly Kind, director of the Ada Lovelace Institute, an organization studying issues around data and artificial intelligence. In April, the institute published a report on digital contact tracing called Exit through the App Store? There are concerns about this technology’s rapid and expansive rollout. First, around data collection: some jurisdictions are developing their apps to store data centrally rather than at device level. Next, technical considerations: for example, Bluetooth-based apps only register proximity and don’t account for other metrics, such as whether the contact happened outdoors, where infection risk is lower. Further, concerns about Apple and Google’s tech-focused solution, which infringes on the public health space: “From a power and control perspective, you can’t help but feel somewhat afraid that two companies control almost every device in every hand in the world and are able to wield that power in ways that contradict, right or wrong, the desires of national governments and public health authorities,” Kind cautions. Finally, there are concerns about how health-tracking apps, and our access to them, could impact our freedom to move about: we need to think about the ways these apps could marginalize individuals who don’t have the technology to prove their health status.