On the season finale of Big Tech, host Taylor Owen discusses the future of tech governance with Azeem Azhar, author of The Exponential Age: How Accelerating Technology is Transforming Business, Politics, and Society. In addition to his writing, Azeem hosts the Exponential View podcast, which, much like this podcast, looks at how technology is transforming business and society.

Taylor and Azeem reflect on some of the broad themes that have concerned them this season, from platform governance, antitrust and competition, to polarization, deliberative democracy and Web3. As listeners have come to know, Taylor often views technology’s future through a cautionary lens, while Azeem has a more optimistic outlook. They begin with the recent news of Elon Musk’s attempt to purchase Twitter and what that might mean for the platform. As the episode unfolds, Taylor and Azeem touch on the varied approaches to tech regulation around the world, and how polarization and its amplification via social media are impacting democracy. They discuss Web3’s potential to foster more transparency and trust building on the internet, as well as the need for states to be involved in shaping our future online. Ultimately, there are opportunities to make positive changes at many levels of these complex, multilayered issues. As a concluding thought, Azeem points to the coal industry as an example of how, regardless of political winds, many factors in a system can bring about change.
All Eyes on Crypto

2022-04-14 · 37:59
In this episode of Big Tech, host Taylor Owen speaks with Ephrat Livni, a lawyer and journalist who reports from Washington on the intersection of business and policy for DealBook at The New York Times. One of Livni’s focuses has been how cryptocurrencies have moved from the periphery of the financial world into the mainstream. The cryptocurrency movement originated with a commitment to the decentralization of money and the removal of intermediaries and government to enable person-to-person financial transactions. Early on, governments viewed cryptocurrency as a tool for illicit criminal activity and a threat to institutional power. In the last two years, cryptocurrency has moved into the mainstream, with sporting arenas named after crypto companies and flashy celebrity endorsements and Super Bowl ads. Crypto markets are extremely volatile with great interest from retail investors and venture capitalists. There’s a lot of enthusiasm about crypto, but not a lot of information.

With crypto moving into the mainstream, companies that wish to create trust with their customers must be more transparent, accept regulations and act more like the institutions they initially sought to disrupt. As Livni and Owen discuss, this is not a sector that regulators can ignore: it is complicated, fast-changing, multinational, and demanding a great deal of thought about how best to proceed.
The internet is an ever-evolving thing, with new features and services popping up daily. But these innovations are happening in the current internet space, known as Web 2.0. The potential next big leap is to what is being called Web3 or Web 3.0. You have likely heard some of the terms associated with this next age — the token economy, blockchain, NFTs. Our guest this week walks us through what all this “future stuff” means, and how it could impact our daily lives.

In this episode of Big Tech, host Taylor Owen speaks with Shermin Voshmgir, founder of Token Kitchen and BlockchainHub Berlin and the author of Token Economy: How the Web3 reinvents the Internet. Her work focuses on making technology accessible to a non-tech audience to ensure everyone can be part of the decision-making process. Early adopters in the Web3 space see this new iteration of the Web as liberating, an innovation that will decentralize power, facilitate peer-to-peer transactions, enable individual data ownership and challenge the dominance of tech giants. There are many questions about the governance of Web3 and its impacts on society that regulators, still stuck on platform content moderation, are not yet looking at. The conversation between Taylor and Shermin provides a foundational understanding of Web3 and a look ahead at areas where regulators should focus their attention.
Humanity has long imagined a future where humans could live for hundreds of years, if not forever. But those ideas have been the stuff of science fiction, up until now. There’s growing interest and investment in the realm of biohacking and de-aging, and leading scientists such as Harvard’s David A. Sinclair are bringing the idea of extended lifespans out of fantasy into a reality we may see within our generation. But a world where more people are living a lot longer than ever thought possible will have sweeping economic and social consequences. In this episode of Big Tech, host Taylor Owen speaks with journalist Matthew D. LaPlante, co-author of Lifespan: Why We Age — And Why We Don’t Have To with David A. Sinclair. LaPlante’s focus is on the impacts longer lifespans will have, rather than on the technology involved in achieving de-aging. For example: When people live longer, where do we set the retirement age? Can the planet support more humans? And how will we deal with our past choices when we live long enough to see their impacts on our great-great-grandchildren?

In this wide-ranging conversation, Taylor and Matthew discuss more implications longer life would have on our society. In the justice system, appointing a 50-year-old to the Supreme Court looks very different when that person could live to 110 rather than 80. What about geopolitical stability, if autocrats and dictators can extend their lives to maintain power for much longer periods? And what are the implications for medical privacy when technology companies are using monitoring devices, such as the ubiquitous smart watch, in conjunction with artificial intelligence to predict when someone may develop an illness or have a heart attack?
A fundamental feature of the internet is its ability to transcend borders, connecting people to one another and all forms of information. The World Wide Web was heralded as a global village that would remove the traditional gatekeepers and allow anyone a platform to be heard. But the reality is that access to the internet and online services is very much bound to geography. A benign example is the location lockouts to online streaming platforms depending on which country you access. But more extreme examples of how location is inherently tied to internet access occur in authoritarian regimes that will limit access during uprisings, filter and block content, and surveil online conversations and then make real-world arrests. In this episode of Big Tech, host Taylor Owen speaks with Nanjala Nyabola, a CIGI fellow, political analyst and author of Digital Democracy, Analogue Politics: How the Internet Era is Transforming Politics in Kenya and Travelling While Black: Essays Inspired by a Life on the Move.

Governments have been working on platform governance and content moderation reforms for a few years now, and the need to find solutions and set rules becomes increasingly important – just look at how misinformation and censorship have been playing out in Russia and other authoritarian states over the last few weeks during the war in Ukraine. In Nyabola’s work on internet governance, she proposes that rather than look for global consensus on regulation, we need to think of the internet as a public good. “Water isn’t administered the same way in Kenya as it is in Uganda, as it is in Ethiopia, as it is in the United States; different municipalities will have different codes.
But there is a fundamental agreement that water is necessary for life and should, as far as possible, be administered as a public utility.” Nyabola explains that governing the internet requires first setting out the fundamental aspects of it that humanity wants to safeguard, and then protecting those common principles while allowing jurisdictions to deliver this public good in their own unique ways.
The speed at which the Russia-Ukraine war has played out across the internet has led to some interesting insights about how different groups have been experiencing and responding to information and misinformation about it. The West found unity across political divides, and the big tech platforms, breaking their long-held stance, have quickly acted to limit the spread of disinformation by making changes to their algorithms.

However, across much of the non-English-language internet, the information ecosystem is very different. Many Russians aren’t even aware that there is a war going on. And technology companies that are discontinuing their operations in Russia as a well-meaning sign of solidarity with Ukraine may be making the problem worse.

In this episode of Big Tech, host Taylor Owen speaks with Ben Scott and Frederike Kaltheuner about various aspects of communications technology and the social media platforms that are being used by all sides in the Russia-Ukraine war. We begin with a conversation between Taylor and Ben, the executive director of Reset, on the state of the information ecosystem both inside Russia and around the world. In the second half, Taylor speaks with Frederike, the director of the technology and rights division at Human Rights Watch, about the importance of access to information during wartime in the monitoring and documenting of human rights abuses, as well as the critical role that communications systems play in helping citizens inside conflict zones.
In this episode of Big Tech, host Taylor Owen speaks with Margaret O’Mara, a historian of modern America and author of The Code: Silicon Valley and the Remaking of America. Silicon Valley and the massive wealth it has generated have long symbolized the wonders of free market capitalism, viewed as proof of how innovation can thrive when it is not burdened by government oversight. Silicon Valley is infused with this libertarian ethos, centred on the idea that it was guys in their garages, setting out to create something new and make the world a better place, who built the Valley. But O’Mara looks back into history and says that’s all just a myth. During the Cold War, the United States was looking for ways to bolster its technological advantage over the Soviets. Knowing that state-led projects would appear “Communist” to the American people, the government funnelled federal funding for research and development through universities, research institutions and defence companies. This influx of funds enabled private companies to expand and innovate and universities to subsidize tuition. The Apollo space program offers one such example, where federal funds supported tech companies working in electronic miniaturization and semiconductors. The upshot is that the entire Silicon Valley tech sector was built on government intervention and support, and even the guys in their garages benefited from the access to affordable university education. “To pull yourself up by your bootstraps is an American myth that’s very corrosive — there are very, very few truly self-made people,” explains O’Mara. By demystifying Silicon Valley’s origins we can better approach regulation and oversight of the tech industry. 
Do you feel as if you can’t get through a single task without distractions? Perhaps you are watching a movie and stop it to check social media or respond to a message. You aren’t alone; studies show that collectively our attention spans have been shrinking for decades. Many factors contribute to our fractured focus, including the processed foods we eat, which cause energy highs and lows, but the greatest culprit of all is technology. In this episode of Big Tech, host Taylor Owen speaks with Johann Hari, the author of three New York Times bestsellers: Stolen Focus, Lost Connections and Chasing the Scream. Hari has been writing about depression, addiction and drugs for many years. Using that as background, Hari seeks to understand how social media has been changing our ability to deeply focus on important tasks. Hari argues that we must not think of this as a personal failing and charge the individual with finding a way out of this crisis, as we have done with obesity and drug addictions. Instead, society must change its relationship with technology so that we can regain our human ability to focus. Technology has increased the speed at which we work and live; as we try to consume so much information, we begin to focus less and less on the details. Hari compares it to speed reading: “It’s surprisingly effective, but it always comes with a cost, even for professional speed readers, which is the faster you read, the less you understand, the less you remember, and the more you’re drawn to shallow and simplistic documents.” Couple that with the way platforms prioritize certain types of content and you have a recipe for disaster. “Everyone has experienced it. Human beings will stare longer at something that makes them angry and upset than they will at something that makes them feel good,” says Hari. 
Hari worries that rather than taking collective action, society will put the onus on individuals, much as the response to obesity ignores the wider food supply network and instead sells fad diets and supplements to individuals. “And if you come to the attention crisis the same way [we responded] to the obesity crisis, we’ll get the same outcome, which is an absolute disaster.”
In the history of computers and the internet, a few names likely come to mind: Alan Turing, Tim Berners-Lee, Bill Gates and Steve Jobs. Undoubtedly, these men’s contributions to computer sciences have shaped much of our modern life. In the case of Jobs and Gates, their financial success shifted the landscape of software development and the metrics of success in Silicon Valley. Some sectors of the industry, such as programming, hypertext and databases, had been dominated by women in the early days, but once those areas became economic drivers, men flooded in, pushing aside the women. In the process, many of their contributions have been overlooked.

In this episode of Big Tech, host Taylor Owen speaks with Claire L. Evans, a musician, internet historian and author of Broad Band: The Untold Story of the Women Who Made the Internet. Evans’s book chronicles the work of women involved in creating the internet but left out of its history. Owen and Evans reflect on several important milestones of the early internet where women were innovating in community building and the moderation of message boards. Evans reveals a little-known history of the early web and the women involved. One aspect that stands out is how the projects that women led focused on building trust with users and the production of knowledge rather than the technical specifications of microprocessors or memory storage. Today, in the face of online harms, misinformation, failing institutional trust and content moderation challenges, there is a great deal we can learn from the work women were already doing decades ago in this space.
Nicholas Carr has been a prolific blogger, author and critic of technology since the early days of the social web. Carr began his blog Rough Type in 2005, at a time when some of today’s biggest companies were still start-ups operating out of college dorms. In 2010, he wrote The Shallows, a finalist for the Pulitzer Prize in General Nonfiction, in which he discussed how technology was changing the human brain. At the time, many were skeptical about Carr’s argument, but in just over a decade many of his predictions have come true. In this episode of Big Tech, host Taylor Owen and guest Nicholas Carr reflect on how he was able to identify these societal shifts long before others. The social web, known as Web 2.0, was billed as a democratizing tool for breaking down barriers so that anyone could share information and have their voices heard. Carr had concerns; while others saw college kids making toys, he saw the potential for major shifts in society. “As someone who had studied the history of media, I knew that when you get these kinds of big systems, particularly big communication systems, the unexpected, unanticipated consequences are often bigger than what everybody thinks is going to happen,” Carr explains. We are again on the verge of the next online shift, called Web3, and as new online technologies like non-fungible tokens, cryptocurrencies and the metaverse are being built, we can learn from Web 2.0 in hopes of mitigating future unanticipated consequences. As Carr sees it, we missed the opportunity to become involved early on with social platforms, before they became entrenched in our lives. “Twitter was seen as a place where people, you know, describe what they had for breakfast, and so society didn’t get involved in thinking about what are the long-term consequences here and how it’s going to play out.
So I think if we take a lesson from that, even if you’re skeptical about virtual reality and augmented reality, now is the time that society has to engage with these visions of the future.”
People are divided: you are either pro-vaccination or against it, and there seems to be no middle ground. Whether around the dinner table or on social media, people are entrenched in their positions. A deep-seated mistrust in science, despite its contributions to the flourishing of human life, is being fuelled by online misinformation. For the first time in history, humanity is in the midst of a pandemic with communication tools of almost unlimited reach and potential benefit, yet social media and the information economy appear structured to promote polarization. Take the case of The Joe Rogan Experience podcast on Spotify: Rogan, a comedian, is able to engage millions of listeners and spread, unchecked, misinformation about COVID-19 “cures” and “treatments” that have no basis in evidence. What responsibility does Spotify have as the platform enabling Rogan to spread this misinformation, and is it possible for the scientific community to break through to skeptics? In this episode of Big Tech, host Taylor Owen speaks with Timothy Caulfield, the author of bestselling books such as Is Gwyneth Paltrow Wrong About Everything? and The Vaccination Picture. He is also the Canada Research Chair in Health Law and Policy at the University of Alberta. Throughout the COVID-19 pandemic, Caulfield has been outspoken on Twitter about medical misinformation with the #ScienceUpFirst campaign. What we have learned through the pandemic is how critical it is to have clear public health communication, and that it is remarkably difficult to share information with the public. As everyone rushed to provide medical advice, people were looking for absolutes. But in science, one needs to remain open to new discoveries, so, as the pandemic evolved, guidelines were updated.
As Caulfield explains, “I think it’s also a recognition of how important it is to bring the public along on that sort of scientific ride, saying, Look, this is the best advice we can give right now based on the science available.” When health guidelines are presented in a dogmatic way, it becomes difficult to share new emerging research; misunderstood or outdated facts become weaponized by those trying to discredit the public health sector who point to what was previously known and attempt to muddy the discourse and sow doubt. And that doubt leads to mistrust in institutions, the rise of “alternative facts,” the sharing of untested therapeutics on popular podcasts — and a convoy of truckers camped out in the Canadian capital to protest COVID lockdown and vaccine mandates.
Time and time again, we see the billionaire tech founder or CEO take the stage to present the latest innovation meant to make people’s lives better, revolutionize industries and glorify the power of technology to save the world. While these promises are dressed up in fancy new clothes, in reality, the tech sector is no different than other expansionist enterprises from the past. Their core foundation of growth and expansion is deeply rooted in the European and American colonization and Manifest Destiny doctrines. And just as in the past, the tech sector is engaging in extraction, exploitation and expansion.

In this episode of Big Tech, host Taylor Owen speaks with Jeff Doctor, who is Cayuga from Six Nations of the Grand River Territory. He is an impact strategist for Animikii, an Indigenous-owned technology company.

Doctor isn’t surprised that technology is continuing to evolve in the same colonial way that he saw growing up, a way built into television shows, movies and video games, such as the popular Civilization franchise, which applies the same European expand-and-conquer strategy to winning the game regardless of the society a player represents in the game. “You see this manifested in the tech billionaire class, like all of them are literally trying to colonize space right now. It’s not even a joke any more. They grew up watching the same crap,” Doctor says.

Colonialism and technology have always been entwined. European expansionism depended on modern technology to dominate, whether it be through deadlier weapons, faster ships or the laying of telegraph and railway lines across the west. Colonization continues through, for example, English-only development tools, and country selection dropdown options limited to “Canada” or the “United States” that ignore Indigenous peoples’ communities and nations.
And, as governments grapple with how to protect people’s personal data from the tech sector, there is little attention paid to Indigenous data sovereignty, to ensure that every nation and community has the ability to govern and benefit from its own data.
Governments around the world are looking at their legal frameworks and how they apply to the digital technologies and platforms that have brought widespread disruptive change to their economies, societies and politics. Most governments are aware that their regulations are inadequate to address the challenges of an industry that crosses borders and pervades all aspects of daily life. Three regulatory approaches are emerging: the restrictive regime of the Chinese state; the lax, free-market approach of the United States; and the regulatory frameworks of the European Union, which are miles ahead of those of any other Western democratic country.

In this episode of Big Tech, host Taylor Owen speaks with Mark Scott, the chief technology correspondent at Politico, about the state of digital technology and platform regulations in Europe. Following the success of implementing the General Data Protection Regulation, which went into effect in 2018, the European Parliament currently has three big policy proposals in the works: the Digital Services Act, the Digital Markets Act and the Artificial Intelligence Act. Taylor and Mark discuss how each of these proposals will impact the tech sector and discuss their potential for adoption across Europe — and how many other nations, including Canada, are modelling similar regulations within their own countries.
Many unlocked mysteries remain about the workings of the human brain. Neuroscientists are making discoveries that are helping us to better understand the brain and correct preconceived notions about how it works. With the dawn of the information age, the brain’s processing was often compared to that of a computer. But the problem with this analogy is that it suggested the human brain was hard-wired, able to work in one particular way only, much as if it were a computer chip, and which, if damaged, could not reroute itself or restore function to a damaged pathway. Taylor Owen’s guest this week on the Big Tech podcast is a leading scholar of neuroplasticity, which is the ability of the brain to change its neural networks through growth and reorganization. Dr. Norman Doidge is a psychiatrist and author of The Brain That Changes Itself and The Brain’s Way of Healing. His work points to just how malleable the brain can be.

Dr. Doidge talks about the brain’s potential to heal but also warns of the darker side of neuroplasticity, which is that our brains adapt to negative influences just as they do to positive ones. Today, our time spent in front of a screen and how we interact with technology are having significant impacts on our brains, and those of our children, affecting attention span, memory and recall, and behaviour. And all of these changes have societal implications.
Democracy is in decline globally. It’s one year since the Capitol Hill insurrection, and many worry that the United States’ democratic system is continuing to crumble. Freedom House, an American think tank, says that nearly three-quarters of the world’s population lives in a country that experienced democratic deterioration last year. The rise of illiberalism is one reason for this, but another may be that democratic governments simply haven’t been performing all that well in recent years. In this episode of Big Tech, host Taylor Owen speaks with Hélène Landemore, author of Open Democracy and Debating Democracy and professor of political science at Yale University. Landemore’s work explores the limitations of casting a vote every few years for a candidate or political party and how in practice that isn’t a very democratic process. “Electoral democracy is a closed democracy where power is restricted to people who can win elections,” she says. Positions on issues become entrenched within party lines; powerful lobbyists exert influence; and representatives, looking ahead to the next election, lack political will to lead in the here and now. In an open democracy, citizens would be called on to debate issues and create policy solutions for problems. “If you include more people in the conversation, in the deliberation, you get the benefits of cognitive diversity, the difficulties of looking at problems and coming up with solutions, which benefits the group ultimately,” Landemore explains. In response to the yellow vest movement in France, the government asked 150 citizens to come up with climate policies. Over seven weekend meetings, that group came up with 149 proposals on how to reduce France’s greenhouse gas emissions. In Ireland, a group of citizens was tasked with deliberating the abortion topic, a sensitive issue that was deadlocked in the political arena.
The group included pro-life and pro-choice individuals and, rather than descending into partisan mud-slinging, was able to come to the recommendation, after much civil deliberation, that abortion be decriminalized. Landemore sees the French and Irish examples as precedents for further exploration and experimentation and that “it means potentially going through constitutional reforms to create a fourth or so chamber called the House of the People or something else, where it would be like a parliament but just made up of randomly selected citizens.” 
On the first anniversary of the January 6 insurrection at the United States Capitol, Big Tech host Taylor Owen sits down with Craig Silverman to discuss how the rise of false facts led us to that moment. Silverman is a journalist for ProPublica, previously worked at BuzzFeed News, and is the editor of the Verification Handbook series. Before Donald Trump popularized “fake news” as a blanket term to attack mainstream news outlets, Silverman had been using it to mean something different and very specific. Fake news, also known as misinformation, disinformation or false facts, is online content that has been intentionally created to be shared on social media platforms. Before it was weaponized as a tool for election interference, fake news was simply a lucrative clickbait market that saw higher engagement than traditional media. And social media platforms’ algorithms amplified it because that higher engagement meant people spent more time on the platforms and boosted their ad revenue. After establishing the origins of misinformation and how it was used to manipulate the 2016 US presidential election, Owen and Silverman discuss how Facebook, in particular, responded to the 2020 US presidential election. Starting in September 2020, the company established a civic integrity team focusing on, among other issues, its role in elections globally and removed posts, groups and users that were promoting misinformation. Silverman describes what happened next. “After the election, what does Facebook do? Well, it gets rid of the whole civic integrity team, including the groups task force. And so, as things get worse and worse leading up to January 6, nobody is on the job in a very focused way.” Before long, Facebook groups had “become an absolute hotbed and cesspool of delegitimization, death threats, all this kind of stuff,” explains Silverman. The lie that the election had been rigged was spreading unchecked via organized efforts on Facebook.
Within a few weeks of the civic integrity team’s dismantling, Trump’s supporters arrived on Capitol Hill to “stop the steal.” It was then, as Silverman puts it, “the real world consequences came home to roost.”
In this episode of Big Tech, Taylor Owen speaks with Nicole Perlroth, New York Times cybersecurity journalist and author of This Is How They Tell Me the World Ends: The Cyberweapons Arms Race.

Nicole and Taylor discuss how the way in which nation-states acquire cyber weapons through underground online markets creates an incentive structure that enables the entire cyberwarfare complex to thrive while discouraging these exploits from being patched. “So they don’t want to tell anyone about their zero-day exploits, or how they’re using them, because the minute they do, that $3 million investment they just made turns to mud,” Perlroth explains. As Perlroth investigated the world of cyberwarfare, she noticed how each offensive action was met with a response in kind; the United States is under constant attack. The challenge with countering cyber-based attacks is the many forms they can take and their many targets, from attacks on infrastructure such as the power grid, to corporate and academic espionage, such as stealing intellectual property or COVID-19 vaccine research, to ransomware. “The core thesis of your book,” Taylor reflects, “is for whatever gain the US government might get from using these vulnerabilities, the blowback is both unknowable and uncontainable.”

Early on, Perlroth was concerned about the infrastructure attacks, the ones that could lead to a nuclear power plant meltdown. However, the main focus of cyberattacks is on intelligence and surveillance of mobile phones and internet-connected devices. There is a tension between Silicon Valley’s efforts to encrypt and secure user data and law enforcement’s search for tools to break that encryption. Several jurisdictions are looking to force tech companies to build back doors into their products. Certainly, providing access to devices to aid in stopping terrorist attacks and human trafficking would be beneficial.
But back doors, like other vulnerabilities found in code, can be weaponized and used by authoritarian regimes to attack dissidents or ethnic minorities.

Cybersecurity is a multi-faceted issue that needs to be addressed at all levels, because the nature of cyberwarfare is that we can no longer protect just our physical borders. “We have no choice but to ask ourselves the hard questions about what is in our network and who’s securing it — and where is this code being built and maintained and tested, and are they investing enough in security?” says Perlroth.
In the early days of the internet, information technology could be viewed as morally neutral. It was simply a means of passing data from one point to another. But, as communications technology has advanced by using algorithms, tracking and identifiers to shape the flow of information, we are being presented with moral and ethical questions about how the internet is being used and even reshaping what it means to be human.

In this episode of Big Tech, Taylor Owen speaks with the Right Reverend Dr. Steven Croft, the Bishop of Oxford, Church of England. Bishop Steven, as he is known to his own podcast audience, is a board member of the Centre for Data Ethics and Innovation and has been part of other committees such as the House of Lords’ Select Committee on Artificial Intelligence.

Bishop Steven approaches the discussions around tech from a very different viewpoint, not as an academic or technologist but as a theologian in the Anglican church: “I think technology changes the way we relate to one another, and that relationship is at the heart of our humanity.” He compares what is happening now in society with the internet to the advent of the printing press in the fifteenth century, which democratized knowledge and changed the world in profound ways. The full impacts of this current technological shift in our society are yet to be known. But, he cautions, we must not lose sight of our core human principles when developing technology and ensure that we deploy it for “the common good of humankind.” “I don’t think morals and ethics can be manufactured out of nothing or rediscovered. And if we don’t have morality and ethics as the heart of the algorithms, when they’re being crafted, then the unfairness will be even greater than they otherwise have been.”
Social media has become an essential tool for sharing information and reaching audiences. In the political realm, it provides access to constituents in a way that going door to door can’t. It also offers a direct line to citizens without paying for advertising or relying on news coverage. We’ve seen how Donald Trump used social media to his advantage, but what happens when social media turns on the politician? In this episode of Big Tech, Taylor Owen speaks with Catherine McKenna, Canada’s minister of environment and climate change from 2015 to 2019. McKenna’s experience with online hate is not unique; many people and groups face online harassment and, in some cases, real-world actions against them. What makes McKenna’s case interesting is the convergence of online harassment on social media and the climate change file. In her role as minister, McKenna was responsible for implementing the federal government’s environmental policy, including the Paris Agreement commitments, carbon pricing and pipeline divestment. No matter what she said in her social posts, they were immediately met with negative comments from climate change deniers. Attacks against her escalated to the point where her constituency office was vandalized and a personal security detail was assigned to her. Finding solutions to climate change is complicated, cross-cutting work that involves many stakeholders and relies on dialogue and engagement with government, industry and citizens. McKenna found that the online expression of extremism, amplified by social media algorithms, made meaningful dialogue all but impossible. McKenna, no longer in politics, is concerned that the toxicity of the online social space will deter young people who may want to participate in finding climate solutions. “I’ve left public life not because of the haters, but because I just want to focus on climate change. But…I want more women to get into politics. I want broader diversity. Whether you’re Indigenous, part of the LGBTQ+ community, or a new immigrant, whatever it is, I want you to be there, but it needs to be safe.” Which raises the question: To find climate solutions, must we first address misinformation and online hate?
Humans need privacy — the United Nations long ago declared it an inalienable and universal human right. Yet technology is making privacy increasingly difficult to preserve, as we spend fewer and fewer moments disconnected from our computers, smartphones and wearable tech. Edward Snowden’s revelations about the scope of surveillance by the National Security Agency and journalists’ investigations into Cambridge Analytica showed us how the tech products and platforms we use daily make incursions on our privacy. But we continue to use these services and allow our personal data to be collected, sold and, essentially, used against us — through advertising, political targeting and, sometimes, even surveillance or censorship — because many feel that the benefits these services provide outweigh their costs to our privacy. This week’s guest, Carissa Véliz, believes that our current relationship with online privacy needs to change, and that there are ways to change it. Véliz is the author of Privacy Is Power and an associate professor in the Faculty of Philosophy at the University of Oxford. Véliz speaks with host Taylor Owen about how sharing private information is rarely an individual act. “Whenever I share my personal data, I’m generally sharing personal data about others as well. So, if I share my genetic data, I’m sharing data about my parents, about my siblings, about my cousins,” which, she explains, can lead to unintended consequences for others, such as being denied medical insurance or being deported. As she sees it, users have the power to demand better controls over their personal data, because it is so valuable to the big tech companies that collect, sell and use it for advertising. “The most valuable kind of data is the most recent one, because personal data expires pretty quickly. People change their tastes. They move houses. They lose weight or gain weight. And so companies always want the most updated data.” Véliz wants people to know that even if they believe their data is already out there on the internet, it’s not too late to improve their privacy practices or demand change from technology companies. “Because you’re creating new data all the time, you can make a really big difference by protecting your data as of today,” she says. The battle is not lost — there is always an opportunity to change the way our data is used. But Véliz warns that we must act now to establish those guardrails, because technology will continue to invade ever more of our private spaces if left unchecked.