The Road to Accountable AI

Author: Kevin Werbach


Description

Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world today.
27 Episodes
In this episode, Kevin speaks with the influential tech thinker Tim O’Reilly, founder and CEO of O’Reilly Media and popularizer of terms such as open source and Web 2.0. O'Reilly, who co-leads the AI Disclosures Project at the Social Science Research Council, offers an insightful and historically informed take on AI governance. Tim and Kevin first explore the evolution of AI, tracing its roots from early computing innovations like ENIAC to its current transformative role. Tim notes the centralization of AI development, the critical role of data access, and the costs of creating advanced models. The conversation then delves into AI ethics and safety, covering issues like fairness, transparency, bias, and the need for robust regulatory frameworks. They also examine the potential for distributed AI systems, cooperative models, and industry-specific applications that leverage specialized datasets. Finally, Tim and Kevin highlight the opportunities and risks inherent in AI's rapid growth, urging collaboration, accountability, and innovative thinking to shape a sustainable and equitable future for the technology. Tim O’Reilly is the founder, CEO, and Chairman of O’Reilly Media, which delivers online learning, publishes books, and runs conferences about cutting-edge technology, and has a history of convening conversations that reshape the computer industry. Tim is also a partner at early-stage venture firm O’Reilly AlphaTech Ventures (OATV), and serves on the boards of Code for America, PeerJ, Civis Analytics, and PopVox. He is the author of many technical books published by O’Reilly Media, most recently WTF? What’s the Future and Why It’s Up to Us (Harper Business, 2017).  SSRC, AI Disclosures Project Asimov's Addendum Substack The First Step to Proper AI Regulation Is to Make Companies Fully Disclose the Risks
Join Professor Werbach in his conversation with Alice Xiang, Global Head of AI Ethics at Sony and Lead Research Scientist at Sony AI. With both a research and corporate background, Alice provides an inside look at how her team integrates AI ethics across Sony's diverse business units. She explains how the evolving landscape of AI ethics is both a challenge and an opportunity for organizations to reposition themselves as the world embraces AI. Alice discusses fairness, bias, and incorporating these ethical ideas in practical business environments. She emphasizes the importance of collaboration, transparency, and diversity in embedding a culture of accountable AI at Sony, showing other organizations how they can do the same.  Alice Xiang manages the team responsible for conducting AI ethics assessments across Sony's business units and implementing Sony's AI Ethics Guidelines. She also recently served as a General Chair for the ACM Conference on Fairness, Accountability, and Transparency (FAccT), the premier multidisciplinary research conference on these topics. Alice previously served on the leadership team of the Partnership on AI. She was a Visiting Scholar at Tsinghua University’s Yau Mathematical Sciences Center, where she taught a course on Algorithmic Fairness, Causal Inference, and the Law. Her work has been quoted in a variety of high-profile journals and published in top machine learning conferences, journals, and law reviews.  Sony AI Flagship Project Augmented Datasheets for Speech Datasets and Ethical Decision-Making by Alice Xiang and Others
Kevin Werbach speaks with Krishna Gade, founder and CEO of Fiddler AI, on the state of explainability for AI models. One of the big challenges of contemporary AI is understanding just why a system generated a certain output. Fiddler is one of the startups offering tools that help developers and deployers of AI understand what exactly is going on.  In the conversation, Kevin and Krishna explore the importance of explainability in building trust with consumers, companies, and developers, and then dive into the mechanics of Fiddler's approach to the problem. The conversation covers current and potential regulations that mandate or incentivize explainability, and the prospects for AI explainability standards as AI models grow in complexity. Krishna distinguishes explainability from the broader process of observability, including the necessity of maintaining model accuracy through different times and contexts. Finally, Kevin and Krishna discuss the need for proactive AI model monitoring to mitigate business risks and engage stakeholders.  Krishna Gade is the founder and CEO of Fiddler AI, an AI Observability startup, which focuses on monitoring, explainability, fairness, and governance for predictive and generative models. An entrepreneur and engineering leader with strong technical experience in creating scalable platforms and delightful products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. At Facebook, Krishna led the News Feed Ranking Platform that created the infrastructure for ranking content in News Feed and powered use cases like Facebook Stories and user recommendations.   Fiddler.ai How Explainable AI Keeps Decision-Making Algorithms Understandable, Efficient, and Trustworthy - Krishna Gade x Intelligent Automation Radio
This week, Professor Werbach is joined by USC Law School professor Angela Zhang, an expert on China's approach to the technology sector. China is both one of the world's largest markets and home to some of the world's leading tech firms, as well as an active ecosystem of AI developers. Yet its relationship to the United States has become increasingly tense. Many in the West see a battle between the US and China to dominate AI, with significant geopolitical implications. In the episode, Zhang discusses China’s rapidly evolving tech and AI landscape, and the impact of government policies on its development. She dives into what the Chinese government does and doesn’t do in terms of AI regulation, and compares Chinese practices to those in the West. Kevin and Angela consider the implications of US export controls on AI-related technologies, along with the potential for cooperation between the US and China in AI governance. Finally, they look toward the future of Chinese AI, including its progress and potential challenges.  Angela Huyue Zhang is a Professor of Law at the Gould School of Law of the University of Southern California. She is the author of Chinese Antitrust Exceptionalism: How the Rise of China Challenges Global Regulation, which was named one of the Best Political Economy Books of the Year by ProMarket in 2021. Her second book, High Wire: How China Regulates Big Tech and Governs Its Economy, released in March 2024, has been covered in The New York Times, Bloomberg, Wire China, MIT Tech Review and many other international news outlets.    High Wire: How China Regulates Big Tech and Governs Its Economy  The Promise and Perils of China's Regulation of Artificial Intelligence Angela Zhang’s Website   Want to learn more? ​​Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program.
It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.  
Professor Werbach speaks with Shea Brown, founder of AI auditing firm BABL AI. Brown discusses how his work as an astrophysicist led him to machine learning, and then to the challenge of evaluating AI systems. He explains the skills needed for effective AI auditing and what makes a robust AI audit. Kevin and Shea talk about the growing landscape of AI auditing services and the strategic role of specialized firms like BABL AI. They examine the evolving standards and regulations surrounding AI auditing, from local laws to US government initiatives to the European Union's AI Act. Finally, Kevin and Shea discuss the future of AI auditing, emphasizing the importance of independence.  Shea Brown, the founder and CEO of BABL AI, is a researcher, speaker, and consultant in AI ethics, and former associate professor of instruction in Astrophysics at the University of Iowa. Founded in 2018, BABL AI has audited and certified AI systems, consulted on responsible AI best practices, and offered online education on related topics. BABL AI’s overall mission is to ensure that all algorithms are developed, deployed, and governed in ways that prioritize human flourishing. Shea is a founding member of the International Association of Algorithmic Auditors (IAAA). BABL.ai International Association of Algorithmic Auditors NYC Local Law 144: Automated Employment Decision Tools (AEDT)
This week, Professor Werbach is joined by Kevin Bankston, Senior Advisor on AI Governance for the Center for Democracy & Technology, to discuss the benefits and risks of open weight frontier AI models. They discuss the meaning of open foundation models, how they relate to open source software, how such models could accelerate technological advancement, and the debate over their risks and need for restrictions. Bankston discusses the National Telecommunications and Information Administration's recent recommendations on open weight models, and CDT's response to the request for comments. Bankston also shares insights based on his prior work as AI Policy Director at Meta, and discusses national security concerns around China's ability to exploit open AI models.  Kevin Bankston is Senior Advisor on AI Governance for the Center for Democracy & Technology, supporting CDT’s AI Governance Lab. In addition to a prior term as Director of CDT’s Free Expression Project, he has worked on internet privacy and related policy issues at the American Civil Liberties Union, Electronic Frontier Foundation, the Open Technology Institute, and Meta Platforms. He was named by Washingtonian magazine as one of DC’s 100 top tech leaders of 2017. Kevin serves as an adjunct professor at the Georgetown University Law Center, where he teaches on the emerging law and policy around generative AI.  CDT Comments to NTIA on Open Foundation Models by Kevin Bankston  CDT Submits Comment on AISI's Draft Guidance, "Managing Misuse Risk for Dual-Use Foundation Models"
In this episode, Professor Kevin Werbach sits down with Lara Abrash, Chair of Deloitte US. Lara and Kevin discuss the complexities of integrating generative AI systems into companies and aligning stakeholders in making AI trustworthy. They discuss how to address bias, and the ways Deloitte promotes trust throughout its organization. Lara explains the role and technological expertise of boards, company risk management, and the global regulatory environment. Finally, Lara discusses the ways in which Deloitte handles both its people and the services they provide.  Lara Abrash is the Chair of Deloitte US, leading the Board of Directors in governing all aspects of the US Firm. Overseeing over 170,000 employees, Lara is a member of Deloitte’s Global Board of Directors and Chair of the Deloitte Foundation. Lara stepped into this role after serving as the chief executive officer of the Deloitte US Audit & Assurance business. Lara frequently speaks on topics focused on advancing the profession, including modern leadership traits; diversity, equity, and inclusion; the future of work; and tech disruption. She is a member of the American Institute of Certified Public Accountants and received her MBA from Baruch College.  Deloitte’s Trustworthy AI Framework Deloitte’s 2024 Ethical Technology Report
Professor Werbach speaks with Adam Thierer, senior fellow for Technology and Innovation at R Street Institute. Adam and Kevin highlight developments in AI regulation on the state, federal, and international scale, and discuss both the benefits and dangers of regulatory engagement in the area. They consider the notion of AI as a “field-of-fields,” and the value of a sectoral approach to regulation, looking back to the development of regulatory approaches for the internet. Adam discusses what types of AI regulations can best balance accountability with innovation, protecting smaller AI developers and startups.  Adam Thierer specializes in entrepreneurialism, Internet, and free-speech issues, with a focus on emerging technologies. He is a senior fellow for the Technology & Innovation team at R Street Institute, a leading public policy think tank, and previously spent 12 years as a senior fellow at the Mercatus Center at George Mason University. Adam has also worked for the Progress and Freedom Foundation, the Adam Smith Institute, the Heritage Foundation and the Cato Institute. Adam has published 10 books on a wide range of topics, including online child safety, internet governance, intellectual property, telecommunications policy, media regulation and federalism. Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom
In this episode, Kevin Werbach is joined by Reggie Townsend, VP of Data Ethics at SAS, a business analytics software platform. Together they discuss SAS’s nearly 50-year history of supporting business technology and its recent implementation of responsible AI initiatives. Reggie introduces model cards and the importance of variety in AI systems across diverse stakeholders and sectors. Reggie and Kevin explore the increase in both consumer trust and purchases when consumers feel a brand is ethical in its use of AI, and the importance of trustworthy AI in employee retention and recruitment. Their discussion approaches the idea of bias in an untraditional way, highlighting the positive humanistic nature of bias and learning to manage the negative implications. Finally, Reggie shares his insights on fostering ethical AI practices through literacy and open dialogue, stressing the importance of authentic commitment and collaboration among developers, deployers, and regulators. SAS adds to its trustworthy AI offerings with model cards and AI governance services Article by Reggie Townsend: Talking AI in Washington, DC Reggie Townsend oversees the Data Ethics Practice (DEP) at SAS Institute. He leads the global effort for consistency and coordination of strategies that empower employees and customers to deploy data-driven systems that promote human well-being, agency and equity. He has over 20 years of experience in strategic planning, management, and consulting focusing on topics such as advanced analytics, cloud computing and artificial intelligence. With visibility across multiple industries and sectors where the use of AI is growing, he combines this extensive business and technology expertise with a passion for equity and human empowerment.
Join Professor Kevin Werbach in his discussion with Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology. In this episode, Werbach and Toner discuss how the public views AI safety and ethics, and both the positive and negative outcomes of advancements in AI. They discuss Toner’s lessons from the unsuccessful removal of Sam Altman as the CEO of OpenAI, oversight structures to audit and approve the AI systems companies deploy, and the role of the government in AI accountability. Finally, Toner explains how businesses can take charge of their responsible AI deployment.   Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI. From 2021-2023, she served on the board of OpenAI, the creator of ChatGPT.  Helen Toner’s TED Talk: How to Govern AI, Even if it’s Hard to Predict Helen Toner on the OpenAI Coup “It was about trust and accountability” (Financial Times)
Join Kevin and Nuala as they discuss Walmart's approach to AI governance, emphasizing the application of existing corporate principles to new technologies. She explains the Walmart Responsible AI Pledge, its collaborative creation process, and the importance of continuous monitoring to ensure AI tools align with corporate values. Nuala describes her commitment to responsible AI and customer centricity at Walmart, reflected in the mantra “Inform, Educate, Entertain” and in tools like "Ask Sam," which aids associates. They address the complexities of AI implementation, including bias, accuracy, and trust, and the challenges of standardizing AI frameworks. Kevin and Nuala conclude with reflections on the need for humility and agility in the evolving AI landscape, emphasizing the ongoing responsibility of technology providers to ensure positive impacts. Nuala O’Connor is the SVP and chief counsel, digital citizenship, at Walmart. Nuala leads the company’s Digital Citizenship organization, which advances the ethical use of data and responsible use of technology. Before joining Walmart, Nuala served as president and CEO of the Center for Democracy and Technology. In the private sector, Nuala has served in a variety of privacy leadership and legal counsel roles at Amazon, GE and DoubleClick. In the public sector, Nuala served as the first chief privacy officer at the U.S. Department of Homeland Security. She also served as deputy director of the Office of Policy and Strategic Planning, and later as chief counsel for technology at the U.S. Department of Commerce. Nuala holds a B.A. from Princeton University, an M.Ed. from Harvard University and a J.D. from Georgetown University Law Center.    Nuala O'Connor to Join Walmart in New Digital Citizenship Role Walmart launches its own voice assistant, ‘Ask Sam,’ initially for employee use Our Responsible AI Pledge: Setting the Bar for Ethical AI
Join Kevin and Suresh as they discuss the latest tools and frameworks that companies can use to effectively combat algorithmic bias, all while navigating the complexities of integrating AI into organizational strategies. Suresh describes his experiences at the White House Office of Science and Technology Policy and the creation of the Blueprint for an AI Bill of Rights, including its five fundamental principles—safety and effectiveness, non-discrimination, data minimization, transparency, and accountability. Suresh and Kevin dig into the economic and logistical challenges that academics face in government roles and highlight the importance of collaborative efforts alongside clear rules to follow in fostering ethical AI. The discussion highlights the importance of education, cultural shifts, and the role of the European Union's AI Act in shaping global regulatory frameworks. Suresh discusses his creation of Brown University's Center on Technological Responsibility, Reimagination, and Redesign, and why trust and accountability are paramount, especially with the rise of Large Language Models.   Suresh Venkatasubramanian is a Professor of Data Science and Computer Science at Brown University. Suresh's background is in algorithms and computational geometry, as well as data mining and machine learning. His current research interests lie in algorithmic fairness, and more generally the impact of automated decision-making systems in society. Prior to Brown University, Suresh was at the University of Utah, where he received a CAREER award from the NSF for his work in the geometry of probability. He has received a test-of-time award at ICDE 2017 for his work in privacy. His research on algorithmic fairness has received press coverage across North America and Europe, including NPR’s Science Friday, NBC, and CNN, as well as in other media outlets. 
For the 2021–2022 academic year, he served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy.    Blueprint for an AI Bill of Rights Brown University's Center on Technological Responsibility, Reimagination, and Redesign Brown professor Suresh Venkatasubramanian tackles societal impact of computer science at White House
Kevin Werbach speaks with Diya Wynn, the responsible AI lead at Amazon Web Services (AWS). Diya shares how she pioneered a formal practice for ethical AI at AWS, and explains AWS’s “Well-Architected” framework to assist customers in responsibly deploying AI. Kevin and Diya also discuss the significance of diversity and human bias in AI systems, revealing the necessity of incorporating diverse perspectives to create more equitable AI outcomes.  Diya Wynn leads a team at AWS that helps customers implement responsible AI practices. She has over 25 years of experience as a technologist scaling products for acquisition; driving inclusion, diversity & equity initiatives; and leading operational transformation. She serves on the AWS Health Equity Initiative Review Committee; mentors at Tulane University, Spelman College, and GMI; was a mayoral appointee in Environment Affairs for six years; and guest lectures regularly on responsible and inclusive technology. Responsible AI for the greater good: insights from AWS’s Diya Wynn  Ethics In AI: A Conversation With Diya Wynn, AWS Responsible AI Lead
Kevin Werbach is joined by Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, to discuss the pioneering efforts of her team in building a culture of ethical technology use. Paula shares insights on aligning risk assessments and technical mitigations with business goals to bring stakeholders on board. She explains how AI governance functions in a large business with enterprise customers, who have distinctive needs and approaches. Finally, she highlights the shift from "human in the loop" to "human at the helm" as AI technology advances, stressing that today's investments in trustworthy AI are essential for managing tomorrow’s more advanced systems. Paula Goldman leads Salesforce in creating a framework to build and deploy ethical technology that optimizes social benefit. Prior to Salesforce, she served as Global Lead of the Tech and Society Solutions Lab at Omidyar Network, and has extensive entrepreneurial experience managing frontier market businesses. Creating safeguards for the ethical use of technology Trusted AI Needs a Human at the Helm Responsible Use of Technology: The Salesforce Case Study
Kevin Werbach speaks with Navrina Singh of Credo AI, which automates AI oversight and regulatory compliance. Singh addresses the increasing importance of trust and governance in the AI space. She discusses the need to standardize and scale oversight mechanisms by helping companies align and translate their systems to include all stakeholders and comply with emerging global standards. Kevin and Navrina also explore the importance of sociotechnical approaches to AI governance, the necessity of mandated AI disclosures, the democratization of generative AI, adaptive policymaking, and the need for enhanced AI literacy within organizations to keep pace with evolving technologies and regulatory landscapes. Navrina Singh is the Founder and CEO of Credo AI, a Governance SaaS platform empowering enterprises to deliver responsible AI. Navrina previously held multiple product and business leadership roles at Microsoft and Qualcomm. She is a member of the U.S. Department of Commerce National Artificial Intelligence Advisory Committee (NAIAC), an executive board member of Mozilla Foundation, and a Young Global Leader of the World Economic Forum.  Credo.ai ISO/IEC 42001 standard for AI governance Navrina Singh Founded Credo AI To Align AI With Human Values
Kevin Werbach speaks with Scott Zoldi of FICO, which pioneered consumer credit scoring in the 1950s and now offers a suite of analytics and fraud detection tools. Zoldi explains the importance of transparency and interpretability in AI models, emphasizing a “simpler is better” approach to creating clear and understandable algorithms. He discusses FICO's approach to responsible AI, which includes establishing model governance standards, and enforcing these standards through the use of blockchain technology. Zoldi explains how blockchain provides an immutable record of the model development process, enhancing accountability and trust. He also highlights the challenges organizations face in implementing responsible AI practices, particularly in light of upcoming AI regulations, and stresses the need for organizations to catch up in defining governance standards to ensure trustworthy and accountable AI models. Dr. Scott Zoldi is Chief Analytics Officer of FICO, responsible for analytics and AI innovation across FICO's portfolio. He has authored more than 130 patents, and is a long-time advocate and inventor in the space of responsible AI. He was nominated for American Banker’s 2024 Innovator Award and received Corinium’s Future Thinking Award in 2022. Zoldi is a member of the Board of Advisors for FinReg Lab, and serves on the Boards of Directors of Software San Diego and San Diego Cyber Center of Excellence. He received his Ph.D. in theoretical and computational physics from Duke University.   Navigating the Wild AI with Dr. Scott Zoldi   How to Use Blockchain to Build Responsible AI   The State of Responsible AI in Financial Services
Professor Kevin Werbach and AI ethicist Olivia Gambelin discuss the moral responsibilities surrounding Artificial Intelligence, and the practical steps companies should take to address them. Olivia explains how companies can begin their responsible AI journey, starting with taking inventory of their systems and using Olivia's Values Canvas to map the ethical terrain. Kevin and Olivia delve into the potential reasons companies avoid investing in ethical AI, the financial and compliance benefits of making the investment, and best practices of companies who succeed in AI governance. Olivia also discusses her initiative to build a network of responsible AI practitioners and promote development of the field. Olivia Gambelin is founder and CEO of Ethical Intelligence, an advisory firm specializing in Ethics-as-a-Service for businesses ranging from Fortune 500 companies to Series A startups. Her book, Responsible AI, offers a comprehensive guide to integrating ethical practices for AI deployment. She serves on the Founding Editorial Board for Springer Nature’s AI and Ethics Journal, co-chairs the IEEE AI Expert Network Criteria Committee, and advises the Ethical AI Governance Group and The Data Tank. She is deeply involved in both the Silicon Valley startup ecosystem and advising on AI policy and regulation in Europe.  Olivia Gambelin’s Website Responsible AI: Implement an Ethical Approach in Your Organization The EI (Ethical Intelligence) Network  The Values Canvas
Join Professor Kevin Werbach and Beth Noveck, New Jersey's first Chief AI Strategist, as they explore AI's transformative power in public governance. Beth reveals how AI is revolutionizing government operations, from rewriting complex unemployment insurance letters in plain English to analyzing call data for faster responses. They discuss New Jersey's innovative use of generative AI to cut response times in half, empowering public servants to better serve their communities while balancing ethical considerations and privacy concerns. Learn about New Jersey's training programs, sandboxes, and pilot projects designed to integrate AI safely into public service. Beth also shares inspiring global examples, like Taiwan's citizen-engaged decision-making processes and Iceland's Better Reykjavik initiative, which inform local projects like New Jersey's mycareernj.gov career coaching tool.  Beth Simone Noveck directs the Governance Lab (GovLab) at New York University's Tandon School of Engineering. As the inaugural U.S. Deputy Chief Technology Officer and leader of the White House Open Government Initiative under President Obama, she crafted innovative strategies to enhance governmental transparency, cooperation, and public engagement. Noveck authored "Wiki Government," a seminal work advocating for the use of digital tools to revolutionize civic interaction. Her roles have included Chief Innovation Officer for New Jersey and Senior Advisor for the Open Government Initiative, earning her wide acclaim and numerous accolades for her contributions to the field. Noveck's work emphasizes the transformative potential of technology in fostering more open, transparent, and participatory governance structures. Open Government Initiative  The GovLab Wiki Government Beth Noveck TED Talk: Demand a more open-source government  
Join Professor Kevin Werbach and Jean-Enno Charton, Director of Digital Ethics and Bioethics at Merck KGAA, as they explore the ethical challenges of AI in healthcare and life sciences. Charton delves into the evolution of Merck's AI ethics program, which stemmed from their bioethics advisory panel addressing complex ethical dilemmas in areas like fertility research and clinical trials. He details the formation of a dedicated digital ethics panel, incorporating industry experts and academics, and developing the Principle at Risk Analysis (PARA) tool to identify and mitigate ethical risks in AI applications. Highlighting the significance of trust, transparency, and pragmatic solutions, Charton discusses how these principles are applied across Merck's diverse business units. Listen in to thoroughly examine the intersection between bioethics, trust, and AI. Jean-Enno Charton is the Chief Data and AI Officer at Merck KGAA, a global pharmaceutical and life sciences company. He chairs the Digital Ethics Advisory Panel, focusing on ethical data use and AI applications within the company. Charton led the development of Merck's Code of Digital Ethics, guiding ethical principles such as autonomy, justice, and transparency in digital initiatives. A recognized speaker on digital ethics, his work contributes to responsible data-driven technology deployment in the healthcare and life sciences sector. Merck Code of Digital Ethics IEEE Ethically Aligned Design Principle-at-Risk Analysis  
Join Professor Kevin Werbach and Dominique Shelton Leipzig, an expert in data privacy and technology law, as they share practical insights on AI's transformative potential and regulatory challenges in this episode of The Road to Accountable AI. They dissect the ripple effects of recent legislation, and why setting industry standards and codifying trust in AI are more than mere legal checkboxes—they're the bedrock of innovation and integrity in business. Transitioning from theory to practice, this episode uncovers what it truly means to govern AI systems that are accurate, safe, and respectful of privacy. Kevin and Dominique navigate through the high-risk scenarios outlined by the EU and discuss how companies can future-proof their brands by adopting AI governance strategies.  Dominique Shelton Leipzig is a partner and head of the Ad Tech Privacy & Data Management team and the Global Data Innovation team at the law firm Mayer Brown. She is the author of the recent book Trust: Responsible AI, Innovation, Privacy and Data Leadership. Dominique co-founded NxtWork, a non-profit aimed at diversifying leadership in corporate America, and has trained over 50,000 professionals in data privacy, AI, and data leadership. She has been named a "Legal Visionary" by the Los Angeles Times, a "Top Cyber Lawyer" by the Daily Journal, and a "Leading Lawyer" by Legal 500.  Trust: Responsible AI, Innovation, Privacy and Data Leadership Mayer Brown Digital Trust Summit A Framework for Assessing AI Risk Dominique’s Data Privacy Recommendation Enacted in Biden’s EO