Navigating the Ethical Landscape of Generative AI with Michelle Yi

Update: 2024-03-12

Description

The emergence and popularity of generative AI applications have raised numerous ethical concerns that can be broken down into a number of categories:

  • Lack of explainability
  • Incorporation of human bias
  • Inherent English and European bias in the training data
  • Lack and impact of regulation, now and in the future
  • Fear of apocalyptic outcomes
  • Lack of accountability by developers
  • Impact on the environment
  • Effect on labor


In this interview on navigating the ethical concerns of generative AI, Michelle Yi, a technology leader and board member at Women in Data, explores the causes, effects, and possible solutions for several of these concerns. In particular, she focuses on the impact of the lack of explainability, legal and regulatory challenges, and how we can move forward with this new technology responsibly.


Sponsored by: https://odsc.com/ 

Find more ODSC lightning interviews, webinars, live trainings, certifications, and bootcamps here – https://aiplus.training/


Topics:

  1. Introductions
  2. The importance of diverse voices in the development and governance of generative AI 
  3. Examples of how a lack of diversity in GenAI development has led to ethical problems  
  4. Current limitations of the tools and techniques used to detect and mitigate bias in generative AI models
  5. Using synthetic data in generative AI models
  6. Hallucination in large language models
  7. Implementing explainability in generative AI without compromising performance or efficiency
  8. Favorite tools for GenAI bias detection, mitigation, explainability, and transparency
  9. Striking a balance between encouraging GenAI innovation and mitigating potential risks
  10. Watermarking generative AI models and the Nightshade project for protecting copyright
  11. Security vulnerabilities in generative AI and jailbreaking
  12. Motivating practitioners to prioritize safety, transparency, and fairness in GenAI 
  13. Effectively communicating generative AI safety measures versus model limitations 
  14. Programs that encourage underrepresented groups' participation in STEM education and AI careers
  15. Does generative AI contribute to or hinder the goal of democratizing AI?
  16. The role of women in leadership positions shaping the ethical use of generative AI
  17. Wrap-up


Some useful links: 

Feel free to get in touch with Michelle Yi via LinkedIn - https://www.linkedin.com/in/michelleyulleyi/

Learn more about Women in Data here: https://www.womenindata.org/

Learn more about Girls Who Code here: https://girlswhocode.com/

Links to Tools and Research mentioned in this podcast

Learn more about the Nightshade project here: https://nightshade.cs.uchicago.edu/whatis.html

Learn more about Phoenix, the open-source LLM observability tool, here: https://github.com/Arize-ai/phoenix
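
As a rough sketch (not covered in the episode), getting started with Phoenix locally typically looks something like the following; the exact API may vary by release:

```python
import phoenix as px  # pip install arize-phoenix

# Launch the local Phoenix app; traces from instrumented LLM pipelines
# can then be inspected in the browser for debugging and observability.
session = px.launch_app()
print(session.url)  # URL of the running Phoenix UI
```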

Learn more about AdaPrompt: Adaptive Model Training for Prompt-based NLP here: https://arxiv.org/abs/2202.04824

Learn more about Chirho, an experimental language for causal reasoning, here: https://github.com/BasisResearch/chirho

Learn more about Pyro, a universal probabilistic programming language (PPL) written in Python, here: https://pyro.ai/
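
As a brief illustrative sketch (not discussed in the episode), a minimal Pyro model with a latent parameter and noisy observations might look like this:

```python
import torch
import pyro
import pyro.distributions as dist

def model(data):
    # Latent weight with a standard-normal prior
    weight = pyro.sample("weight", dist.Normal(0.0, 1.0))
    # Each observation is a noisy measurement of the latent weight
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(weight, 0.5), obs=data)

model(torch.tensor([0.9, 1.1, 1.2]))
```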
