AI Legitimacy and Data Privacy with Ruth Marshall
Description
Mindful AI’s guest, Ruth Marshall, works on real-world solutions for data privacy, Privacy Enhancing Technologies, and frameworks and methodologies for the responsible use of data. Ruth has spent the past 25 years moving between collaborative research and corporate communities, with a background in software and Artificial Intelligence. In the earlier half of her career, Ruth was responsible for product development and R&D at five software companies, including Accenture, Harvey Norman, and Novartis Pharmaceuticals. She is now co-founder of a data literacy and ethics education initiative at Hocone, where she works with organisations to develop frameworks and education programs for the responsible use of data. She was also engaged by the NSW Government to outline an approach and framework for responsible data use across the organisation.
In Episode 9, we chat about:
- Ruth’s main concerns around privacy and legitimacy (05:18): "I don't think people are even making assumptions right now about …whether the AI application that they're building, they have any legitimacy to do that. Are they the right person to do it?"
- Co-creation and getting input and feedback from affected groups are important for establishing legitimacy and trust; constant feedback loops are needed to flag issues.
- An example of a legitimacy concern: a water charity using AI to understand water access in African communities without considering whether they are the right people to be telling these communities how to organise their lives (07:45)
- Indigenous data sovereignty groups have long considered legitimacy an important concept regarding data and AI, stemming from illegitimate reorganization of their lives by European settlers.
- The need for more data literacy and ethics around data collection, preparation, and provenance, and issues around representation, privacy, and legitimacy of use. There's a proliferation of AI tools and models with little quality control (20:38).
- The lack of professionalisation and standards in AI and software engineering: there are no curriculum requirements or guarantees of baseline knowledge. Ruth suggests we need to move towards treating it as a profession with standards (21:46).
- The need for balance between quality control/frameworks and not creating monopolies or barriers to entry. Favouring education over stringent restrictions.
- On measuring outcomes: we need to refer back to original goals, but also monitor for unintended consequences using lived experience. Borrow from practices like post-market monitoring of drugs.
- Models become outdated as the world changes, so we need ongoing external validation of algorithms, data, and real-world interactions. Issues arise from changing context, not just the AI itself.
- Overall importance of trust, transparency, co-creation with affected groups, adapting models to changing world, and ongoing review of intended and unintended outcomes
In regard to AI competence vs performance, Ruth would like to credit Rodney Brooks for the ideas she referenced; see Brooks’s article: https://spectrum.ieee.org/gpt-4-calm-down