Detecting Harmful Content at Scale // Matar Haller // #245

Update: 2024-07-09

Description

Matar Haller is the VP of Data & AI at ActiveFence, where her teams own the end-to-end automated detection of harmful content at scale, regardless of the abuse area or media type. The work is engaging, impactful, and tough, and Matar is grateful for the people she gets to do it with.

AI For Good - Detecting Harmful Content at Scale // MLOps Podcast #245 with Matar Haller, VP of Data & AI at ActiveFence.

// Abstract
One of the biggest challenges facing online platforms today is detecting harmful content and malicious behavior. Platform abuse poses brand and legal risks, harms the user experience, and often blurs the line between online and offline harm. So how can online platforms tackle abuse in a world where bad actors continuously change their tactics and develop new ways to avoid detection?

// Bio
Matar Haller leads the Data & AI Group at ActiveFence, where her teams are responsible for the data, algorithms, and infrastructure that fuel ActiveFence’s ability to ingest, detect, and analyze harmful activity and malicious content at scale in an ever-changing, complex online landscape. Matar holds a Ph.D. in Neuroscience from the University of California at Berkeley, where she recorded and analyzed signals from electrodes surgically implanted in human brains. Matar is passionate about expanding leadership opportunities for women in STEM fields and has three children who surprise and inspire her every day.

// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
activefence.com
https://www.youtube.com/@ActiveFence

--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/

Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Matar on LinkedIn: https://www.linkedin.com/company/11682234/admin/feed/posts/

Timestamps:
[00:00] Matar's preferred coffee
[00:13] Takeaways
[01:39] The talk that stood out
[06:15] Online hate speech challenges
[08:13] Evaluate harmful media API
[09:58] Content moderation: AI models
[11:36] Optimizing speed and accuracy
[13:36] Cultural reference AI training
[15:55] Functional Tests
[20:05] Continuous adaptation of AI
[26:43] AI detection concerns
[29:12] Fine-Tuned vs Off-the-Shelf
[32:04] Monitoring Transformer Model Hallucinations
[34:08] Auditing process ensures accuracy
[38:38] Testing strategies for ML
[40:05] Modeling hate speech deployment
[42:19] Improving production code quality
[43:52] Finding balance in Moderation
[47:23] Model's expertise: Cultural Sensitivity
[50:26] Wrap up
