In episode 29 of Recsperts, I welcome Craig Macdonald, Professor of Information Retrieval at the University of Glasgow, and Aleksandr “Sasha” Petrov, PhD researcher and former applied scientist at Amazon. Together, we dive deep into sequential recommender systems and the growing role of transformer models such as SASRec and BERT4Rec.

Our conversation begins with their influential replicability study of BERT4Rec, which revealed inconsistencies in reported results and highlighted the importance of training objectives over architecture tweaks. From there, Craig and Sasha guide us through their award-winning research on making transformers for sequential recommendation on large corpora both more effective and more efficient. We discuss how recency sampling (RSS) reduces training times dramatically, and how gSASRec overcomes the overconfidence of models trained with negative sampling. By generalizing the sigmoid function into the gBCE loss, they reconcile negative sampling with cross-entropy-based optimization, matching the effectiveness of softmax approaches while keeping training scalable for large corpora.

We also explore RecJPQ, their recent work on joint product quantization for item embeddings. This approach makes transformer-based sequential recommenders substantially faster at inference and far more memory-efficient in their item embeddings, while sometimes even improving effectiveness thanks to a regularization effect.

Towards the end, Craig and Sasha share their perspective on generative approaches like GPTRec, the promises and limits of large language models in recommendation, and the challenges that remain for the future of sequential recommender systems.

Enjoy this enriching episode of RECSPERTS – Recommender Systems Experts. Don’t forget to follow the podcast and please leave a review.

(00:00) - Introduction
(04:09) - About Craig Macdonald
(04:46) - About Sasha Petrov
(13:48) - Tutorial on Transformers for Sequential Recommendations
(19:24) - SASRec vs. BERT4Rec
(21:25) - Replicability Study of BERT4Rec for Sequential Recommendation
(32:52) - Training Sequential RecSys using Recency Sampling
(40:01) - gSASRec for Reducing Overconfidence by Negative Sampling
(01:00:51) - RecJPQ: Training Large-Catalogue Sequential Recommenders
(01:21:37) - Generative Sequential Recommendation with GPTRec
(01:29:12) - Further Challenges and Closing Remarks

Links from the Episode:
- Craig Macdonald on LinkedIn
- Sasha Petrov on LinkedIn
- Sasha's Website
- Tutorial: Transformers for Sequential Recommendation (ECIR 2024)
- Tutorial Recording from ACM European Summer School in Bari (2024)
- Talk: Neural Recommender Systems (European Summer School in Information Retrieval 2024)

Papers:
- Kang et al. (2018): Self-Attentive Sequential Recommendation
- Sun et al. (2019): BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer
- Petrov et al. (2022): A Systematic Review and Replicability Study of BERT4Rec for Sequential Recommendation
- Petrov et al. (2022): Effective and Efficient Training for Sequential Recommendation using Recency Sampling
- Petrov et al. (2024): RSS: Effective and Efficient Training for Sequential Recommendation Using Recency Sampling (extended version)
- Petrov et al. (2023): gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling
- Petrov et al. (2025): Improving Effectiveness by Reducing Overconfidence in Large Catalogue Sequential Recommendation with gBCE loss
- Petrov et al. (2024): RecJPQ: Training Large-Catalogue Sequential Recommenders
- Petrov et al. (2024): Efficient Inference of Sub-Item Id-based Sequential Recommendation Models with Millions of Items
- Rajput et al. (2023): Recommender Systems with Generative Retrieval
- Petrov et al. (2023): Generative Sequential Recommendation with GPTRec
- Petrov et al. (2024): Aligning GPTRec with Beyond-Accuracy Goals with Reinforcement Learning

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website

Disclaimer: Craig holds concurrent appointments as a Professor of Information Retrieval at the University of Glasgow and as an Amazon Scholar. This podcast describes work performed at the University of Glasgow and is not associated with Amazon.
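The gBCE loss discussed in this episode can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the generalized sigmoid enters as a power β on the positive score's sigmoid, which turns into a weight on the positive log-sigmoid term (log σ(s)^β = β · log σ(s)); the exponent formula β = α(t(1 − 1/α) + 1/α), with α the negative sampling rate and t a calibration hyperparameter, follows the gSASRec paper; all function and variable names are my own.

```python
import numpy as np

def log_sigmoid(x):
    # numerically stable log(sigmoid(x))
    return -np.logaddexp(0.0, -x)

def gbce_loss(pos_logit, neg_logits, catalogue_size, t=0.75):
    """gBCE for one positive and k sampled negatives.

    t = 0 recovers plain BCE (beta = 1); t = 1 fully calibrates
    the positive term for the negative sampling rate alpha.
    """
    k = len(neg_logits)
    alpha = k / (catalogue_size - 1)                   # negative sampling rate
    beta = alpha * (t * (1 - 1 / alpha) + 1 / alpha)   # calibration exponent
    pos_term = beta * log_sigmoid(pos_logit)           # log sigma(s+)^beta
    neg_term = np.sum(log_sigmoid(-neg_logits))        # sum log(1 - sigma(s-))
    return -(pos_term + neg_term)
```

With a large catalogue and few negatives, β is small, so the positive term is strongly downweighted, which is what counteracts the overconfidence of sampled-softmax-free training.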
In episode 28 of Recsperts, I sit down with Robin Burke, Professor of Information Science at the University of Colorado Boulder and a leading expert with over 30 years of experience in recommender systems. Together, we explore multistakeholder recommender systems, fairness, transparency, and the role of recommender systems in the age of evolving generative AI.

We begin by tracing the origins of recommender systems, traditionally built around user-centric models. However, Robin challenges this perspective, arguing that all recommender systems are inherently multistakeholder: they serve not just consumers as the recipients of recommendations, but also content providers, platform operators, and other key players with partially competing interests. He explains why the common “Recommended for You” label is, at best, an oversimplification and how greater transparency is needed to show how stakeholder interests are balanced.

Our conversation also delves into practical approaches for handling multiple objectives, including reranking strategies versus integrated optimization. While embedding multistakeholder concerns directly into models may be ideal, reranking offers a more flexible and efficient alternative, reducing the need for frequent retraining.

Towards the end of our discussion, we explore post-userism and the impact of generative AI on recommender systems. With AI-generated content on the rise, Robin raises a critical concern: if recommendation systems remain overly user-centric, generative content could marginalize human creators, diminishing their revenue streams.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:24) - About Robin Burke and First Recommender Systems
(26:07) - From Fairness and Advertising to Multistakeholder RecSys
(34:10) - Multistakeholder RecSys Terminology
(40:16) - Multistakeholder vs. Multiobjective
(42:43) - Reciprocal and Value-Aware RecSys
(59:14) - Objective Integration vs. Reranking
(01:06:31) - Social Choice for Recommendations under Fairness
(01:17:40) - Post-Userist Recommender Systems
(01:26:34) - Further Challenges and Closing Remarks

Links from the Episode:
- Robin Burke on LinkedIn
- Robin's Website
- That Recommender Systems Lab
- Reference to Broder's Keynote on Computational Advertising and Recommender Systems from RecSys 2008
- Multistakeholder Recommender Systems (from Recommender Systems Handbook), chapter by Himan Abdollahpouri & Robin Burke
- POPROX: The Platform for OPen Recommendation and Online eXperimentation
- AltRecSys 2024 (Workshop at RecSys 2024)

Papers:
- Burke et al. (1996): Knowledge-Based Navigation of Complex Information Spaces
- Burke (2002): Hybrid Recommender Systems: Survey and Experiments
- Resnick et al. (1997): Recommender Systems
- Goldberg et al. (1992): Using collaborative filtering to weave an information tapestry
- Linden et al. (2003): Amazon.com Recommendations - Item-to-Item Collaborative Filtering
- Aird et al. (2024): Social Choice for Heterogeneous Fairness in Recommendation
- Aird et al. (2024): Dynamic Fairness-aware Recommendation Through Multi-agent Social Choice
- Burke et al. (2024): Post-Userist Recommender Systems: A Manifesto
- Baumer et al. (2017): Post-userism
- Burke et al. (2024): Conducting Recommender Systems User Studies Using POPROX

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website
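The reranking-versus-integration tradeoff discussed in this episode can be made concrete with a toy greedy reranker that blends each candidate's relevance with a penalty on providers already shown in the slate. This is a hypothetical sketch of the general idea, not Robin's specific method; the candidate format and the λ weighting are my own assumptions.

```python
from collections import Counter

def rerank(candidates, k=3, lam=0.7):
    """Greedy multistakeholder reranking.

    candidates: list of (item_id, provider, relevance) tuples.
    Trades off consumer relevance (weight lam) against provider
    exposure already accumulated in this slate (weight 1 - lam).
    """
    selected, exposure = [], Counter()
    pool = list(candidates)
    for _ in range(min(k, len(pool))):
        # penalize providers that already occupy slots in the slate
        best = max(pool, key=lambda c: lam * c[2] - (1 - lam) * exposure[c[1]])
        selected.append(best)
        exposure[best[1]] += 1
        pool.remove(best)
    return [c[0] for c in selected]
```

Because the base model stays untouched, the λ knob can be changed without retraining, which is the flexibility argument made in the episode.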
In episode 27 of Recsperts, we meet Alessandro Piscopo, Lead Data Scientist in Personalization and Search, and Duncan Walker, Principal Data Scientist in the iPlayer Recommendations Team, both from the BBC. We discuss how the BBC personalizes recommendations across different offerings, from news to video and audio content. We learn about the core values of the oldest public service media organization and the collaboration with editors in that process.

The BBC once started with short video recommendations for BBC+ and nowadays has to consider recommendations across multiple domains: news, the iPlayer, BBC Sounds, BBC Bitesize, and more. With a reach of 500M+ users accessing its services every week, there is huge potential. My guests discuss the challenges of aligning recommendations with public service values and the role of editors, with constant exchange, alignment, and learning between the algorithmic and editorial lines of recommender systems.

We also discuss the potential of cross-domain recommendations to leverage content across different products, as well as the organizational setup of teams working on recommender systems at the BBC. We learn about skews in the data due to the nature of an online service that also has a linear offering with TV and radio services.

Towards the end, we also touch a bit on QUARE @ RecSys, the Workshop on Measuring the Quality of Explanations in Recommender Systems.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:10) - About Alessandro Piscopo and Duncan Walker
(14:53) - RecSys Applications at the BBC
(20:22) - Journey of Building Public Service Recommendations
(28:02) - Role and Implementation of Public Service Values
(36:52) - Algorithmic and Editorial Recommendation
(01:01:54) - Further RecSys Challenges at the BBC
(01:15:53) - QUARE Workshop
(01:23:27) - Closing Remarks

Links from the Episode:
- Alessandro Piscopo on LinkedIn
- Duncan Walker on LinkedIn
- BBC
- QUARE @ RecSys 2023 (2nd Workshop on Measuring the Quality of Explanations in Recommender Systems)

Papers:
- Clarke et al. (2023): Personalised Recommendations for the BBC iPlayer: Initial approach and current challenges
- Boididou et al. (2021): Building Public Service Recommenders: Logbook of a Journey
- Piscopo et al. (2019): Data-Driven Recommendations in a Public Service Organisation

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website
In episode 26 of Recsperts, I speak with Sanne Vrijenhoek, a PhD candidate at the University of Amsterdam’s Institute for Information Law and the AI, Media & Democracy Lab. Sanne’s research explores diversity in recommender systems, particularly in the news domain, and its connection to democratic values and goals.

We dive into four of her papers, which focus on how diversity is conceptualized in news recommender systems. Sanne introduces us to five rank-aware divergence metrics for measuring normative diversity and explains why diversity evaluation shouldn’t be approached blindly: first, we need to clarify the underlying values. She also presents a normative framework for these metrics, linking them to different democratic theory perspectives. Beyond evaluation, we discuss how to optimize diversity in recommender systems and reflect on missed opportunities, such as the RecSys Challenge 2024, which could have gone beyond accuracy-chasing. Sanne also shares her recommendations for improving the challenge by incorporating objectives such as diversity.

During our conversation, Sanne shares insights on effectively communicating recommender systems research to non-technical audiences. To wrap up, we explore ideas for fostering a more diverse RecSys research community, integrating perspectives from multiple disciplines.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:24) - About Sanne Vrijenhoek
(14:49) - What Does Diversity in RecSys Mean?
(26:32) - Assessing Diversity in News Recommendations
(34:54) - Rank-Aware Divergence Metrics to Measure Normative Diversity
(01:01:37) - RecSys Challenge 2024 - Recommendations for the Recommenders
(01:11:23) - RecSys Workshops - NORMalize and AltRecSys
(01:15:39) - On the Different Conceptualizations of Diversity in RecSys
(01:28:38) - Closing Remarks

Links from the Episode:
- Sanne Vrijenhoek on LinkedIn
- Informfully
- MIND: MIcrosoft News Dataset
- RecSys Challenge 2024
- NORMalize 2023: The First Workshop on the Normative Design and Evaluation of Recommender Systems
- NORMalize 2024: The Second Workshop on the Normative Design and Evaluation of Recommender Systems
- AltRecSys 2024: The AltRecSys Workshop on Alternative, Unexpected, and Critical Ideas in Recommendation

Papers:
- Vrijenhoek et al. (2021): Recommenders with a Mission: Assessing Diversity in News Recommendations
- Vrijenhoek et al. (2022): RADio – Rank-Aware Divergence Metrics to Measure Normative Diversity in News Recommendations
- Heitz et al. (2024): Recommendations for the Recommenders: Reflections on Prioritizing Diversity in the RecSys Challenge
- Vrijenhoek et al. (2024): Diversity of What? On the Different Conceptualizations of Diversity in Recommender Systems
- Helberger (2019): On the Democratic Role of News Recommenders
- Steck (2018): Calibrated Recommendations

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website
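The rank-aware divergence idea behind metrics like RADio can be sketched as follows: build a rank-discounted category distribution from a ranked recommendation list and compare it to a reference distribution. The logarithmic discount and the KL divergence below are illustrative choices (RADio itself covers several divergences and normative reference distributions), and all names are mine.

```python
import math
from collections import defaultdict

def rank_discounted_dist(ranked_categories):
    """Categorical distribution over e.g. news topics, where items at
    higher ranks contribute more weight (logarithmic discount)."""
    weights = defaultdict(float)
    for rank, cat in enumerate(ranked_categories, start=1):
        weights[cat] += 1.0 / math.log2(rank + 1)
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) over the union of categories, smoothed to tolerate
    categories missing from q."""
    cats = set(p) | set(q)
    return sum(p[c] * math.log((p[c] + eps) / (q.get(c, eps) + eps))
               for c in cats if p.get(c, 0.0) > 0)
```

A recommendation list whose topic mix matches the normative reference gets divergence near zero; concentrating the top ranks on one topic drives it up.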
In episode 25, we talk about the upcoming ACM Conference on Recommender Systems 2024 (RecSys) and welcome a former guest to geek out about the conference.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(01:56) - Overview RecSys 2024
(07:01) - Contribution Stats
(09:37) - Interview

Links from the Episode:
- RecSys 2024 Conference Website

Papers:
- RecSys '24: Proceedings of the 18th ACM Conference on Recommender Systems

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website
In episode 24 of Recsperts, I sit down with Amey Dharwadker, Machine Learning Engineering Manager at Facebook, to dive into the complexities of large-scale video recommendations. Amey, who leads the Video Recommendations Quality Ranking team at Facebook, sheds light on the intricate challenges of delivering personalized video feeds at scale. Our conversation covers content understanding, user interaction data, real-time signals, exploration, and evaluation techniques.

We kick off the episode by reflecting on the inaugural VideoRecSys workshop at RecSys 2023, setting the stage for a deeper discussion of Facebook’s approach to video recommendations. Amey walks us through the critical challenges they face, such as gathering reliable user feedback signals to avoid pitfalls like watchbait. With a vast and ever-growing corpus of billions of videos, millions of which are added each month, the cold start problem looms large. We explore how content understanding, user feedback aggregation, and exploration techniques help address this issue. Amey explains how engagement metrics like watch time, comments, and reactions are used to rank content, ensuring users receive meaningful and diverse video feeds.

A key highlight of the conversation is the importance of real-time personalization in fast-paced environments, such as short-form video platforms, where user preferences change quickly. Amey also emphasizes the value of cross-domain data in enriching user profiles and improving recommendations.

Towards the end, Amey shares his insights on leadership in machine learning teams, pointing out the characteristics of a great ML team.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(02:32) - About Amey Dharwadker
(08:39) - Video Recommendation Use Cases on Facebook
(16:18) - Recommendation Teams and Collaboration
(25:04) - Challenges of Video Recommendations
(31:07) - Video Content Understanding and Metadata
(33:18) - Multi-Stage RecSys and Models
(42:42) - Goals and Objectives
(49:04) - User Behavior Signals
(59:38) - Evaluation
(01:06:33) - Cross-Domain User Representation
(01:08:49) - Leadership and What Makes a Great Recommendation Team
(01:13:01) - Closing Remarks

Links from the Episode:
- Amey Dharwadker on LinkedIn
- Amey's Website
- RecSys Challenge 2021
- VideoRecSys Workshop 2023
- VideoRecSys + LargeRecSys 2024

Papers:
- Mahajan et al. (2023): CAViaR: Context Aware Video Recommendations
- Mahajan et al. (2023): PIE: Personalized Interest Exploration for Large-Scale Recommender Systems
- Raul et al. (2023): CAM2: Conformity-Aware Multi-Task Ranking Model for Large-Scale Recommender Systems
- Zhai et al. (2024): Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations
- Saket et al. (2023): Formulating Video Watch Success Signals for Recommendations on Short Video Platforms
- Wang et al. (2022): Surrogate for Long-Term User Experience in Recommender Systems
- Su et al. (2024): Long-Term Value of Exploration: Measurements, Findings and Algorithms

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website
In episode 23 of Recsperts, we welcome Yashar Deldjoo, Assistant Professor at the Polytechnic University of Bari, Italy. Yashar's research on recommender systems includes multimodal approaches, multimedia recommender systems as well as trustworthiness and adversarial robustness, where he has published extensively. We discuss the evolution of generative models for recommender systems, modeling paradigms, scenarios as well as their evaluation, risks and harms.

We begin our interview with a reflection on Yashar's areas of recommender systems research so far. Starting with multimedia recsys, particularly video recommendations, Yashar covers his work around adversarial robustness and trustworthiness, leading to the main topic for this episode: generative models for recommender systems. We learn how they can improve on the (partially saturated) state of traditional recommender systems: improving effectiveness and efficiency for top-n recommendations, introducing interactivity beyond classical conversational recsys, and providing personalized zero- or few-shot recommendations.

We learn about the modeling paradigms as well as the scenarios for generative models, which mainly differ by input and modeling approach: ID-based, text-based, and multimodal generative models. This is how we navigate the large field of acronyms, from VAEs and GANs to LLMs.

Towards the end of the episode, we also touch on the evaluation, opportunities, risks and harms of generative models for recommender systems. Yashar also provides us with a wealth of references and upcoming events where people get the chance to learn more about GenRecSys.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:58) - About Yashar Deldjoo
(09:34) - Motivation for RecSys
(13:05) - Intro to Generative Models for Recommender Systems
(44:27) - Modeling Paradigms for Generative Models
(51:33) - Scenario 1: Interaction-Driven Recommendation
(57:59) - Scenario 2: Text-based Recommendation
(01:10:39) - Scenario 3: Multimodal Recommendation
(01:24:59) - Evaluation of Impact and Harm
(01:38:07) - Further Research Challenges
(01:45:03) - References and Research Advice
(01:49:39) - Closing Remarks

Links from the Episode:
- Yashar Deldjoo on LinkedIn
- Yashar's Website
- KDD 2024 Tutorial: Modern Recommender Systems Leveraging Generative AI: Fundamentals, Challenges and Opportunities
- RecSys 2024 Workshop: The 1st Workshop on Risks, Opportunities, and Evaluation of Generative Models in Recommender Systems (ROEGEN@RECSYS'24)

Papers:
- Deldjoo et al. (2024): A Review of Modern Recommender Systems Using Generative Models (Gen-RecSys)
- Deldjoo et al. (2020): Recommender Systems Leveraging Multimedia Content
- Deldjoo et al. (2021): A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks
- Deldjoo et al. (2020): How Dataset Characteristics Affect the Robustness of Collaborative Recommendation Models
- Liang et al. (2018): Variational Autoencoders for Collaborative Filtering
- He et al. (2016): VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website
In episode 22 of Recsperts, we welcome Prabhat Agarwal, Senior ML Engineer, and Aayush Mudgal, Staff ML Engineer, both from Pinterest, to the show. Prabhat works on recommendations and search systems at Pinterest, leading representation learning efforts. Aayush is responsible for ads ranking and privacy-aware conversion modeling. We discuss user and content modeling, short- vs. long-term objectives, evaluation, and multi-task learning, and also touch on counterfactual evaluation.

In our interview, Prabhat guides us through the journey of continuous improvements to Pinterest's Homefeed personalization, from techniques such as gradient boosting over two-tower models to DCN and transformers. We discuss how to capture users' short- and long-term preferences through multiple embeddings and the role of candidate generators for content diversification. Prabhat shares some details about position debiasing and the challenges of facilitating exploration.

With Aayush we get the chance to dive into the specifics of ads ranking at Pinterest, and he helps us better understand how multifaceted ads can be. We learn more about the pain of having too many models and Pinterest's efforts to consolidate the model landscape to improve infrastructural costs, maintainability, and efficiency. Aayush also shares some insights about exploration and the corresponding randomization in the context of ads, and how user behavior differs between different kinds of ads.

Both guests highlight the role of counterfactual evaluation and its impact on faster experimentation. Towards the end of the episode, we also touch a bit on learnings from last year's RecSys challenge.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:51) - Guest Introductions
(09:57) - Pinterest Introduction
(21:57) - Homefeed Personalization
(47:27) - Ads Ranking
(01:14:58) - RecSys Challenge 2023
(01:20:26) - Closing Remarks

Links from the Episode:
- Prabhat Agarwal on LinkedIn
- Aayush Mudgal on LinkedIn
- RecSys Challenge 2023
- Pinterest Engineering Blog
- Pinterest Labs
- Prabhat's Talk at GTC 2022: Evolution of web-scale engagement modeling at Pinterest
- Blogpost: How we use AutoML, Multi-task learning and Multi-tower models for Pinterest Ads
- Blogpost: Pinterest Home Feed Unified Lightweight Scoring: A Two-tower Approach
- Blogpost: Experiment without the wait: Speeding up the iteration cycle with Offline Replay Experimentation
- Blogpost: MLEnv: Standardizing ML at Pinterest Under One ML Engine to Accelerate Innovation
- Blogpost: Handling Online-Offline Discrepancy in Pinterest Ads Ranking System

Papers:
- Eksombatchai et al. (2018): Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in Real-Time
- Ying et al. (2018): Graph Convolutional Neural Networks for Web-Scale Recommender Systems
- Pal et al. (2020): PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest
- Pancha et al. (2022): PinnerFormer: Sequence Modeling for User Representation at Pinterest
- Zhao et al. (2019): Recommending what video to watch next: a multitask ranking system

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website
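The two-tower models mentioned for lightweight scoring can be sketched minimally: a user tower and an item tower map their respective features into a shared embedding space, and scoring reduces to dot products against item embeddings that can be precomputed offline. Dimensions, weights, and names below are made up for illustration; this is not Pinterest's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def tower(x, w1, w2):
    """A tiny MLP tower: raw features -> shared embedding space."""
    h = np.maximum(0.0, x @ w1)   # ReLU hidden layer
    e = h @ w2
    return e / np.linalg.norm(e)  # unit-normalize for dot-product scoring

# made-up dimensions: 8-d user features, 6-d item features, 4-d embedding
uw1, uw2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
iw1, iw2 = rng.normal(size=(6, 16)), rng.normal(size=(16, 4))

user_emb = tower(rng.normal(size=8), uw1, uw2)
item_embs = np.stack([tower(rng.normal(size=6), iw1, iw2) for _ in range(5)])

# one dot product per candidate; since item embeddings are precomputable,
# online scoring stays cheap, which is what makes the approach "lightweight"
scores = item_embs @ user_emb
ranking = np.argsort(-scores)
```

The design choice worth noting is the late interaction: user and item never meet before the final dot product, so the candidate side can live in an offline index.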
In episode 21 of Recsperts, we welcome Martijn Willemsen, Associate Professor at the Jheronimus Academy of Data Science and Eindhoven University of Technology. Martijn researches interactive recommender systems, including aspects of decision psychology and user-centric evaluation. We discuss how users gain control over recommendations, how to support their goals and needs, as well as how the user-centric evaluation framework fits into all of this.

In our interview, Martijn outlines the reasons for giving users control over recommendations and how to holistically evaluate the satisfaction and usefulness of recommendations for users' goals and needs. We discuss the psychology of decision making with respect to how well (or not) recommender systems support it. We also dive into music recommender systems and discuss how nudging users to explore new genres can work, as well as how longitudinal studies in recommender systems research can advance insights.

Towards the end of the episode, Martijn and I also discuss some examples and the usefulness of enabling users to provide negative explicit feedback to the system.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(03:03) - About Martijn Willemsen
(15:14) - Waves of User-Centric Evaluation in RecSys
(19:35) - Behaviorism is not Enough
(46:21) - User-Centric Evaluation Framework
(01:05:38) - Genre Exploration and Longitudinal Studies in Music RecSys
(01:20:59) - User Control and Negative Explicit Feedback
(01:31:50) - Closing Remarks

Links from the Episode:
- Martijn Willemsen on LinkedIn
- Martijn Willemsen's Website
- User-centric Evaluation Framework
- Behaviorism is not Enough (Talk at RecSys 2016)
- Neil Hunt: Quantifying the Value of Better Recommendations (Keynote at RecSys 2014)
- What recommender systems can learn from decision psychology about preference elicitation and behavioral change (Talk at Boise State (Idaho) and GroupLens at University of Minnesota)
- Eric J. Johnson: The Elements of Choice
- Rasch Model
- Spotify Web API

Papers:
- Ekstrand et al. (2016): Behaviorism is not Enough: Better Recommendations Through Listening to Users
- Knijnenburg et al. (2012): Explaining the user experience of recommender systems
- Ekstrand et al. (2014): User perception of differences in recommender algorithms
- Liang et al. (2022): Exploring the longitudinal effects of nudging on users’ music genre exploration behavior and listening preferences
- McNee et al. (2006): Being accurate is not enough: how accuracy metrics have hurt recommender systems

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website
In episode 20 of Recsperts, we welcome Bram van den Akker, Senior Machine Learning Scientist at Booking.com. Bram's work focuses on bandit algorithms and counterfactual learning. He was one of the creators of the Practical Bandits tutorial at the World Wide Web conference. We talk about the role of bandit feedback in decision making systems, specifically for recommendations in the travel industry.

In our interview, Bram elaborates on bandit feedback and how it is used in practice. We discuss off-policy and on-policy bandits, and we learn that counterfactual evaluation is well-suited for selecting the best model candidates for downstream A/B testing, but is not a replacement for it. We hear more about the practical challenges of bandit feedback, for example the difference between model scores and propensities, the role of stochasticity, or the nitty-gritty details of reward signals. Bram also shares the challenges of recommendations in the travel domain, where he points out the sparsity of signals and the delay of feedback.

At the end of the episode, we both agree on a good example of a clickbait-heavy news service on our phones.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(02:58) - About Bram van den Akker
(09:16) - Motivation for Practical Bandits Tutorial
(16:53) - Specifics and Challenges of Travel Recommendations
(26:19) - Role of Bandit Feedback in Practice
(49:13) - Motivation for Bandit Feedback
(01:00:54) - Practical Start for Counterfactual Evaluation
(01:06:33) - Role of Business Rules
(01:17:48) - Rewards and More
(01:32:45) - Closing Remarks

Links from the Episode:
- Bram van den Akker on LinkedIn
- Practical Bandits: An Industry Perspective (Website)
- Practical Bandits: An Industry Perspective (Recording)
- Tutorial at The Web Conference 2020: Unbiased Learning to Rank: Counterfactual and Online Approaches
- Tutorial at RecSys 2021: Counterfactual Learning and Evaluation for Recommender Systems: Foundations, Implementations, and Recent Advances
- GitHub: Open Bandit Pipeline

Papers:
- van den Akker et al. (2023): Practical Bandits: An Industry Perspective
- van den Akker et al. (2022): Extending Open Bandit Pipeline to Simulate Industry Challenges
- van den Akker et al. (2019): ViTOR: Learning to Rank Webpages Based on Visual Features

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
- Recsperts Website
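Counterfactual evaluation with bandit feedback, as discussed in this episode, typically rests on inverse propensity scoring (IPS): reweight logged rewards by how much more (or less) likely the new policy is to take the logged action than the logging policy was. A minimal sketch, assuming the logging policy's propensities were recorded; the clipping threshold and all names are illustrative.

```python
import numpy as np

def ips_estimate(rewards, logged_propensities, target_probs, clip=10.0):
    """Estimate the average reward a target policy would collect,
    from logs gathered under a different (logging) policy.

    rewards:             observed reward per logged interaction
    logged_propensities: probability the logging policy gave the action
    target_probs:        probability the target policy gives that action
    Weights are clipped to curb variance, at the cost of some bias.
    """
    w = np.minimum(target_probs / logged_propensities, clip)
    return float(np.mean(w * rewards))
```

As the episode stresses, such estimates are good for picking which candidates to send to an A/B test, not for replacing the test: variance (and clipping bias) grows quickly when the target policy strays far from the logging policy.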
In episode 19 of Recsperts, we welcome Himan Abdollahpouri, Applied Research Scientist for Personalization & Machine Learning at Spotify. We discuss the role of popularity bias in recommender systems, which was the topic of Himan's dissertation. We talk about multi-objective and multi-stakeholder recommender systems as well as the challenges of music and podcast streaming personalization at Spotify.

In our interview, Himan walks us through popularity bias as the main cause of unfair recommendations for multiple stakeholders. We discuss the consumer- and provider-side implications and how to evaluate popularity bias. The major problem is not the sheer existence of popularity bias, but its propagation through various collaborative filtering algorithms. We also learn how to counteract it by debiasing the data, the model itself, or its output. In addition, we hear more about the relationship between multi-objective and multi-stakeholder recommender systems.

At the end of the episode, Himan also shares the influence of popularity bias on music and podcast streaming at Spotify, as well as how calibration helps to better tailor content to users' preferences.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

(00:00) - Introduction
(04:43) - About Himan Abdollahpouri
(15:23) - What is Popularity Bias and why is it important?
(25:05) - Effect of Popularity Bias in Collaborative Filtering
(30:30) - Individual Sensitivity towards Popularity
(36:25) - Introduction to Bias Mitigation
(53:16) - Content for Bias Mitigation
(56:53) - Evaluating Popularity Bias
(01:05:01) - Popularity Bias in Music and Podcast Streaming
(01:08:04) - Multi-Objective Recommender Systems
(01:16:13) - Multi-Stakeholder Recommender Systems
(01:18:38) - Recommendation Challenges at Spotify
(01:35:16) - Closing Remarks

Links from the Episode:
- Himan Abdollahpouri on LinkedIn
- Himan Abdollahpouri on X
- Himan's Website
- Himan's PhD Thesis on "Popularity Bias in Recommendation: A Multi-stakeholder Perspective"
- 2nd Workshop on Multi-Objective Recommender Systems (MORS @ RecSys 2022)

Papers:
- Su et al. (2009): A Survey of Collaborative Filtering Techniques
- Mehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender Systems
- Abdollahpouri et al. (2021): User-centered Evaluation of Popularity Bias in Recommender Systems
- Abdollahpouri et al. (2019): The Unfairness of Popularity Bias in Recommendation
- Abdollahpouri et al. (2017): Controlling Popularity Bias in Learning-to-Rank Recommendation
- Wasilewski et al. (2016): Incorporating Diversity in a Learning to Rank Recommender System
- Oh et al. (2011): Novel Recommendation Based on Personal Popularity Tendency
- Steck (2018): Calibrated Recommendations
- Abdollahpouri et al. (2023): Calibrated Recommendations as a Minimum-Cost Flow Problem
- Seymen et al. (2022): Making smart recommendations for perishable and stockout products

General Links:
- Follow me on LinkedIn
- Follow me on X
- Send me your comments, questions and suggestions to marcel@recsperts.com
- Recsperts Website
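Evaluating popularity bias, as discussed in this episode, often starts from simple exposure statistics such as the average popularity of recommended items and the share of long-tail items in recommendation lists. The sketch below is a toy illustration with my own names and an assumed head/tail split by popularity rank, not Himan's exact evaluation protocol.

```python
from collections import Counter

def popularity_bias_metrics(recommendations, interactions, tail_fraction=0.8):
    """Two simple popularity-bias indicators.

    recommendations: dict user -> list of recommended item ids
    interactions:    list of item ids from the training log (one per event)
    Returns (average recommendation popularity, long-tail share).
    """
    pop = Counter(interactions)
    # items ranked by popularity; the long tail is everything after the head
    ranked = [item for item, _ in pop.most_common()]
    head = set(ranked[: max(1, int(len(ranked) * (1 - tail_fraction)))])
    rec_items = [i for recs in recommendations.values() for i in recs]
    arp = sum(pop[i] for i in rec_items) / len(rec_items)
    tail_share = sum(i not in head for i in rec_items) / len(rec_items)
    return arp, tail_share
```

A high average popularity combined with a low tail share relative to the training data is the propagation effect discussed above: the recommender amplifies what was already popular.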
In episode 18 of Recsperts, we hear from Professor Sole Pera from Delft University of Technology. We discuss the use of recommender systems for non-traditional populations, with children in particular. Sole shares the specifics, surprises, and subtleties of her research on recommendations for children.In our interview, Sole and I discuss use cases and domains which need particular attention with respect to non-traditional populations. Sole outlines some of the major challenges like lacking public datasets or multifaceted criteria for the suitability of recommendations. The highly dynamic needs and abilities of children make proper user modeling a crucial part of the design and development of recommender systems. We also touch on how children interact differently with recommender systems and learn that trust plays a major role here.Towards the end of the episode, we revisit the different goals and stakeholders involved in recommendations for children, especially the role of parents. We close with an overview of the current research community.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.Don't forget to follow the podcast and please leave a review(00:00) - Introduction (04:56) - About Sole Pera (06:37) - Non-traditional Populations (09:13) - Dedicated User Modeling (25:01) - Main Application Domains (40:16) - Lack of Data about non-traditional Populations (47:53) - Data for Learning User Profiles (57:09) - Interaction between Children and Recommendations (01:00:26) - Goals and Stakeholders (01:11:35) - Role of Parents and Trust (01:17:59) - Evaluation (01:26:59) - Research Community (01:32:37) - Closing Remarks Links from the Episode:Sole Pera on LinkedInSole's WebsiteChildren and RecommendersKidRec 2022People and Information Research Team (PIReT)Papers:Beyhan et al. (2023): Covering Covers: Characterization Of Visual Elements Regarding SleevesMurgia et al.
(2019): The Seven Layers of Complexity of Recommender Systems for Children in Educational ContextsPera et al. (2019): With a Little Help from My Friends: Use of Recommendations at SchoolCharisi et al. (2022): Artificial Intelligence and the Rights of the Child: Towards an Integrated Agenda for Research and PolicyGómez et al. (2021): Evaluating recommender systems with and for children: towards a multi-perspective frameworkNg et al. (2018): Recommending social-interactive games for adults with autism spectrum disorders (ASD)General Links:Follow me on LinkedInFollow me on TwitterSend me your comments, questions and suggestions to marcel@recsperts.comRecsperts Website
In episode 17 of Recsperts, we meet Miguel Fierro who is a Principal Data Science Manager at Microsoft and holds a PhD in robotics. We talk about the Microsoft recommenders repository with over 15k stars on GitHub and discuss the impact of LLMs on RecSys. Miguel also shares his view of the T-shaped data scientist.In our interview, Miguel shares how he transitioned from robotics into personalization as well as how the Microsoft recommenders repository started. We learn more about the three key components: examples, library, and tests. With more than 900 tests and more than 30 different algorithms, this library demonstrates a huge effort of open-source contribution and maintenance. We hear more about the principles that made this effort possible and successful. Miguel also shares the reasoning behind evidence-based design, which puts the users of microsoft-recommenders and their expectations first. We also discuss the impact that recent LLM-related innovations have on RecSys.At the end of the episode, Miguel explains the T-shaped data professional as advice on how to stay competitive and build a champion data team.
We conclude with some remarks regarding the adoption and ethical challenges recommender systems pose and which need further attention.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.Don't forget to follow the podcast and please leave a review(00:00) - Episode Overview (03:34) - Introduction Miguel Fierro (16:19) - Microsoft Recommenders Repository (30:04) - Structure of MS Recommenders (34:16) - Contributors to MS Recommenders (37:10) - Scalability of MS Recommenders (39:32) - Impact of LLMs on RecSys (48:26) - T-shaped Data Professionals (53:29) - Further RecSys Challenges (59:28) - Closing Remarks Links from the Episode:Miguel Fierro on LinkedInMiguel Fierro on TwitterMiguel's WebsiteMicrosoft RecommendersMcKinsey (2013): How retailers can keep up with consumersFortune (2012): Amazon's recommendation secretRecSys 2021 Keynote by Max Welling: Graph Neural Networks for Knowledge Representation and RecommendationPapers:Geng et al. (2022): Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)General Links:Follow me on LinkedInFollow me on TwitterSend me your comments, questions and suggestions to marcel@recsperts.comRecsperts Website
In episode 16 of Recsperts, we hear from Michael D. Ekstrand, Associate Professor at Boise State University, about fairness in recommender systems. We discuss why fairness matters and provide an overview of the multidimensional fairness-aware RecSys landscape. Furthermore, we talk about tradeoffs, methods and receive practical advice on how to get started with tackling unfairness.In our discussion, Michael outlines the difference and similarity between fairness and bias. We discuss several stages at which biases can enter the system as well as how bias can indeed support mitigating unfairness. We also cover the perspectives of different stakeholders with respect to fairness. We also learn that measuring fairness depends on the specific fairness concern one is interested in and that solving fairness universally is highly unlikely.Towards the end of the episode, we take a look at further challenges as well as how and where the upcoming RecSys 2023 provides a forum for those interested in fairness-aware recommender systems.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.(00:00) - Episode Overview (02:57) - Introduction Michael Ekstrand (17:08) - Motivation for Fairness-Aware Recommender Systems (25:45) - Overview and Definition of Fairness in RecSys (46:51) - Distributional and Representational Harm (53:59) - Relationship between Fairness and Bias (01:04:43) - Tradeoffs (01:13:36) - Methods and Metrics for Fairness (01:28:06) - Practical Advice for Tackling Unfairness (01:32:24) - Further Challenges (01:35:24) - RecSys 2023 (01:38:29) - Closing Remarks Links from the Episode:Michael Ekstrand on LinkedInMichael Ekstrand on MastodonMichael's WebsiteGroupLens Lab at University of MinnesotaPeople and Information Research Team (PIReT)6th FAccTRec Workshop: Responsible RecommendationNORMalize: The First Workshop on Normative Design and Evaluation of Recommender SystemsACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)Coursera: 
Recommender Systems SpecializationLensKit: Python Tools for Recommender SystemsChris Anderson - The Long Tail: Why the Future of Business Is Selling Less of MoreFairness in Recommender Systems (in Recommender Systems Handbook)Ekstrand et al. (2022): Fairness in Information Access SystemsKeynote at EvalRS (CIKM 2022): Do You Want To Hunt A Kraken? Mapping and Expanding Recommendation FairnessFriedler et al. (2021): The (Im)possibility of Fairness: Different Value Systems Require Different Mechanisms For Fair Decision MakingSafiya Umoja Noble (2018): Algorithms of Oppression: How Search Engines Reinforce RacismPapers:Ekstrand et al. (2018): Exploring author gender in book rating and recommendationEkstrand et al. (2014): User perception of differences in recommender algorithmsSelbst et al. (2019): Fairness and Abstraction in Sociotechnical SystemsPinney et al. (2023): Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information AccessDiaz et al. (2020): Evaluating Stochastic Rankings with Expected ExposureRaj et al. (2022): Fire Dragon and Unicorn Princess; Gender Stereotypes and Children's Products in Search Engine ResponsesMitchell et al. (2021): Algorithmic Fairness: Choices, Assumptions, and DefinitionsMehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender SystemsRaj et al. (2022): Measuring Fairness in Ranked Results: An Analytical and Empirical ComparisonBeutel et al. (2019): Fairness in Recommendation Ranking through Pairwise ComparisonsBeutel et al. (2017): Data Decisions and Theoretical Implications when Adversarially Learning Fair RepresentationsDwork et al. (2018): Fairness Under CompositionBower et al. (2022): Random Isn't Always Fair: Candidate Set Imbalance and Exposure Inequality in Recommender SystemsZehlike et al. 
(2022): Fairness in Ranking: A SurveyHoffmann (2019): Where fairness fails: data, algorithms, and the limits of antidiscrimination discourseSweeney (2013): Discrimination in Online Ad Delivery: Google ads, black names and white names, racial discrimination, and click advertisingWang et al. (2021): User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided MarketsGeneral Links:Follow me on Twitter: https://twitter.com/MarcelKurovskiSend me your comments, questions and suggestions to marcel@recsperts.comPodcast Website: https://www.recsperts.com/
In episode 15 of Recsperts, we delve into podcast recommendations with senior data scientist, Mirza Klimenta. Mirza discusses his work on the ARD Audiothek, the audio-on-demand platform of the German public broadcaster ARD, where he is part of pub. Public Value Technologies, a subsidiary of the two regional public broadcasters BR and SWR.We explore the use and potency of simple algorithms and ways to mitigate popularity bias in data and recommendations. We also cover collaborative filtering and various approaches for content-based podcast recommendations, drawing on Mirza's expertise in multidimensional scaling for graph drawings. Additionally, Mirza sheds light on the responsibility of a public broadcaster in providing diversified content recommendations.Towards the end of the episode, Mirza shares personal insights on his side project of becoming a novelist. Tune in for an informative and engaging conversation.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.(00:00) - Episode Overview (01:43) - Introduction Mirza Klimenta (08:06) - About ARD Audiothek (21:16) - Recommenders for the ARD Audiothek (30:03) - User Engagement and Feedback Signals (46:05) - Optimization beyond Accuracy (51:39) - Next RecSys Steps for the Audiothek (57:16) - Underserved User Groups (01:04:16) - Cold-Start Mitigation (01:05:06) - Diversity in Recommendations (01:07:50) - Further Challenges in RecSys (01:10:03) - Being a Novelist (01:16:07) - Closing Remarks Links from the Episode:Mirza Klimenta on LinkedInARD Audiothekpub. Public Value TechnologiesImplicit: Fast Collaborative Filtering for Implicit DatasetsFairness in Recommender Systems: How to Reduce the Popularity BiasPapers:Steck (2019): Embarrassingly Shallow Autoencoders for Sparse DataHu et al. (2008): Collaborative Filtering for Implicit Feedback DatasetsCer et al.
(2018): Universal Sentence EncoderGeneral Links:Follow me on Twitter: https://twitter.com/MarcelKurovskiSend me your comments, questions and suggestions to marcel@recsperts.comPodcast Website: https://www.recsperts.com/
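The notes above reference Steck's EASE model ("Embarrassingly Shallow Autoencoders for Sparse Data"), one of the simple-yet-strong algorithms discussed in the episode. As a rough illustration (not code from the episode; the toy interaction matrix is made up), its closed-form solution can be sketched in a few lines of NumPy:

```python
import numpy as np

def ease(X, lam=0.5):
    """Closed-form EASE (Steck, 2019): item-item weight matrix B with zero diagonal."""
    G = X.T @ X + lam * np.eye(X.shape[1])  # regularized item Gram matrix
    P = np.linalg.inv(G)
    B = -P / np.diag(P)                     # B[i, j] = -P[i, j] / P[j, j]
    np.fill_diagonal(B, 0.0)                # forbid trivial self-recommendation
    return B

# Toy binary interaction matrix: 4 users x 3 items (hypothetical data)
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
B = ease(X)
scores = X @ B  # predicted item scores per user; rank these for recommendations
```

The appeal, as discussed in the episode about simple algorithms, is that training is a single linear solve rather than an iterative optimization.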
In episode number 14 of Recsperts we talk to Daniel Svonava, CEO and Co-Founder of Superlinked, delivering user modeling infrastructure. In his former role he was a senior software engineer and tech lead at YouTube working on ad performance prediction and pricing.We discuss the crucial role of user modeling for recommendations and discovery. Daniel presents two examples from YouTube’s ad performance forecasting to demonstrate the bandwidth of use cases for user modeling. We also discuss sources of information that fuel user models and additional personalization tasks that benefit from it, like user onboarding. We learn that the tight combination of user modeling with (near) real-time updates is key to a sound personalized user experience.Daniel also shares with us how Superlinked provides personalization as a service beyond ecommerce-centricity. Offering personalized recommendations of items and people across various industries and use cases is what sets Superlinked apart. In the end, we also touch on the major general challenge of the RecSys community which is rebranding in order to establish a more positive image of the field.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.Chapters:(03:35) - Introduction Daniel Svonava (10:18) - Introduction to User Modeling (17:52) - User Modeling for YouTube Ads (35:43) - Real-Time Personalization (57:29) - ML Tooling for User Modeling and Real-Time Personalization (01:07:41) - Superlinked as a User Modeling Infrastructure (01:31:22) - Rebranding RecSys as Major Challenge (01:37:40) - Final Remarks Links from the Episode:Daniel Svonava on LinkedInDaniel Svonava on TwitterSuperlinked - User Modeling InfrastructureThe 2023 MAD (Machine Learning, Artificial Intelligence, Data Science) LandscapeEric Ries: The Lean StartupRob Fitzpatrick: The Mom TestPapers:Liu et al.
(2022): Monolith: Real Time Recommendation System With Collisionless Embedding TableRSPapers CollectionGeneral Links:Follow me on Twitter: https://twitter.com/MarcelKurovskiSend me your comments, questions and suggestions to marcel@recsperts.comPodcast Website: https://www.recsperts.com/
This episode of Recsperts features Justin Basilico who is director of research and engineering at Netflix. Justin leads the team that is in charge of creating a personalized homepage. We learn more about the evolution of the Netflix recommender system from rating prediction to using deep learning, contextual multi-armed bandits and reinforcement learning to perform personalized page construction. Deep content understanding drives the creation of useful groupings of videos to be shown in a personalized homepage.Justin and I discuss the misalignment of metrics as just one out of many elements that is making personalization still “super hard”. We hear more about the journey of deep learning for recommender systems where real usefulness comes from taking advantage of the variety of data besides pure user-item interactions, i.e. histories, content, and context. We also briefly touch on RecSysOps for detecting, predicting, diagnosing and resolving issues in large-scale recommender systems and how it helps to alleviate item cold-start.At the end of this episode, we talk about the company culture at Netflix. Key elements are freedom and responsibility as well as providing context instead of exerting control.
We hear that being really comfortable with feedback is important for high-performance people and teams.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.Chapters:(03:13) - Introduction Justin Basilico (07:37) - Evolution of the Netflix Recommender System (22:28) - Page Construction of the Personalized Netflix Homepage (32:12) - Misalignment of Metrics (37:36) - Experience with Deep Learning for Recommender Systems (48:10) - RecSysOps for Issue Detection, Diagnosis and Response (55:38) - Bandits Recommender Systems (01:03:22) - The Netflix Culture (01:13:33) - Further Challenges (01:15:48) - RecSys 2023 Industry Track (01:17:25) - Closing Remarks Links from the Episode:Justin Basilico on LinkedInJustin Basilico on TwitterNetflix Research PublicationsThe Netflix Tech BlogCONSEQUENCES+REVEAL Workshop at RecSys 2022Learning a Personalized Homepage (Alvino et al., 2015)Recent Trends in Personalization at Netflix (Basilico, 2021)RecSysOps: Best Practices for Operating a Large-Scale Recommender System (Saberian et al., 2022)Netflix Fourth Quarter 2022 Earnings InterviewNo Rules Rules - Netflix and the Culture of Reinvention (Hastings et al., 2020)Job Posting for Netflix' Recommendation TeamPapers:Steck et al. (2021): Deep Learning for Recommender Systems: A Netflix Case StudySteck et al. (2021): Negative Interactions for Improved Collaborative Filtering: Don't go Deeper, go HigherMore et al. (2019): Recap: Designing a more Efficient Estimator for Off-policy Evaluation in Bandits with Large Action SpacesBhattacharya et al. (2022): Augmenting Netflix Search with In-Session Adapted RecommendationsGeneral Links:Follow me on Twitter: https://twitter.com/MarcelKurovskiSend me your comments, questions and suggestions to marcel@recsperts.comPodcast Website: https://www.recsperts.com/
In this episode of Recsperts we talk to Rishabh Mehrotra, the Director of Machine Learning at ShareChat, about users and creators in multi-stakeholder recommender systems. We learn more about users' intents and needs, which brings us to the important matter of user satisfaction (and dissatisfaction). To draw conclusions about user satisfaction we have to interpret real-time user interaction data conditioned on user intents. We learn that relevance does not imply satisfaction as well as that diversity and discovery are two very different concepts.Rishabh takes us even further on his industry research journey where we also touch on relevance, fairness and satisfaction and how to balance them towards a fair marketplace. He introduces us to the creator economy of ShareChat. We discuss the post lifecycle of items as well as the right mixture of content and behavioral signals for generating recommendations that strike a balance between revenue and retention.In the end, we conclude our interview with the benefits of end-to-end ownership and accountability in industrial RecSys work and how it makes people independent and effective. We receive some advice on how to grow and thrive in a tough job market.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.Chapters:(03:44) - Introduction Rishabh Mehrotra (19:09) - Ubiquity of Recommender Systems (23:32) - Moving from UCL to Spotify Research (33:17) - Moving from Research to Engineering (36:33) - Recommendations in a Marketplace (46:24) - Discovery vs. Diversity and Specialists vs. Generalists (55:24) - User Intent, Satisfaction and Relevant Recommendations (01:09:48) - Estimation of Satisfaction vs.
Dissatisfaction (01:19:10) - RecSys Challenges at ShareChat (01:27:58) - Post Lifecycle and Mixing Content with Behavioral Signals (01:39:28) - Detect Fatigue and Contextual MABs for Ad Placement (01:47:24) - Unblock Yourself and Upskill (02:00:59) - RecSys Challenge 2023 by ShareChat (02:02:36) - Farewell Remarks Links from the Episode:Rishabh Mehrotra on LinkedInRishabh Mehrotra on TwitterRishabh's WebsitePapers:Mehrotra et al. (2017): Auditing Search Engines for Differential Satisfaction Across DemographicsMehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender SystemsMehrotra et al. (2019): Jointly Leveraging Intent and Interaction Signals to Predict User Satisfaction with Slate RecommendationsAnderson et al. (2020): Algorithmic Effects on the Diversity of Consumption on SpotifyMehrotra et al. (2020): Bandit based Optimization of Multiple Objectives on a Music Streaming PlatformHansen et al. (2021): Shifting Consumption towards Diverse Content on Music Streaming PlatformsMehrotra (2021): Algorithmic Balancing of Familiarity, Similarity & Discovery in Music RecommendationsJeunen et al. (2022): Disentangling Causal Effects from Sets of Interventions in the Presence of Unobserved ConfoundersGeneral Links:Follow me on Twitter: https://twitter.com/LivesInAnalogiaSend me your comments, questions and suggestions to marcel@recsperts.comPodcast Website: https://www.recsperts.com/
In this episode of Recsperts we talk to Flavian Vasile about the work of his team at Criteo AI Lab on personalized advertising. We learn about the different stakeholders like advertisers, publishers, and users and the role of recommender systems in this marketplace environment. We learn more about the pros and cons of click versus conversion optimization and transition to econ(omic) reco(mmendations), a new approach to model the effect of a recommender system on the user's decision-making process. Economic theory plays an important role for this conceptual shift towards better recommender systems.In addition, we discuss generative recommenders as an approach to directly translate a user’s preference model into a textual and/or visual product recommendation. This can be used to spark product innovation and to potentially generate what users really want. Besides that, it also allows providing recommendations from the existing item corpus.In the end, we catch up on additional real-world challenges like two-tower models and diversity in recommendations.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.Chapters:(02:37) - Introduction Flavian Vasile (06:46) - Personalized Advertising at Criteo (18:29) - Moving from Click to Conversion optimization (23:04) - Econ(omic) Reco(mmendations) (41:56) - Generative Recommender Systems (01:04:03) - Additional Real-World Challenges in RecSys (01:08:00) - Final Remarks Links from the Episode:Flavian Vasile on LinkedInFlavian Vasile on TwitterModern Recommendation for Advanced Practitioners - Part I (2019)Modern Recommendation for Advanced Practitioners - Part II (2019)CONSEQUENCES+REVEAL Workshop at RecSys 2022: Causality, Counterfactuals, Sequential Decision-Making & Reinforcement Learning for Recommender SystemsPapers:Heymann et al. (2022): Welfare-Optimized Recommender SystemsSamaran et al. (2021): What Users Want?
WARHOL: A Generative Model for RecommendationBonner et al. (2018): Causal Embeddings for RecommendationVasile et al. (2016): Meta-Prod2Vec: Product Embeddings Using Side-Information for RecommendationGeneral Links:Follow me on Twitter: https://twitter.com/LivesInAnalogiaSend me your comments, questions and suggestions to marcel@recsperts.comPodcast Website: https://www.recsperts.com/
In episode number ten of Recsperts I welcome David Graus who is the Data Science Chapter Lead at Randstad Groep Nederland, a global leader in providing Human Resource services. We talk about the role of recommender systems in the HR domain which includes vacancy recommendations for candidates, but also generating talent recommendations for recruiters at Randstad. We also learn which biases might have an influence when using recommenders for decision support in the recruiting process as well as how Randstad mitigates them.In this episode we learn more about another domain where recommender systems can serve humans through effective decision support: Human Resources. Here, everything is about job recommendations, matching candidates with vacancies, but also exploiting knowledge about career paths to propose learning opportunities and assist with career development. David Graus leads those efforts at Randstad and has previously worked in the news recommendation domain after obtaining his PhD from the University of Amsterdam.We discuss the most recent contribution by Randstad on mitigating bias in candidate recommender systems by introducing fairness-oriented post- and preprocessing to a recommendation pipeline. We learn that one can maintain user satisfaction while improving fairness at the same time (with demographic parity measuring gender balance in this case).David and I also touch on his engagement in co-organizing the RecSys in HR workshops since RecSys 2021.Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.Links from the Episode:David Graus on LinkedInDavid Graus on TwitterDavid's WebsiteRecSys in HR 2022: Workshop on Recommender Systems for Human ResourcesRandstad Annual Report 2021Talk by David Graus at Anti-Discrimination Hackathon on "Algorithmic matching, bias, and bias mitigation"Papers:Arafan et al. (2022): End-to-End Bias Mitigation in Candidate Recommender Systems with Fairness GatesGeyik et al.
(2019): Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent SearchGeneral Links:Follow me on Twitter: https://twitter.com/LivesInAnalogiaSend me your comments, questions and suggestions to marcel@recsperts.comPodcast Website: https://www.recsperts.com/ (02:23) - Introduction David Graus (13:55) - About Randstad and the Staffing Industry (17:09) - Use Cases for RecSys Application in HR (22:04) - Talent and Vacancy Recommender System (33:46) - RecSys in HR Workshop (38:48) - Fairness for RecSys in HR (52:40) - Other HR RecSys Challenges (56:40) - Further RecSys Challenges
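The episode above mentions demographic parity as the gender-balance measure used when evaluating candidate recommendations. Purely as an illustrative sketch (function, candidate ids, and group labels are hypothetical, not Randstad's actual pipeline), a top-k demographic parity gap could be computed like this:

```python
from collections import Counter

def demographic_parity_gap(ranked_candidates, groups, k=10):
    """Largest deviation of any group's top-k count from an equal share, as a fraction of k.

    ranked_candidates: candidate ids ordered by score (best first).
    groups: dict mapping candidate id -> group label (e.g. self-reported gender).
    """
    top_k = ranked_candidates[:k]
    counts = Counter(groups[c] for c in top_k)
    labels = set(groups.values())
    target = len(top_k) / len(labels)  # equal representation per group
    return max(abs(counts.get(g, 0) - target) for g in labels) / len(top_k)

# Hypothetical toy ranking: 6 candidates, two groups
ranking = ["c1", "c2", "c3", "c4", "c5", "c6"]
groups = {"c1": "F", "c2": "M", "c3": "M", "c4": "M", "c5": "F", "c6": "F"}
gap = demographic_parity_gap(ranking, groups, k=4)  # top-4 holds 1 F and 3 M -> gap 0.25
```

A gap of 0 means the top-k mirrors equal representation; post-processing fairness gates of the kind discussed in the episode aim to push this gap down without hurting relevance.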