
Recsperts - Recommender Systems Experts

Author: Marcel Kurovski


Description

Recommender systems are the most challenging, powerful, and ubiquitous area of machine learning and artificial intelligence. This podcast hosts the experts in recommender systems research and application. From understanding what users really want to driving large-scale content discovery, from delivering personalized online experiences to catering to multi-stakeholder goals - guests from industry and academia share how they tackle these and many more challenges. With Recsperts coming from universities all around the globe and from industries like streaming, ecommerce, news, or social media, this podcast provides depth and insights. We go far beyond your RecSys 101 and the shallowness of yet another matrix factorization based rating prediction blogpost! The motto is: be relevant or become irrelevant!
Expect a brand-new interview each month and follow Recsperts on your favorite podcast player.
26 Episodes
#25: RecSys 2024 Special

2024-10-12 · 39:39

In episode 25, we talk about the upcoming ACM Conference on Recommender Systems 2024 (RecSys) and welcome a former guest to geek out about the conference.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

Chapters:
(00:00) - Introduction
(01:56) - Overview RecSys 2024
(07:01) - Contribution Stats
(09:37) - Interview

Links from the Episode:
RecSys 2024 Conference Website

Papers:
RecSys '24: Proceedings of the 18th ACM Conference on Recommender Systems

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
In episode 24 of Recsperts, I sit down with Amey Dharwadker, Machine Learning Engineering Manager at Facebook, to dive into the complexities of large-scale video recommendations. Amey, who leads the Video Recommendations Quality Ranking team at Facebook, sheds light on the intricate challenges of delivering personalized video feeds at scale. Our conversation covers content understanding, user interaction data, real-time signals, exploration, and evaluation techniques.

We kick off the episode by reflecting on the inaugural VideoRecSys workshop at RecSys 2023, setting the stage for a deeper discussion on Facebook's approach to video recommendations. Amey walks us through the critical challenges they face, such as gathering reliable user feedback signals to avoid pitfalls like watchbait. With a vast and ever-growing corpus of billions of videos, millions of which are added each month, the cold start problem looms large. We explore how content understanding, user feedback aggregation, and exploration techniques help address this issue. Amey explains how engagement metrics like watch time, comments, and reactions are used to rank content, ensuring users receive meaningful and diverse video feeds.

A key highlight of the conversation is the importance of real-time personalization in fast-paced environments, such as short-form video platforms, where user preferences change quickly. Amey also emphasizes the value of cross-domain data in enriching user profiles and improving recommendations.

Towards the end, Amey shares his insights on leadership in machine learning teams, pointing out the characteristics of a great ML team.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

Chapters:
(00:00) - Introduction
(02:32) - About Amey Dharwadker
(08:39) - Video Recommendation Use Cases on Facebook
(16:18) - Recommendation Teams and Collaboration
(25:04) - Challenges of Video Recommendations
(31:07) - Video Content Understanding and Metadata
(33:18) - Multi-Stage RecSys and Models
(42:42) - Goals and Objectives
(49:04) - User Behavior Signals
(59:38) - Evaluation
(01:06:33) - Cross-Domain User Representation
(01:08:49) - Leadership and What Makes a Great Recommendation Team
(01:13:01) - Closing Remarks

Links from the Episode:
Amey Dharwadker on LinkedIn
Amey's Website
RecSys Challenge 2021
VideoRecSys Workshop 2023
VideoRecSys + LargeRecSys 2024

Papers:
Mahajan et al. (2023): CAViaR: Context Aware Video Recommendations
Mahajan et al. (2023): PIE: Personalized Interest Exploration for Large-Scale Recommender Systems
Raul et al. (2023): CAM2: Conformity-Aware Multi-Task Ranking Model for Large-Scale Recommender Systems
Zhai et al. (2024): Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations
Saket et al. (2023): Formulating Video Watch Success Signals for Recommendations on Short Video Platforms
Wang et al. (2022): Surrogate for Long-Term User Experience in Recommender Systems
Su et al. (2024): Long-Term Value of Exploration: Measurements, Findings and Algorithms

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
In episode 23 of Recsperts, we welcome Yashar Deldjoo, Assistant Professor at the Polytechnic University of Bari, Italy. Yashar's research on recommender systems includes multimodal approaches and multimedia recommender systems as well as trustworthiness and adversarial robustness, areas in which he has published extensively. We discuss the evolution of generative models for recommender systems, modeling paradigms, scenarios as well as their evaluation, risks, and harms.

We begin our interview with a reflection on Yashar's areas of recommender systems research so far. Starting with multimedia recsys, particularly video recommendations, Yashar covers his work on adversarial robustness and trustworthiness, leading to the main topic of this episode: generative models for recommender systems. We learn about their potential for improving beyond the (partially saturated) state of traditional recommender systems: improving effectiveness and efficiency for top-n recommendations, introducing interactivity beyond classical conversational recsys, and providing personalized zero- or few-shot recommendations.

We learn about the modeling paradigms as well as the scenarios for generative models, which mainly differ by input and modeling approach: ID-based, text-based, and multimodal generative models. This is how we navigate the large field of acronyms leading us from VAEs and GANs to LLMs.

Towards the end of the episode, we also touch on the evaluation, opportunities, risks, and harms of generative models for recommender systems. Yashar also provides us with an ample amount of references and upcoming events where people get the chance to learn more about GenRecSys.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

Chapters:
(00:00) - Introduction
(03:58) - About Yashar Deldjoo
(09:34) - Motivation for RecSys
(13:05) - Intro to Generative Models for Recommender Systems
(44:27) - Modeling Paradigms for Generative Models
(51:33) - Scenario 1: Interaction-Driven Recommendation
(57:59) - Scenario 2: Text-based Recommendation
(01:10:39) - Scenario 3: Multimodal Recommendation
(01:24:59) - Evaluation of Impact and Harm
(01:38:07) - Further Research Challenges
(01:45:03) - References and Research Advice
(01:49:39) - Closing Remarks

Links from the Episode:
Yashar Deldjoo on LinkedIn
Yashar's Website
KDD 2024 Tutorial: Modern Recommender Systems Leveraging Generative AI: Fundamentals, Challenges and Opportunities
RecSys 2024 Workshop: The 1st Workshop on Risks, Opportunities, and Evaluation of Generative Models in Recommender Systems (ROEGEN@RECSYS'24)

Papers:
Deldjoo et al. (2024): A Review of Modern Recommender Systems Using Generative Models (Gen-RecSys)
Deldjoo et al. (2020): Recommender Systems Leveraging Multimedia Content
Deldjoo et al. (2021): A Survey on Adversarial Recommender Systems: From Attack/Defense Strategies to Generative Adversarial Networks
Deldjoo et al. (2020): How Dataset Characteristics Affect the Robustness of Collaborative Recommendation Models
Liang et al. (2018): Variational Autoencoders for Collaborative Filtering
He et al. (2016): Visual Bayesian Personalized Ranking from Implicit Feedback

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
In episode 22 of Recsperts, we welcome Prabhat Agarwal, Senior ML Engineer, and Aayush Mudgal, Staff ML Engineer, both from Pinterest, to the show. Prabhat works on recommendations and search systems at Pinterest, leading representation learning efforts. Aayush is responsible for ads ranking and privacy-aware conversion modeling. We discuss user and content modeling, short- vs. long-term objectives, evaluation as well as multi-task learning, and touch on counterfactual evaluation as well.

In our interview, Prabhat guides us through the journey of continuous improvements to Pinterest's Homefeed personalization, starting with techniques such as gradient boosting, moving over two-tower models to DCN and transformers. We discuss how to capture users' short- and long-term preferences through multiple embeddings and the role of candidate generators for content diversification. Prabhat shares some details about position debiasing and the challenges of facilitating exploration.

With Aayush we get the chance to dive into the specifics of ads ranking at Pinterest, and he helps us better understand how multifaceted ads can be. We learn more about the pain of having too many models and Pinterest's efforts to consolidate the model landscape to improve infrastructural costs, maintainability, and efficiency. Aayush also shares some insights about exploration and corresponding randomization in the context of ads, and how user behavior differs greatly between different kinds of ads.

Both guests highlight the role of counterfactual evaluation and its impact on faster experimentation. Towards the end of the episode, we also touch a bit on learnings from last year's RecSys challenge.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

Chapters:
(00:00) - Introduction
(03:51) - Guest Introductions
(09:57) - Pinterest Introduction
(21:57) - Homefeed Personalization
(47:27) - Ads Ranking
(01:14:58) - RecSys Challenge 2023
(01:20:26) - Closing Remarks

Links from the Episode:
Prabhat Agarwal on LinkedIn
Aayush Mudgal on LinkedIn
RecSys Challenge 2023
Pinterest Engineering Blog
Pinterest Labs
Prabhat's Talk at GTC 2022: Evolution of web-scale engagement modeling at Pinterest
Blogpost: How we use AutoML, Multi-task learning and Multi-tower models for Pinterest Ads
Blogpost: Pinterest Home Feed Unified Lightweight Scoring: A Two-tower Approach
Blogpost: Experiment without the wait: Speeding up the iteration cycle with Offline Replay Experimentation
Blogpost: MLEnv: Standardizing ML at Pinterest Under One ML Engine to Accelerate Innovation
Blogpost: Handling Online-Offline Discrepancy in Pinterest Ads Ranking System

Papers:
Eksombatchai et al. (2018): Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in Real-Time
Ying et al. (2018): Graph Convolutional Neural Networks for Web-Scale Recommender Systems
Pal et al. (2020): PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest
Pancha et al. (2022): PinnerFormer: Sequence Modeling for User Representation at Pinterest
Zhao et al. (2019): Recommending what video to watch next: a multitask ranking system

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
In episode 21 of Recsperts, we welcome Martijn Willemsen, Associate Professor at the Jheronimus Academy of Data Science and Eindhoven University of Technology. Martijn's research on interactive recommender systems includes aspects of decision psychology and user-centric evaluation. We discuss how users gain control over recommendations, how to support their goals and needs, and how the user-centric evaluation framework fits into all of this.

In our interview, Martijn outlines the reasons for giving users control over recommendations and how to holistically evaluate the satisfaction and usefulness of recommendations for users' goals and needs. We discuss the psychology of decision making with respect to how well (or not) recommender systems support it. We also dive into music recommender systems and discuss how nudging users to explore new genres can work, as well as how longitudinal studies in recommender systems research can advance insights.

Towards the end of the episode, Martijn and I also discuss some examples and the usefulness of enabling users to provide negative explicit feedback to the system.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

Chapters:
(00:00) - Introduction
(03:03) - About Martijn Willemsen
(15:14) - Waves of User-Centric Evaluation in RecSys
(19:35) - Behaviorism is not Enough
(46:21) - User-Centric Evaluation Framework
(01:05:38) - Genre Exploration and Longitudinal Studies in Music RecSys
(01:20:59) - User Control and Negative Explicit Feedback
(01:31:50) - Closing Remarks

Links from the Episode:
Martijn Willemsen on LinkedIn
Martijn Willemsen's Website
User-centric Evaluation Framework
Behaviorism is not Enough (Talk at RecSys 2016)
Neil Hunt: Quantifying the Value of Better Recommendations (Keynote at RecSys 2014)
What recommender systems can learn from decision psychology about preference elicitation and behavioral change (Talk at Boise State (Idaho) and Grouplens at University of Minnesota)
Eric J. Johnson: The Elements of Choice
Rasch Model
Spotify Web API

Papers:
Ekstrand et al. (2016): Behaviorism is not Enough: Better Recommendations Through Listening to Users
Knijnenburg et al. (2012): Explaining the user experience of recommender systems
Ekstrand et al. (2014): User perception of differences in recommender algorithms
Liang et al. (2022): Exploring the longitudinal effects of nudging on users' music genre exploration behavior and listening preferences
McNee et al. (2006): Being accurate is not enough: how accuracy metrics have hurt recommender systems

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
In episode 20 of Recsperts, we welcome Bram van den Akker, Senior Machine Learning Scientist at Booking.com. Bram's work focuses on bandit algorithms and counterfactual learning. He was one of the creators of the Practical Bandits tutorial at the World Wide Web conference. We talk about the role of bandit feedback in decision making systems, and specifically for recommendations in the travel industry.

In our interview, Bram elaborates on bandit feedback and how it is used in practice. We discuss off-policy and on-policy bandits, and we learn that counterfactual evaluation is well-suited for selecting the best model candidates for downstream A/B testing, but is not a replacement for it. We hear more about the practical challenges of bandit feedback, for example the difference between model scores and propensities, the role of stochasticity, or the nitty-gritty details of reward signals. Bram also shares with us the challenges of recommendations in the travel domain, where he points out the sparsity of signals and the feedback delay.

At the end of the episode, we can both agree on a good example of a clickbait-heavy news service on our phones.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

Chapters:
(00:00) - Introduction
(02:58) - About Bram van den Akker
(09:16) - Motivation for Practical Bandits Tutorial
(16:53) - Specifics and Challenges of Travel Recommendations
(26:19) - Role of Bandit Feedback in Practice
(49:13) - Motivation for Bandit Feedback
(01:00:54) - Practical Start for Counterfactual Evaluation
(01:06:33) - Role of Business Rules
(01:11:26) - better cut this section coherently
(01:17:48) - Rewards and More
(01:32:45) - Closing Remarks

Links from the Episode:
Bram van den Akker on LinkedIn
Practical Bandits: An Industry Perspective (Website)
Practical Bandits: An Industry Perspective (Recording)
Tutorial at The Web Conference 2020: Unbiased Learning to Rank: Counterfactual and Online Approaches
Tutorial at RecSys 2021: Counterfactual Learning and Evaluation for Recommender Systems: Foundations, Implementations, and Recent Advances
GitHub: Open Bandit Pipeline

Papers:
van den Akker et al. (2023): Practical Bandits: An Industry Perspective
van den Akker et al. (2022): Extending Open Bandit Pipeline to Simulate Industry Challenges
van den Akker et al. (2019): ViTOR: Learning to Rank Webpages Based on Visual Features

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
Recsperts Website
In episode 19 of Recsperts, we welcome Himan Abdollahpouri, who is an Applied Research Scientist for Personalization & Machine Learning at Spotify. We discuss the role of popularity bias in recommender systems, which was the topic of Himan's dissertation. We talk about multi-objective and multi-stakeholder recommender systems as well as the challenges of music and podcast streaming personalization at Spotify.

In our interview, Himan walks us through popularity bias as the main cause of unfair recommendations for multiple stakeholders. We discuss the consumer- and provider-side implications and how to evaluate popularity bias. The major problem is not the sheer existence of popularity bias, but its propagation through various collaborative filtering algorithms. We also learn how to counteract it by debiasing the data, the model itself, or its output. We also hear more about the relationship between multi-objective and multi-stakeholder recommender systems.

At the end of the episode, Himan also shares the influence of popularity bias in music and podcast streaming at Spotify, as well as how calibration helps to better cater content to users' preferences.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

Chapters:
(00:00) - Introduction
(04:43) - About Himan Abdollahpouri
(15:23) - What is Popularity Bias and why is it important?
(25:05) - Effect of Popularity Bias in Collaborative Filtering
(30:30) - Individual Sensitivity towards Popularity
(36:25) - Introduction to Bias Mitigation
(53:16) - Content for Bias Mitigation
(56:53) - Evaluating Popularity Bias
(01:05:01) - Popularity Bias in Music and Podcast Streaming
(01:08:04) - Multi-Objective Recommender Systems
(01:16:13) - Multi-Stakeholder Recommender Systems
(01:18:38) - Recommendation Challenges at Spotify
(01:35:16) - Closing Remarks

Links from the Episode:
Himan Abdollahpouri on LinkedIn
Himan Abdollahpouri on X
Himan's Website
Himan's PhD Thesis on "Popularity Bias in Recommendation: A Multi-stakeholder Perspective"
2nd Workshop on Multi-Objective Recommender Systems (MORS @ RecSys 2022)

Papers:
Su et al. (2009): A Survey on Collaborative Filtering Techniques
Mehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender Systems
Abdollahpouri et al. (2021): User-centered Evaluation of Popularity Bias in Recommender Systems
Abdollahpouri et al. (2019): The Unfairness of Popularity Bias in Recommendation
Abdollahpouri et al. (2017): Controlling Popularity Bias in Learning-to-Rank Recommendation
Wasilewski et al. (2016): Incorporating Diversity in a Learning to Rank Recommender System
Oh et al. (2011): Novel Recommendation Based on Personal Popularity Tendency
Steck (2018): Calibrated Recommendations
Abdollahpouri et al. (2023): Calibrated Recommendations as a Minimum-Cost Flow Problem
Seymen et al. (2022): Making smart recommendations for perishable and stockout products

General Links:
Follow me on LinkedIn
Follow me on X
Send me your comments, questions and suggestions to marcel@recsperts.com
Recsperts Website
In episode 18 of Recsperts, we hear from Professor Sole Pera from Delft University of Technology. We discuss the use of recommender systems for non-traditional populations, children in particular. Sole shares the specifics, surprises, and subtleties of her research on recommendations for children.

In our interview, Sole and I discuss use cases and domains which need particular attention with respect to non-traditional populations. Sole outlines some of the major challenges, like the lack of public datasets or the multifaceted criteria for the suitability of recommendations. The highly dynamic needs and abilities of children make proper user modeling a crucial part of the design and development of recommender systems. We also touch on how children interact differently with recommender systems and learn that trust plays a major role here.

Towards the end of the episode, we revisit the different goals and stakeholders involved in recommendations for children, especially the role of parents. We close with an overview of the current research community.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

Chapters:
(00:00) - Introduction
(04:56) - About Sole Pera
(06:37) - Non-traditional Populations
(09:13) - Dedicated User Modeling
(25:01) - Main Application Domains
(40:16) - Lack of Data about non-traditional Populations
(47:53) - Data for Learning User Profiles
(57:09) - Interaction between Children and Recommendations
(01:00:26) - Goals and Stakeholders
(01:11:35) - Role of Parents and Trust
(01:17:59) - Evaluation
(01:26:59) - Research Community
(01:32:37) - Closing Remarks

Links from the Episode:
Sole Pera on LinkedIn
Sole's Website
Children and Recommenders
KidRec 2022
People and Information Retrieval Team (PIReT)

Papers:
Beyhan et al. (2023): Covering Covers: Characterization Of Visual Elements Regarding Sleeves
Murgia et al. (2019): The Seven Layers of Complexity of Recommender Systems for Children in Educational Contexts
Pera et al. (2019): With a Little Help from My Friends: Use of Recommendations at School
Charisi et al. (2022): Artificial Intelligence and the Rights of the Child: Towards an Integrated Agenda for Research and Policy
Gómez et al. (2021): Evaluating recommender systems with and for children: towards a multi-perspective framework
Ng et al. (2018): Recommending social-interactive games for adults with autism spectrum disorders (ASD)

General Links:
Follow me on LinkedIn
Follow me on Twitter
Send me your comments, questions and suggestions to marcel@recsperts.com
Recsperts Website
In episode 17 of Recsperts, we meet Miguel Fierro, who is a Principal Data Science Manager at Microsoft and holds a PhD in robotics. We talk about the Microsoft recommenders repository with over 15k stars on GitHub and discuss the impact of LLMs on RecSys. Miguel also shares his view of the T-shaped data scientist.

In our interview, Miguel shares how he transitioned from robotics into personalization as well as how the Microsoft recommenders repository started. We learn more about its three key components: examples, library, and tests. With more than 900 tests and more than 30 different algorithms, this library demonstrates a huge effort of open-source contribution and maintenance. We hear more about the principles that made this effort possible and successful. Miguel also shares the reasoning behind evidence-based design, which puts the users of microsoft-recommenders and their expectations first. We also discuss the impact that recent LLM-related innovations have on RecSys.

At the end of the episode, Miguel explains the T-shaped data professional as advice for staying competitive and building a champion data team. We conclude with some remarks regarding the adoption and ethical challenges recommender systems pose, which need further attention.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

Chapters:
(00:00) - Episode Overview
(03:34) - Introduction Miguel Fierro
(16:19) - Microsoft Recommenders Repository
(30:04) - Structure of MS Recommenders
(34:16) - Contributors to MS Recommenders
(37:10) - Scalability of MS Recommenders
(39:32) - Impact of LLMs on RecSys
(48:26) - T-shaped Data Professionals
(53:29) - Further RecSys Challenges
(59:28) - Closing Remarks

Links from the Episode:
Miguel Fierro on LinkedIn
Miguel Fierro on Twitter
Miguel's Website
Microsoft Recommenders
McKinsey (2013): How retailers can keep up with consumers
Fortune (2012): Amazon's recommendation secret
RecSys 2021 Keynote by Max Welling: Graph Neural Networks for Knowledge Representation and Recommendation

Papers:
Geng et al. (2022): Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)

General Links:
Follow me on LinkedIn
Follow me on Twitter
Send me your comments, questions and suggestions to marcel@recsperts.com
Recsperts Website
In episode 16 of Recsperts, we hear from Michael D. Ekstrand, Associate Professor at Boise State University, about fairness in recommender systems. We discuss why fairness matters and provide an overview of the multidimensional fairness-aware RecSys landscape. Furthermore, we talk about tradeoffs and methods, and receive practical advice on how to get started with tackling unfairness.

In our discussion, Michael outlines the difference and similarity between fairness and bias. We discuss the several stages at which biases can enter the system, as well as how bias can indeed support mitigating unfairness. We also cover the perspectives of different stakeholders with respect to fairness. We learn that measuring fairness depends on the specific fairness concern one is interested in, and that solving fairness universally is highly unlikely.

Towards the end of the episode, we take a look at further challenges as well as how and where the upcoming RecSys 2023 provides a forum for those interested in fairness-aware recommender systems.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Chapters:
(00:00) - Episode Overview
(02:57) - Introduction Michael Ekstrand
(17:08) - Motivation for Fairness-Aware Recommender Systems
(25:45) - Overview and Definition of Fairness in RecSys
(46:51) - Distributional and Representational Harm
(53:59) - Relationship between Fairness and Bias
(01:04:43) - Tradeoffs
(01:13:36) - Methods and Metrics for Fairness
(01:28:06) - Practical Advice for Tackling Unfairness
(01:32:24) - Further Challenges
(01:35:24) - RecSys 2023
(01:38:29) - Closing Remarks

Links from the Episode:
Michael Ekstrand on LinkedIn
Michael Ekstrand on Mastodon
Michael's Website
GroupLens Lab at University of Minnesota
People and Information Research Team (PIReT)
6th FAccTRec Workshop: Responsible Recommendation
NORMalize: The First Workshop on Normative Design and Evaluation of Recommender Systems
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
Coursera: Recommender Systems Specialization
LensKit: Python Tools for Recommender Systems
Chris Anderson - The Long Tail: Why the Future of Business Is Selling Less of More
Fairness in Recommender Systems (in Recommender Systems Handbook)
Ekstrand et al. (2022): Fairness in Information Access Systems
Keynote at EvalRS (CIKM 2022): Do You Want To Hunt A Kraken? Mapping and Expanding Recommendation Fairness
Friedler et al. (2021): The (Im)possibility of Fairness: Different Value Systems Require Different Mechanisms For Fair Decision Making
Safiya Umoja Noble (2018): Algorithms of Oppression: How Search Engines Reinforce Racism

Papers:
Ekstrand et al. (2018): Exploring author gender in book rating and recommendation
Ekstrand et al. (2014): User perception of differences in recommender algorithms
Selbst et al. (2019): Fairness and Abstraction in Sociotechnical Systems
Pinney et al. (2023): Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access
Diaz et al. (2020): Evaluating Stochastic Rankings with Expected Exposure
Raj et al. (2022): Fire Dragon and Unicorn Princess; Gender Stereotypes and Children's Products in Search Engine Responses
Mitchell et al. (2021): Algorithmic Fairness: Choices, Assumptions, and Definitions
Mehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender Systems
Raj et al. (2022): Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison
Beutel et al. (2019): Fairness in Recommendation Ranking through Pairwise Comparisons
Beutel et al. (2017): Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
Dwork et al. (2018): Fairness Under Composition
Bower et al. (2022): Random Isn't Always Fair: Candidate Set Imbalance and Exposure Inequality in Recommender Systems
Zehlike et al. (2022): Fairness in Ranking: A Survey
Hoffmann (2019): Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse
Sweeney (2013): Discrimination in Online Ad Delivery: Google ads, black names and white names, racial discrimination, and click advertising
Wang et al. (2021): User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets

General Links:
Follow me on Twitter: https://twitter.com/MarcelKurovski
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
In episode 15 of Recsperts, we delve into podcast recommendations with senior data scientist Mirza Klimenta. Mirza discusses his work on the ARD Audiothek, a public broadcaster's audio-on-demand platform. He works at pub. Public Value Technologies, a subsidiary of the two regional public broadcasters BR and SWR.

We explore the use and potency of simple algorithms and ways to mitigate popularity bias in data and recommendations. We also cover collaborative filtering and various approaches for content-based podcast recommendations, drawing on Mirza's expertise in multidimensional scaling for graph drawings. Additionally, Mirza sheds light on the responsibility of a public broadcaster in providing diversified content recommendations.

Towards the end of the episode, Mirza shares personal insights on his side project of becoming a novelist. Tune in for an informative and engaging conversation.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Chapters:
(00:00) - Episode Overview
(01:43) - Introduction Mirza Klimenta
(08:06) - About ARD Audiothek
(21:16) - Recommenders for the ARD Audiothek
(30:03) - User Engagement and Feedback Signals
(46:05) - Optimization beyond Accuracy
(51:39) - Next RecSys Steps for the Audiothek
(57:16) - Underserved User Groups
(01:04:16) - Cold-Start Mitigation
(01:05:06) - Diversity in Recommendations
(01:07:50) - Further Challenges in RecSys
(01:10:03) - Being a Novelist
(01:16:07) - Closing Remarks

Links from the Episode:
Mirza Klimenta on LinkedIn
ARD Audiothek
pub. Public Value Technologies
Implicit: Fast Collaborative Filtering for Implicit Datasets
Fairness in Recommender Systems: How to Reduce the Popularity Bias

Papers:
Steck (2019): Embarrassingly Shallow Autoencoders for Sparse Data
Hu et al. (2008): Collaborative Filtering for Implicit Feedback Datasets
Cer et al. (2018): Universal Sentence Encoder

General Links:
Follow me on Twitter: https://twitter.com/MarcelKurovski
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
In episode 14 of Recsperts we talk to Daniel Svonava, CEO and Co-Founder of Superlinked, which delivers user modeling infrastructure. In his former role he was a senior software engineer and tech lead at YouTube, working on ad performance prediction and pricing.

We discuss the crucial role of user modeling for recommendations and discovery. Daniel presents two examples from YouTube's ad performance forecasting to demonstrate the breadth of use cases for user modeling. We also discuss sources of information that fuel user models, and additional personalization tasks that benefit from them, like user onboarding. We learn that the tight combination of user modeling with (near) real-time updates is key to a sound personalized user experience.

Daniel also shares with us how Superlinked provides personalization as a service beyond ecommerce-centricity. Offering personalized recommendations of items and people across various industries and use cases is what sets Superlinked apart. In the end, we also touch on the major general challenge of the RecSys community, which is rebranding the field to establish a more positive image.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Chapters:
(03:35) - Introduction Daniel Svonava
(10:18) - Introduction to User Modeling
(17:52) - User Modeling for YouTube Ads
(35:43) - Real-Time Personalization
(57:29) - ML Tooling for User Modeling and Real-Time Personalization
(01:07:41) - Superlinked as a User Modeling Infrastructure
(01:31:22) - Rebranding RecSys as Major Challenge
(01:37:40) - Final Remarks

Links from the Episode:
Daniel Svonava on LinkedIn
Daniel Svonava on Twitter
Superlinked - User Modeling Infrastructure
The 2023 MAD (Machine Learning, Artificial Intelligence, Data Science) Landscape
Eric Ries: The Lean Startup
Rob Fitzpatrick: The Mom Test

Papers:
Liu et al. (2022): Monolith: Real Time Recommendation System With Collisionless Embedding Table
RSPapers Collection

General Links:
Follow me on Twitter: https://twitter.com/MarcelKurovski
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
This episode of Recsperts features Justin Basilico, Director of Research and Engineering at Netflix. Justin leads the team in charge of creating a personalized homepage. We learn more about the evolution of the Netflix recommender system, from rating prediction to using deep learning, contextual multi-armed bandits, and reinforcement learning to perform personalized page construction. Deep content understanding drives the creation of useful groupings of videos to be shown on a personalized homepage.

Justin and I discuss the misalignment of metrics as just one of many elements that make personalization still "super hard". We hear more about the journey of deep learning for recommender systems, where real usefulness comes from taking advantage of the variety of data beyond pure user-item interactions, i.e. histories, content, and context. We also briefly touch on RecSysOps for detecting, predicting, diagnosing and resolving issues in a large-scale recommender system, and how it helps to alleviate item cold-start.

At the end of this episode, we talk about the company culture at Netflix. Key elements are freedom and responsibility, as well as providing context instead of exerting control. We hear that being really comfortable with feedback is important for high-performing people and teams.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Chapters:
(03:13) - Introduction Justin Basilico
(07:37) - Evolution of the Netflix Recommender System
(22:28) - Page Construction of the Personalized Netflix Homepage
(32:12) - Misalignment of Metrics
(37:36) - Experience with Deep Learning for Recommender Systems
(48:10) - RecSysOps for Issue Detection, Diagnosis and Response
(55:38) - Bandits in Recommender Systems
(01:03:22) - The Netflix Culture
(01:13:33) - Further Challenges
(01:15:48) - RecSys 2023 Industry Track
(01:17:25) - Closing Remarks

Links from the Episode:
Justin Basilico on LinkedIn
Justin Basilico on Twitter
Netflix Research Publications
The Netflix Tech Blog
CONSEQUENCES+REVEAL Workshop at RecSys 2022
Learning a Personalized Homepage (Alvino et al., 2015)
Recent Trends in Personalization at Netflix (Basilico, 2021)
RecSysOps: Best Practices for Operating a Large-Scale Recommender System (Saberian et al., 2022)
Netflix Fourth Quarter 2022 Earnings Interview
No Rules Rules - Netflix and the Culture of Reinvention (Hastings et al., 2020)
Job Posting for Netflix' Recommendation Team

Papers:
Steck et al. (2021): Deep Learning for Recommender Systems: A Netflix Case Study
Steck et al. (2021): Negative Interactions for Improved Collaborative Filtering: Don't go Deeper, go Higher
More et al. (2019): Recap: Designing a more Efficient Estimator for Off-policy Evaluation in Bandits with Large Action Spaces
Bhattacharya et al. (2022): Augmenting Netflix Search with In-Session Adapted Recommendations

General Links:
Follow me on Twitter: https://twitter.com/MarcelKurovski
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
In this episode of Recsperts, we talk to Rishabh Mehrotra, Director of Machine Learning at ShareChat, about users and creators in multi-stakeholder recommender systems. We learn more about users' intents and needs, which brings us to the important matter of user satisfaction (and dissatisfaction). To draw conclusions about user satisfaction, we have to perceive real-time user interaction data conditioned on user intents. We learn that relevance does not imply satisfaction, and that diversity and discovery are two very different concepts.

Rishabh takes us further along his industry research journey, where we also touch on relevance, fairness, and satisfaction, and how to balance them towards a fair marketplace. He introduces us to the creator economy of ShareChat. We discuss the post lifecycle of items, as well as the right mixture of content and behavioral signals for generating recommendations that strike a balance between revenue and retention.

In the end, we conclude our interview with the benefits of end-to-end ownership and accountability in industrial RecSys work, and how they make people independent and effective. We receive some advice on how to grow and thrive in a tough job market.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Chapters:
(03:44) - Introduction Rishabh Mehrotra
(19:09) - Ubiquity of Recommender Systems
(23:32) - Moving from UCL to Spotify Research
(33:17) - Moving from Research to Engineering
(36:33) - Recommendations in a Marketplace
(46:24) - Discovery vs. Diversity and Specialists vs. Generalists
(55:24) - User Intent, Satisfaction and Relevant Recommendations
(01:09:48) - Estimation of Satisfaction vs. Dissatisfaction
(01:19:10) - RecSys Challenges at ShareChat
(01:27:58) - Post Lifecycle and Mixing Content with Behavioral Signals
(01:39:28) - Detect Fatigue and Contextual MABs for Ad Placement
(01:47:24) - Unblock Yourself and Upskill
(02:00:59) - RecSys Challenge 2023 by ShareChat
(02:02:36) - Farewell Remarks

Links from the Episode:
Rishabh Mehrotra on LinkedIn
Rishabh Mehrotra on Twitter
Rishabh's Website

Papers:
Mehrotra et al. (2017): Auditing Search Engines for Differential Satisfaction Across Demographics
Mehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender Systems
Mehrotra et al. (2019): Jointly Leveraging Intent and Interaction Signals to Predict User Satisfaction with Slate Recommendations
Anderson et al. (2020): Algorithmic Effects on the Diversity of Consumption on Spotify
Mehrotra et al. (2020): Bandit based Optimization of Multiple Objectives on a Music Streaming Platform
Hansen et al. (2021): Shifting Consumption towards Diverse Content on Music Streaming Platforms
Mehrotra (2021): Algorithmic Balancing of Familiarity, Similarity & Discovery in Music Recommendations
Jeunen et al. (2022): Disentangling Causal Effects from Sets of Interventions in the Presence of Unobserved Confounders

General Links:
Follow me on Twitter: https://twitter.com/LivesInAnalogia
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
In this episode of Recsperts, we talk to Flavian Vasile about the work of his team at Criteo AI Lab on personalized advertising. We learn about the different stakeholders, like advertisers, publishers, and users, and the role of recommender systems in this marketplace environment. We learn more about the pros and cons of click versus conversion optimization, and transition to econ(omic) reco(mmendations), a new approach to model the effect of a recommender system on the users' decision-making process. Economic theory plays an important role in this conceptual shift towards better recommender systems.

In addition, we discuss generative recommenders as an approach to directly translate a user's preference model into a textual and/or visual product recommendation. This can be used to spark product innovation and to potentially generate what users really want. Besides that, it also allows providing recommendations from the existing item corpus.

In the end, we catch up on additional real-world challenges, like two-tower models and diversity in recommendations.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Chapters:
(02:37) - Introduction Flavian Vasile
(06:46) - Personalized Advertising at Criteo
(18:29) - Moving from Click to Conversion Optimization
(23:04) - Econ(omic) Reco(mmendations)
(41:56) - Generative Recommender Systems
(01:04:03) - Additional Real-World Challenges in RecSys
(01:08:00) - Final Remarks

Links from the Episode:
Flavian Vasile on LinkedIn
Flavian Vasile on Twitter
Modern Recommendation for Advanced Practitioners - Part I (2019)
Modern Recommendation for Advanced Practitioners - Part II (2019)
CONSEQUENCES+REVEAL Workshop at RecSys 2022: Causality, Counterfactuals, Sequential Decision-Making & Reinforcement Learning for Recommender Systems

Papers:
Heymann et al. (2022): Welfare-Optimized Recommender Systems
Samaran et al. (2021): What Users Want? WARHOL: A Generative Model for Recommendation
Bonner et al. (2018): Causal Embeddings for Recommendation
Vasile et al. (2016): Meta-Prod2Vec: Product Embeddings Using Side-Information for Recommendation

General Links:
Follow me on Twitter: https://twitter.com/LivesInAnalogia
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
In episode number ten of Recsperts, I welcome David Graus, Data Science Chapter Lead at Randstad Groep Nederland, a global leader in providing human resource services. We talk about the role of recommender systems in the HR domain, which includes vacancy recommendations for candidates, but also talent recommendations for recruiters at Randstad. We also learn which biases might influence recommenders used for decision support in the recruiting process, and how Randstad mitigates them.

In this episode, we learn more about another domain where recommender systems can serve humans through effective decision support: human resources. Here, everything is about job recommendations and matching candidates with vacancies, but also about exploiting knowledge of career paths to propose learning opportunities and assist with career development. David Graus leads those efforts at Randstad, having previously worked in the news recommendation domain after obtaining his PhD from the University of Amsterdam.

We discuss the most recent contribution by Randstad on mitigating bias in candidate recommender systems by introducing fairness-oriented post- and preprocessing steps in a recommendation pipeline. We learn that one can maintain user satisfaction while improving fairness at the same time (demographic parity, measuring gender balance in this case).

David and I also touch on his engagement in co-organizing the RecSys in HR workshops since RecSys 2021.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Chapters:
(02:23) - Introduction David Graus
(13:55) - About Randstad and the Staffing Industry
(17:09) - Use Cases for RecSys Application in HR
(22:04) - Talent and Vacancy Recommender System
(33:46) - RecSys in HR Workshop
(38:48) - Fairness for RecSys in HR
(52:40) - Other HR RecSys Challenges
(56:40) - Further RecSys Challenges

Links from the Episode:
David Graus on LinkedIn
David Graus on Twitter
David's Website
RecSys in HR 2022: Workshop on Recommender Systems for Human Resources
Randstad Annual Report 2021
Talk by David Graus at Anti-Discrimination Hackathon on "Algorithmic matching, bias, and bias mitigation"

Papers:
Arafan et al. (2022): End-to-End Bias Mitigation in Candidate Recommender Systems with Fairness Gates
Geyik et al. (2019): Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search

General Links:
Follow me on Twitter: https://twitter.com/LivesInAnalogia
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
In episode number nine of Recsperts, we talk with the creators of RecPack, a new Python package for recommender systems. We discuss how Froomle provides modularized personalization for customers in the news and e-commerce sectors. I talk to Lien Michiels and Robin Verachtert, who are both industrial PhD students at the University of Antwerp and work for Froomle. We also hear about their research on filter bubbles and model drift, along with their RecSys 2022 contributions.

In this episode, we introduce RecPack as a new recommender package that is easy to use and extend, and which allows for consistent experimentation. Lien and Robin share how RecPack evolved, its structure, and the problems in research and practice they intend to solve with their open-source contribution.

My guests also share many insights from their work at Froomle, where they focus on modularized personalization with more than 60 recommendation scenarios, and how they integrate these with their customers. We touch on topics like model drift and the need for frequent retraining, as well as the trade-offs between accuracy, cost, and timeliness in production recommender systems.

In the end, we also discuss Lien's critical reception of the term 'filter bubble', an operationalized definition of it, as well as Robin's research on model degradation and training data selection.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Chapters:
(03:23) - Introduction Lien Michiels
(07:01) - Introduction Robin Verachtert
(09:29) - RecPack - Python Recommender Package
(52:31) - Modularized Personalization in News and E-commerce by Froomle
(01:09:54) - Research on Model Drift and Filter Bubbles
(01:18:07) - Closing Questions

Links from the Episode:
Lien Michiels on LinkedIn
Lien Michiels on Twitter
Robin Verachtert on LinkedIn
RecPack on GitLab
RecPack Documentation
FROOMLE
PERSPECTIVES 2022: Perspectives on the Evaluation of Recommender Systems
PERSPECTIVES 2022: Preview on "Towards a Broader Perspective in Recommender Evaluation" by Benedikt Loepp
5th FAccTRec Workshop: Responsible Recommendation

Papers:
Verachtert et al. (2022): Are We Forgetting Something? Correctly Evaluate a Recommender System With an Optimal Training Window
Leysen and Michiels et al. (2022): What Are Filter Bubbles Really? A Review of the Conceptual and Empirical Work
Michiels and Verachtert et al. (2022): RecPack: An(other) Experimentation Toolkit for Top-N Recommendation using Implicit Feedback Data
Dahlgren (2021): A critical review of filter bubbles and a comparison with selective exposure

General Links:
Follow me on Twitter: https://twitter.com/LivesInAnalogia
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
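For readers curious what "consistent experimentation" with such a toolkit involves, here is a minimal sketch of the standard top-N experiment loop on implicit feedback data: a toy interaction matrix, an item-kNN scorer, a leave-one-out split, and Recall@K. This illustrates the workflow only; it is not RecPack's actual API, and the data is made up.

```python
import numpy as np

# Toy user-item implicit-feedback matrix (1 = interaction). Hypothetical data.
X = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 0],
], dtype=float)

def item_knn_scores(train):
    """Score items for each user via cosine item-item similarity."""
    norms = np.linalg.norm(train, axis=0, keepdims=True)
    norms[norms == 0] = 1.0                     # avoid division by zero
    normed = train / norms
    sim = normed.T @ normed                     # item x item similarities
    np.fill_diagonal(sim, 0.0)                  # no self-similarity
    return train @ sim                          # user x item scores

def recall_at_k(scores, train, test, k=2):
    hits, total = 0, 0
    for u in range(train.shape[0]):
        held_out = np.flatnonzero(test[u])
        if held_out.size == 0:
            continue
        s = scores[u].copy()
        s[train[u] > 0] = -np.inf               # never re-recommend seen items
        topk = np.argsort(-s)[:k]
        hits += len(set(topk) & set(held_out))
        total += held_out.size
    return hits / total

# Leave-one-out split: hide one interaction per user as test data.
test = np.zeros_like(X)
train = X.copy()
for u in range(X.shape[0]):
    i = np.flatnonzero(X[u])[-1]
    train[u, i], test[u, i] = 0.0, 1.0

recall = recall_at_k(item_knn_scores(train), train, test, k=2)
print(f"Recall@2 = {recall:.2f}")  # → Recall@2 = 0.75
```

A toolkit like the one discussed in the episode wraps exactly these steps (splitters, algorithms, metrics) behind consistent interfaces, so that experiments remain comparable across papers.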
In episode number eight of Recsperts, we discuss music recommender systems, the meaning of artist fairness, and perspectives on recommender evaluation. I talk to Christine Bauer, an assistant professor at Utrecht University and co-organizer of the PERSPECTIVES workshop. Her research deals with context-aware recommender systems as well as the role of fairness in the music domain. Christine has published work at many conferences, like CHI, CHIIR, ICIS, and WWW.

In this episode, we talk about the specifics of recommenders in the music streaming domain. In particular, we discuss the interests of different stakeholders, like users, the platform, and artists. Christine Bauer presents insights from her research on fairness with respect to the representation of artists and their interests. We talk about gender imbalance and how recommender systems could serve as a tool to counteract existing imbalances instead of reinforcing them, for example with simulations and reranking. In addition, we talk about the lack of multi-method evaluation and how open datasets incline researchers to focus too much on offline evaluation. In contrast, Christine argues for more user studies and online evaluation.

We wrap up with some final remarks on context-aware recommender systems and the potential of sensor data for improving context-aware personalization.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Chapters:
(03:18) - Introducing Christine Bauer
(09:08) - Multi-Stakeholder Interests in Music Recommender Systems
(15:56) - Context-Aware Music Recommendations
(21:55) - Fairness in Music RecSys
(41:22) - Trade-Offs between Fairness and Relevance
(48:18) - Evaluation Perspectives
(01:02:37) - Further RecSys Challenges

Links from the Episode:
Website of Christine Bauer
Christine Bauer on LinkedIn
Christine Bauer on Twitter
PERSPECTIVES 2022: Perspectives on the Evaluation of Recommender Systems
5th FAccTRec Workshop: Responsible Recommendation

Papers:
Ferraro et al. (2021): What is fair? Exploring the artists' perspective on the fairness of music streaming platforms
Ferraro et al. (2021): Break the Loop: Gender Imbalance in Music Recommenders
Jannach et al. (2020): Escaping the McNamara Fallacy: Towards More Impactful Recommender Systems Research
Bauer et al. (2015): Designing a Music-controlled Running Application: a Sports Science and Psychological Perspective
Dey et al. (2000): Towards a Better Understanding of Context and Context-Awareness

General Links:
Follow me on Twitter: https://twitter.com/LivesInAnalogia
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
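To give a concrete flavor of the reranking idea mentioned in the description, here is a minimal greedy re-ranking sketch that promotes tracks by artists from an underrepresented group until they reach a target share of the list. The group labels, target share, and greedy policy are illustrative assumptions for this sketch, not Christine's published method.

```python
def rerank_with_parity(ranked, group, target_share=0.5):
    """Rebuild the ranking greedily: at each position, take the
    highest-ranked remaining item, unless the underrepresented group
    ("F" here) is below its target share so far -- then promote its
    best remaining item instead."""
    minority = [i for i in ranked if group[i] == "F"]
    majority = [i for i in ranked if group[i] == "M"]
    out = []
    while minority or majority:
        n_min = sum(1 for i in out if group[i] == "F")
        need_minority = minority and n_min < target_share * (len(out) + 1)
        if need_minority or not majority:
            out.append(minority.pop(0))
        else:
            out.append(majority.pop(0))
    return out

# Relevance-ordered toy list where female artists sit at the bottom.
ranked = ["m1", "m2", "m3", "f1", "f2", "f3"]
group = {"m1": "M", "m2": "M", "m3": "M", "f1": "F", "f2": "F", "f3": "F"}
print(rerank_with_parity(ranked, group))
# → ['f1', 'm1', 'f2', 'm2', 'f3', 'm3']
```

The trade-off discussed in the episode shows up directly here: the interleaved list improves exposure balance at the cost of moving some highly relevant items down, which is why fairness and relevance have to be weighed against each other.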
In episode number seven, we meet Jacopo Tagliabue and discuss behavioral testing for recommender systems and experiences from ecommerce. Before Jacopo became Director of Artificial Intelligence at Coveo, he had founded Tooso, which was later acquired by Coveo. Jacopo holds a PhD in cognitive intelligence and has made many contributions to conferences like SIGIR, WWW, and RecSys. In addition, he serves as an adjunct professor at NYU.

In this episode, we introduce behavioral testing for recommender systems and the corresponding framework RecList, created by Jacopo and his co-authors. Behavioral testing goes beyond pure retrieval accuracy metrics and tries to uncover unintended behavior of recommender models. RecList is an adaptation of CheckList, proposed by Microsoft some time ago, which applies behavioral testing to NLP. RecList comes as an open-source framework with ready-to-use datasets for different recommender use cases, like similar-item, sequence-based, and complementary-item recommendations. Furthermore, it offers some sample tests to make it easier for newcomers to get started with behavioral testing. We also briefly touch on the upcoming CIKM data challenge, which is going to focus on the evaluation of recommender systems.

At the end of this episode, Jacopo also shares his insights from years of building and using diverse MLOps tools and talks about what he refers to as the "post-modern stack".

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Links from the Episode:
Jacopo Tagliabue on LinkedIn
GitHub: RecList
CIKM RecEval Analyticup 2022 (sign up!)
GitHub: You Don't Need a Bigger Boat - end-to-end (Metaflow-based) implementation of an intent prediction (and session recommendation) flow
Coveo SIGIR eCOM 2021 Data Challenge Dataset
Blog post: The Post-Modern Stack - Joining the modern data stack with the modern ML stack
TensorFlow Recommenders
TorchRec
NVIDIA Merlin
Recommenders (by Microsoft)
RecBole

Papers:
Chia et al. (2022): Beyond NDCG: behavioral testing of recommender systems with RecList
Ribeiro et al. (2020): Beyond Accuracy: Behavioral Testing of NLP models with CheckList
Bianchi et al. (2020): Fantastic Embeddings and How to Align Them: Zero-Shot Inference in a Multi-Shop Scenario

General Links:
Follow me on Twitter: https://twitter.com/LivesInAnalogia
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
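To illustrate what a behavioral test looks like in practice, here is a minimal sketch in the spirit of RecList: instead of only measuring retrieval accuracy, we probe a toy similar-item model for unintended behavior, e.g. recommending an item to itself or pairing a TV with an accessory rather than another TV. The toy catalog, embeddings, and test names are illustrative assumptions, not RecList's actual API.

```python
import numpy as np

# Hypothetical catalog with category labels and toy item embeddings.
CATALOG = {"tv_a": "tv", "tv_b": "tv", "hdmi_cable": "cable", "shoe": "shoe"}
EMB = {
    "tv_a": np.array([1.0, 0.1]),
    "tv_b": np.array([0.9, 0.2]),
    "hdmi_cable": np.array([0.5, 0.8]),
    "shoe": np.array([-1.0, 0.0]),
}

def similar_items(query, k=2):
    """Nearest neighbors by cosine similarity, excluding the query itself."""
    q = EMB[query] / np.linalg.norm(EMB[query])
    scored = sorted(
        (float(EMB[i] @ q / np.linalg.norm(EMB[i])), i)
        for i in EMB if i != query
    )
    return [i for _, i in reversed(scored)][:k]

def test_no_self_recommendation():
    # Behavioral expectation: an item never appears among its own neighbors.
    for item in EMB:
        assert item not in similar_items(item), f"{item} recommends itself"

def test_top_result_shares_category():
    # Behavioral expectation: the top "similar" item for a TV is another TV,
    # not a merely co-purchased accessory like a cable.
    top = similar_items("tv_a", k=1)[0]
    assert CATALOG[top] == CATALOG["tv_a"], f"unexpected top item: {top}"

test_no_self_recommendation()
test_top_result_shares_category()
print("behavioral checks passed")
```

Tests like these can pass or fail independently of NDCG or recall, which is exactly the point: a model with good offline accuracy can still violate such behavioral expectations.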
In episode number six, we welcome Manel Slokom to the show and talk about purpose-aware, privacy-preserving data for recommender systems. Manel is a fourth-year PhD student at Delft University of Technology. For three years in a row she served as a student volunteer at RecSys, before becoming student volunteer co-chair herself in 2021. Besides working on privacy and fairness, she also dedicates herself to simulation and, in particular, synthetic data for recommender systems, co-organizing the 1st SimuRec Workshop as part of RecSys 2021.

This episode is definitely worth a longer run. Manel and I discussed fairness and privacy in recommender systems, and how ratings can leak signals about sensitive personal information. For example, classifiers may exploit ratings to effectively determine a user's gender. She explains "Personalized Blurring", the approach she developed to personalize gender obfuscation in user rating data, as well as how this can contribute to more diverse recommendations.

In our discussion, we also touch on "data-centric AI", a term recently formulated by Andrew Ng, and how adapting feedback data may yield underestimated effects on recommendations that can lead to "data-centric recommender systems". In addition, we dove into the differences between simulated and synthetic data, which brought us to the SimuRec workshop that she co-organized as part of RecSys 2021.

Finally, Manel provides some recommendations for young researchers who want to become active RecSys community members and benefit from exchange: talk to people and volunteer at RecSys.

Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Links from the Episode:
Manel on Twitter
Manel on LinkedIn
Manel at TU Delft (find more papers referenced there)
SimuRec Workshop at RecSys 2021
FAccTRec Workshop at RecSys 2021
Andrew Ng: Unbiggen AI (from IEEE Spectrum)

Papers:
Slokom et al. (2021): Towards user-oriented privacy for recommender system data: A personalization-based approach to gender obfuscation for user profiles
Weinsberg et al. (2012): BlurMe: Inferring and Obfuscating User Gender Based on Ratings
Ekstrand et al. (2018): All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness
Slokom et al. (2018): Comparing recommender systems using synthetic data
Burke et al. (2018): Synthetic Attribute Data for Evaluating Consumer-side Fairness
Burke et al. (2005): Identifying Attack Models for Secure Recommendation
Narayanan et al. (2008): Robust De-anonymization of Large Sparse Datasets

General Links:
Follow me on Twitter: https://twitter.com/LivesInAnalogia
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/
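To make the obfuscation idea tangible, here is a toy sketch in the spirit of BlurMe (Weinsberg et al., 2012): pad a user's rating profile with a few items indicative of the *other* gender, so a classifier reading the ratings can no longer infer gender reliably. The item lists, padding budget, and neutral rating are illustrative assumptions; "Personalized Blurring", as discussed in the episode, additionally tailors the obfuscation to the individual user.

```python
import random

# Items most correlated with each gender in some hypothetical training set.
INDICATIVE = {
    "F": ["item_f1", "item_f2", "item_f3"],
    "M": ["item_m1", "item_m2", "item_m3"],
}

def obfuscate(profile, user_gender, budget=2, seed=0):
    """Return a copy of the rating profile padded with `budget` items
    drawn from the opposite gender's indicative list, rated with a
    neutral default, skipping items the user already rated."""
    rng = random.Random(seed)
    other = "M" if user_gender == "F" else "F"
    candidates = [i for i in INDICATIVE[other] if i not in profile]
    padded = dict(profile)
    for item in rng.sample(candidates, min(budget, len(candidates))):
        padded[item] = 3  # neutral imputed rating
    return padded

profile = {"item_f1": 5, "item_x": 4}
blurred = obfuscate(profile, user_gender="F")
print(len(blurred) - len(profile))  # → 2 (two obfuscating items added)
```

The original ratings stay untouched, which is why (as discussed in the episode) recommendation quality can be largely preserved while the gender signal is blurred.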