Assessing Viewpoint Diversity in Search Results Using Ranking Fairness Metrics

Paper presentation at the BIAS workshop at ECML-PKDD 2020.

  1. Assessing Viewpoint Diversity in Search Results Using Ranking Fairness Metrics
     Tim Draws¹, Nava Tintarev¹, Ujwal Gadiraju¹, Alessandro Bozzon¹, and Benjamin Timmermans²
     ¹TU Delft, The Netherlands; ²IBM, The Netherlands
     t.a.draws@tudelft.nl
  2. Biases in web search
     • Position bias [2-4]
     • “Search Engine Manipulation Effect” [1,5]
     How can we quantify (a lack of) viewpoint diversity in search results?
     [Slide illustration: a result list dominated by one viewpoint (“Yes!” five times) over the other (“No!” twice)]
  3. Ranking fairness metrics
     Statistical parity in fair ranking: the protected attribute should not influence the ranking [6].
     Construction of the metrics: evaluate statistical parity for the top i, discount by rank, normalize (see the first sketch after the slide list).
  4. Our paper
     RQ: Can ranking fairness metrics be used to assess viewpoint diversity in search results?
     Contributions: (1) an evaluation of existing metrics, (2) a novel metric.
  5. Representing viewpoints: binomial viewpoint fairness
     Example topic: “Should we all be vegan?”, with documents on a seven-point scale from strongly opposing, opposing, somewhat opposing, neutral, somewhat supporting, supporting, to strongly supporting.
     Binomial viewpoint fairness collapses this scale into two groups: one side of the scale (e.g., the opposing viewpoints) is treated as protected, the rest as non-protected.
  6. Representing viewpoints: multinomial viewpoint fairness
     Same topic and seven-point scale as above.
     Multinomial viewpoint fairness treats each viewpoint category on the scale as its own protected group (see the second sketch after the slide list).
  7. Metrics we consider
     Binomial viewpoint fairness:
     – Normalized Discounted Difference (nDD) [6]
     – Normalized Discounted Ratio (nDR) [6]
     – Normalized Discounted Kullback-Leibler Divergence (nDKL) [6]
     Multinomial viewpoint fairness:
     – Normalized Discounted Jensen-Shannon Divergence (nDJS)
     (An illustrative nDJS computation is sketched after the slide list.)
  8. Simulation studies
     How do the metrics behave for different levels of viewpoint diversity?
     • Three synthetic data sets: S1, S2, S3
     • Per data set, we created rankings to simulate different levels of viewpoint diversity
  9. Weighted sampling procedure
     Example ranking sampled from S1:
       Rank  Viewpoint
       1     Strongly opposing
       2     Strongly opposing
       3     Opposing
       4     Somewhat opposing
       5     Supporting
       6     Strongly opposing
       …     …
     Per data set, we created rankings with different levels of ranking bias:
     • Binomial viewpoint fairness: all opposing viewpoints get weight w1, all others w2
     • Multinomial viewpoint fairness: one randomly chosen viewpoint gets weight w1, all others w2
     (A sketch of this sampling procedure appears after the slide list.)
  10. Results: binomial viewpoint fairness
      [Figure: mean metric value of nDD, nDR, and nDKL as a function of ranking bias, for distributions S1, S2, S3]
      • All metrics assess binomial viewpoint fairness (as expected)
      • All metrics are asymmetric (the proportion of protected items and the “direction” of bias matter)
      • Which metric to use depends on the strength of the ranking bias
  11. Results: multinomial viewpoint fairness
      [Figure: mean nDJS value as a function of ranking bias, for distributions S1, S2, S3]
      • nDJS assesses multinomial viewpoint fairness
      • nDJS is also asymmetric (the proportion of protected items and the “direction” of bias matter)
      • Careful interpretation: nDJS values are not directly comparable to those of the other metrics
  12. Discussion
      • The metrics work for assessing viewpoint diversity
      • Considerations:
        – What is the underlying aim?
        – How balanced is the data overall?
        – How strong is the ranking bias?
        – What is the direction of the ranking bias?
  13. Take-home and future work
      • Ranking fairness metrics can be used to assess viewpoint diversity in search results (when interpreted correctly)
      • Future work can use these metrics to…
        – …assess viewpoint diversity in real search results
        – …align different metric and behavioral outcomes
  14. References
      [1] R. Epstein and R. E. Robertson. The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences of the United States of America, 112(33):E4512–E4521, 2015.
      [2] A. Ghose, P. G. Ipeirotis, and B. Li. Examining the impact of ranking on consumer behavior and search engine revenue. Management Science, 60(7):1632–1654, 2014.
      [3] L. A. Granka, T. Joachims, and G. Gay. Eye-tracking analysis of user behavior in WWW search. Proceedings of the Twenty-Seventh Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 478–479, 2004.
      [4] B. Pan, H. Hembrooke, T. Joachims, L. Lorigo, G. Gay, and L. Granka. In Google we trust: Users’ decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 12(3):801–823, 2007.
      [5] F. A. Pogacar, A. Ghenai, M. D. Smucker, and C. L. Clarke. The positive and negative influence of search results on people’s decisions about the efficacy of medical treatments. Proceedings of the 2017 ACM SIGIR International Conference on the Theory of Information Retrieval (ICTIR 2017), pages 209–216, 2017.
      [6] K. Yang and J. Stoyanovich. Measuring fairness in ranked outputs. Proceedings of the 29th International Conference on Scientific and Statistical Database Management, pages 1–6, 2017.
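
The construction on slide 3 (evaluate statistical parity for the top i, discount by rank, normalize) can be illustrated with a minimal sketch in the style of nDKL. The exact cut-offs, the KL base, the 1/log2(i+1) discount, and the worst-case normalizer below are assumptions modelled on the fair-ranking metrics in [6], not the paper's precise definition.

```python
import numpy as np


def kl_divergence(p, q, eps=1e-12):
    """KL divergence (base 2) between two binary distributions given as P(protected)."""
    p = np.clip(np.array([p, 1.0 - p]), eps, 1.0)
    q = np.clip(np.array([q, 1.0 - q]), eps, 1.0)
    return float(np.sum(p * np.log2(p / q)))


def ndkl(is_protected, cutoffs=None):
    """Sketch of a normalized, rank-discounted statistical-parity metric (nDKL-style).

    `is_protected` is a list of booleans, one per ranked item (top result first).
    At each cut-off i, compare the share of protected items in the top i with their
    share in the whole ranking, discount by 1 / log2(i + 1), and normalize by the
    value of a maximally unfair reordering of the same items.
    """
    is_protected = np.asarray(is_protected, dtype=float)
    n = len(is_protected)
    overall = is_protected.mean()
    if cutoffs is None:
        cutoffs = range(1, n + 1)  # illustrative: evaluate at every rank

    def discounted_sum(ranking):
        total = 0.0
        for i in cutoffs:
            top_share = ranking[:i].mean()
            total += kl_divergence(top_share, overall) / np.log2(i + 1)
        return total

    raw = discounted_sum(is_protected)
    # Normalizer: whichever extreme ordering (protected items all first or all last)
    # yields the larger discounted divergence.
    worst = max(discounted_sum(np.sort(is_protected)[::-1]),
                discounted_sum(np.sort(is_protected)))
    return raw / worst if worst > 0 else 0.0


# Example: opposing (protected) documents stacked at the top vs. interleaved.
print(ndkl([True, True, True, False, False, False]))  # 1.0: maximally biased prefix
print(ndkl([True, False, True, False, True, False]))  # lower value: more balanced
```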
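
Slides 5 and 6 map the seven-point viewpoint scale onto groups in two ways. The sketch below assumes that the opposing side forms the protected group in the binomial case and that neutral counts as non-protected; both choices are illustrative assumptions, since the slides only show the two grouping schemes.

```python
# Seven-point viewpoint scale from slides 5 and 6.
VIEWPOINTS = [
    "strongly opposing", "opposing", "somewhat opposing", "neutral",
    "somewhat supporting", "supporting", "strongly supporting",
]


def binomial_group(viewpoint):
    """Binomial viewpoint fairness: collapse the scale into a protected group
    (here, illustratively, the opposing viewpoints) and a non-protected rest."""
    return "protected" if "opposing" in viewpoint else "non-protected"


def multinomial_group(viewpoint):
    """Multinomial viewpoint fairness: each viewpoint category is its own group."""
    return viewpoint


ranking = ["strongly opposing", "supporting", "neutral", "opposing"]
print([binomial_group(v) for v in ranking])
# ['protected', 'non-protected', 'non-protected', 'protected']
```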
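
The nDJS metric on slide 7 extends the same scheme to multinomial viewpoint distributions. The sketch below compares, at each cut-off, the viewpoint distribution of the top i results with the distribution over the whole ranking using Jensen-Shannon divergence; the cut-offs, discount, and the normalizer (a blocked reordering taken as the maximally unfair arrangement) are assumptions for illustration rather than the paper's exact definition.

```python
import numpy as np


def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    q = np.clip(np.asarray(q, dtype=float), eps, 1.0)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def ndjs(ranking, categories):
    """Sketch of a normalized, rank-discounted Jensen-Shannon metric (nDJS-style).

    `ranking` is a list of viewpoint labels (top result first). At each cut-off i,
    compare the viewpoint distribution in the top i with the distribution over the
    whole ranking, discount by 1 / log2(i + 1), and normalize by the value of a
    blocked reordering of the same items.
    """
    def distribution(items):
        counts = np.array([items.count(c) for c in categories], dtype=float)
        return counts / counts.sum()

    overall = distribution(list(ranking))

    def discounted_sum(order):
        return sum(
            js_divergence(distribution(order[:i]), overall) / np.log2(i + 1)
            for i in range(1, len(order) + 1)
        )

    raw = discounted_sum(list(ranking))
    # Normalizer: items grouped into contiguous viewpoint blocks, assumed here to
    # be a maximally unfair arrangement of the same items.
    worst = discounted_sum(sorted(ranking, key=categories.index))
    return raw / worst if worst > 0 else 0.0


cats = ["strongly opposing", "opposing", "somewhat opposing", "neutral",
        "somewhat supporting", "supporting", "strongly supporting"]
stacked = ["strongly opposing"] * 3 + ["neutral", "supporting", "strongly supporting"]
mixed = ["strongly opposing", "neutral", "supporting", "strongly opposing",
         "strongly supporting", "strongly opposing"]
print(ndjs(stacked, cats), ndjs(mixed, cats))  # the stacked ranking scores higher
```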
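
The weighted sampling procedure on slide 9 can be sketched as repeated draws without replacement in which favoured documents receive weight w1 and all others w2. The function name, the use of Python's random.choices, and the exact definition of "opposing" are illustrative assumptions; the slide only specifies the w1/w2 weighting for the binomial and multinomial cases.

```python
import random

VIEWPOINTS = ["strongly opposing", "opposing", "somewhat opposing", "neutral",
              "somewhat supporting", "supporting", "strongly supporting"]


def sample_biased_ranking(documents, w1, w2, mode="binomial", boosted=None, seed=None):
    """Sketch of the weighted sampling idea from slide 9.

    Draw documents without replacement; favoured documents get weight w1,
    all others w2 (w1 > w2 pushes favoured documents towards the top).
    - mode="binomial": all opposing viewpoints are favoured.
    - mode="multinomial": one (given or randomly chosen) viewpoint is favoured.
    `documents` is a list of viewpoint labels.
    """
    rng = random.Random(seed)
    if mode == "multinomial" and boosted is None:
        boosted = rng.choice(VIEWPOINTS)

    def weight(viewpoint):
        if mode == "binomial":
            favoured = "opposing" in viewpoint
        else:
            favoured = viewpoint == boosted
        return w1 if favoured else w2

    remaining = list(documents)
    ranking = []
    while remaining:
        weights = [weight(v) for v in remaining]
        pick = rng.choices(range(len(remaining)), weights=weights, k=1)[0]
        ranking.append(remaining.pop(pick))
    return ranking


# Example: a balanced document pool ranked with a strong bias towards opposing viewpoints.
pool = VIEWPOINTS * 3
print(sample_biased_ranking(pool, w1=10.0, w2=1.0, mode="binomial", seed=0))
```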
