The document provides an overview of the TREC 2016 Open Search track, which focused on academic search. It discusses the track organization, methodology using a living labs approach, three academic search engine sites used (CiteSeerX, SSOAR, and Microsoft Academic Search), results from rounds 1 and 2, and key issues and contributions. The track aimed to enable meaningful IR research using real users and data, allowing participants to experiment by replacing components in a live search system and evaluating performance.
Overview of the TREC 2016 Open Search Track Academic Search Edition
1. Overview of the TREC 2016 Open Search track
Academic search edition
Krisztian Balog, University of Stavanger, @krisztianbalog
Anne Schuth, Blendle, @anneschuth
25th Text REtrieval Conference (TREC 2016) | Gaithersburg, 2016
4. WHAT IS OPEN SEARCH?
Open Search is a new evaluation paradigm for IR. The experimentation platform is an existing search engine. Researchers have the opportunity to replace components of this search engine and evaluate these components using interactions with real, "unsuspecting" users of this search engine.
5. WHY OPEN SEARCH?
• Because it opens up the possibility for people outside search organizations to do meaningful IR research
• Meaningful includes
  • Real users of an actual search system
  • Access to the same data
6. RESEARCH QUESTIONS
• How does online evaluation compare to offline, Cranfield-style, evaluation?
  • Would systems be ranked differently?
  • How stable are such system rankings?
• How much interaction volume is required to reach reliable conclusions about system behavior?
  • How many queries are needed?
  • How many query impressions are needed?
  • To what degree does it matter how query impressions are distributed over queries?
7. RESEARCH QUESTIONS (2)
• Should systems be trained or optimized differently when the objective is online performance?
• What are questions that cannot be answered about a specific task (e.g., scientific literature search) using offline evaluation?
• How much risk do search engines that serve as an experimental platform take?
  • How can this risk be controlled while still being able to experiment?
9. KEY IDEAS
• An API orchestrates all the data exchange between sites (live search engines) and participants
• Focus on frequent (head) queries
  • Enough traffic on them for experimentation
• Participants generate rankings offline and upload these to the API (see the sketch after this slide)
  • Eliminates the real-time requirement
  • Freedom in choice of tools and environment
K. Balog, L. Kelly, and A. Schuth. Head First: Living Labs for Ad-hoc Search Evaluation. CIKM '14
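To make the offline-upload workflow concrete, here is a minimal sketch of what a participant's client might look like. The base URL, endpoint paths, API key, and response format below are illustrative assumptions, not the official track API specification.

```python
import requests

# Illustrative assumptions: base URL, endpoint names, key, and JSON layout
# are placeholders, not the official TREC Open Search API specification.
API_BASE = "http://example-opensearch-api.org/api"
API_KEY = "MY-PARTICIPANT-KEY"


def fetch_queries():
    """Download the (head) queries assigned to this participant."""
    resp = requests.get(f"{API_BASE}/participant/query/{API_KEY}")
    resp.raise_for_status()
    return resp.json()["queries"]  # assumed response format


def upload_run(qid, ranked_docids):
    """Upload a ranking, generated offline, for one query.

    The API stores it and serves it whenever a real user fires that query
    on the live site."""
    payload = {
        "qid": qid,
        "runid": "my-experimental-system",
        "doclist": [{"docid": d} for d in ranked_docids],
    }
    resp = requests.put(f"{API_BASE}/participant/run/{API_KEY}/{qid}", json=payload)
    resp.raise_for_status()


if __name__ == "__main__":
    for query in fetch_queries():
        # In practice the ranking comes from the participant's own retrieval system.
        ranking = ["doc-1", "doc-2", "doc-3"]
        upload_run(query["qid"], ranking)
```

Because rankings are prepared and uploaded ahead of time, participants are free to use any tools or environment; nothing has to respond in real time when the user's query arrives.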
13. METHODOLOGY (3)
experimental
system
API
• When any of the test queries is fired on the live
site, it requests an experimental ranking from the
API and interleaves it with that of the produc>on
system
query
interleaved
ranking
query
experimental
ranking
14. INTERLEAVING
• System A: doc 1, doc 2, doc 3, doc 4, doc 5
• System B: doc 2, doc 4, doc 7, doc 1, doc 3
• Interleaved list: doc 1, doc 2, doc 4, doc 3, doc 7
• Inference: A > B
• Experimental ranking is interleaved with the production ranking
• Needs 1-2 orders of magnitude less data than A/B testing (also, it is a within-subject as opposed to a between-subject design)
15. INTERLEAVING
• System A: doc 1, doc 2, doc 3, doc 4, doc 5
• System B: doc 1, doc 2, doc 3, doc 7, doc 4
• Interleaved list: doc 1, doc 2, doc 3, doc 4, doc 7
• Inference: tie
• Team Draft Interleaving (a minimal sketch follows below)
• No preferences are inferred from the common prefix of A and B
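The two slides above can be illustrated with a short Team Draft Interleaving sketch. This is a minimal illustration under the common-prefix rule stated above, not the sites' production interleaving code; the helper names are made up for this example.

```python
import random


def team_draft_interleave(ranking_a, ranking_b, length=10):
    """Team Draft Interleaving (minimal sketch).

    Returns the interleaved list and a per-position team label.
    Documents in the common prefix of A and B get no label, so clicks
    on them carry no preference (the rule on slide 15)."""
    interleaved, teams = [], []
    prefix = 0
    while (prefix < min(len(ranking_a), len(ranking_b))
           and ranking_a[prefix] == ranking_b[prefix]
           and len(interleaved) < length):
        interleaved.append(ranking_a[prefix])
        teams.append(None)
        prefix += 1

    picks = {"A": 0, "B": 0}
    rankings = {"A": ranking_a, "B": ranking_b}
    while len(interleaved) < length:
        # The team with fewer picks contributes next; ties broken by coin flip.
        if picks["A"] != picks["B"]:
            team = "A" if picks["A"] < picks["B"] else "B"
        else:
            team = random.choice(["A", "B"])
        candidate = next((d for d in rankings[team] if d not in interleaved), None)
        if candidate is None:
            # Chosen ranking is exhausted; take from the other one instead.
            team = "B" if team == "A" else "A"
            candidate = next((d for d in rankings[team] if d not in interleaved), None)
            if candidate is None:
                break
        interleaved.append(candidate)
        teams.append(team)
        picks[team] += 1
    return interleaved, teams


def infer_preference(teams, clicked_positions):
    """Credit each click to the team that contributed the clicked document."""
    credit = {"A": 0, "B": 0}
    for pos in clicked_positions:
        if teams[pos] is not None:
            credit[teams[pos]] += 1
    if credit["A"] > credit["B"]:
        return "A>B"
    if credit["B"] > credit["A"]:
        return "B>A"
    return "tie"


# Slide 15 example: clicks that fall on the shared prefix yield a tie.
a = ["doc 1", "doc 2", "doc 3", "doc 4", "doc 5"]
b = ["doc 1", "doc 2", "doc 3", "doc 7", "doc 4"]
ilist, teams = team_draft_interleave(a, b, length=5)
print(infer_preference(teams, clicked_positions=[0, 1]))  # -> "tie"
```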
16. METHODOLOGY (4)
• Participants get detailed feedback on user interactions (clicks)
[Diagram: clicks from users of the live site flow back through the API to the experimental system]
17. METHODOLOGY (5)
• Evaluation measure: Outcome = #Wins / (#Wins + #Losses), as computed in the sketch below
  • where the number of "wins" and "losses" is against the production system, aggregated over a period of time
  • An Outcome of > 0.5 means beating the production system
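As a worked illustration of the outcome measure, here is a small sketch. The per-impression win/loss/tie labels are an assumed input format, standing in for the interleaving comparisons aggregated by the API.

```python
def outcome(impressions):
    """Outcome = #Wins / (#Wins + #Losses); ties are ignored.

    `impressions` is an iterable of "win" / "loss" / "tie" labels for the
    experimental system against the production system (assumed format)."""
    wins = sum(1 for label in impressions if label == "win")
    losses = sum(1 for label in impressions if label == "loss")
    if wins + losses == 0:
        return None  # No decided impressions yet.
    return wins / (wins + losses)


# Example: 30 wins, 20 losses, 10 ties -> 0.6, i.e. beating production (> 0.5).
print(outcome(["win"] * 30 + ["loss"] * 20 + ["tie"] * 10))
```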
18. WHAT IS IN IT FOR PARTICIPANTS?
• Access to privileged (search and click-through) data
• Opportunity to test IR systems with real, unsuspecting users in a live setting
  • Not the same as crowdsourcing!
• Continuous evaluation is possible, not limited to a yearly evaluation cycle
19. KNOWN ISSUES
• Head queries only
  • Considerable portion of traffic, but only popular info needs
• Lack of context
  • No knowledge of the searcher's location, previous searches, etc.
• No real-time feedback
  • API provides detailed feedback, but it is not immediate
• Limited control
  • Experimentation is limited to single searches, where results are interleaved with those of the production system; no control over the entire result list
• Ultimate measure of success
  • Search is only a means to an end, it is not the ultimate goal
Come to the planning session tomorrow!
22. ACADEMIC SEARCH
• Interesting domain
  • Need semantic matching to overcome vocabulary mismatch
  • Different entity types (papers, authors, orgs, conferences, etc.)
  • Beyond document ranking: ranking entities, recommending related literature, etc.
• This year
  • Single task: ad hoc scientific literature search
  • Three academic search engines
23. TRACK ORGANIZATION
• Multiple evaluation rounds
  • Round #1: Jun 1 - Jul 15
  • Round #2: Aug 1 - Sep 15
  • Round #3: Oct 1 - Nov 15 (official TREC round)
• Train/test queries
  • For train queries, feedback is available on individual impressions
  • For test queries, only aggregated feedback is available (and only after the end of each evaluation period)
• Single submission per team
26. CITESEERX
• Main focus is on Computer and Information Science
• http://citeseerx.ist.psu.edu/
• Queries
  • 107 test + 100 training for Rounds #1 and #2
  • 700 additional test queries for Round #3
• Documents
  • Title
  • Full document text (extracted from PDF)
29. SSOAR
• Social Science Open Access Repository
• http://www.ssoar.info/
• Queries
  • 74 test + 57 training for Rounds #1 and #2
  • 988 additional test queries for Round #3
• Documents
  • Title, abstract, author(s), various metadata fields (subject, type, year, etc.)
32. MICROSOFT ACADEMIC SEARCH
• Research service developed by MSR
• http://academic.research.microsoft.com/
• Queries
  • 480 test queries
• Documents
  • Title, abstract, URL
  • Entity ID in the Microsoft Academic Search Knowledge Graph
33. MICROSOFT ACADEMIC SEARCH EVALUATION METHODOLOGY
• Offline evaluation, performed by Microsoft
• Head queries (139)
  • Binary relevance, inferred from historical click data
  • Traditional rank-based evaluation (MAP), as in the sketch below
• Tail queries (235)
  • Side-by-side evaluation against a baseline production system
  • Top 10 results decorated with Bing captions
  • Relative ranking of systems w.r.t. the baseline
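For the head-query evaluation, MAP over binary relevance can be computed as in the following sketch. The run and qrels data structures are assumptions for illustration; they are not the format Microsoft actually used internally.

```python
def average_precision(ranked_docids, relevant):
    """Average precision for one query with binary relevance labels."""
    hits, precision_sum = 0, 0.0
    for rank, docid in enumerate(ranked_docids, start=1):
        if docid in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0


def mean_average_precision(runs, qrels):
    """MAP over all queries.

    `runs` maps qid -> ranked list of docids; `qrels` maps qid -> set of
    relevant docids (assumed formats)."""
    aps = [average_precision(runs[qid], qrels.get(qid, set())) for qid in runs]
    return sum(aps) / len(aps) if aps else 0.0
```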
34. MICROSOFT ACADEMIC SEARCH RESULTS
• Head queries (click-based evaluation), MAP:
  • UDEL-IRL: 0.60
  • BJUT: 0.56
  • webis: 0.52*
  (* Significantly different from UDEL-IRL and BJUT)
• Tail queries (side-by-side evaluation), relative ranking:
  • #1 webis
  • #2 UDEL-IRL
  • #3 BJUT
35. SUMMARY
• Ad hoc scientific literature search
• 3 academic search engines, 10 participants
• TREC OS 2017
  • Academic search domain
  • Additional sites
  • One more subtask (recommending literature; ranking people, conferences, etc.)
  • Multiple runs per team
• Consider a second use case
  • Product search, contextual advertising, news recommendation, ...
36. CONTRIBUTORS
• API development and maintenance
  • Peter Dekker
• CiteSeerX
  • Po-Yu Chuang, Jian Wu, C. Lee Giles
• SSOAR
  • Narges Tavakolpoursaleh, Philipp Schaer
• MS Academic Search
  • Kuansan Wang, Tobias Hassmann, Artem Churkin, Ioana Varsandan, Roland Dittel