
Why New Datasets Are Needed for Deep Learning-Enhanced IR

To encourage innovation in information retrieval, the community has collected several datasets for public benchmarking (summarized in Table 1).

There are many public web datasets for traditional information retrieval tasks, such as Robust04 [3], ClueWeb09 [13], ClueWeb12 [9], GOV2 [12], ClueWeb22 [37] and Common Crawl [2]. Unfortunately, these datasets have, at most, hundreds of labeled queries, far from enough to train a good deep learning-enhanced retrieval model.

Recently, several new datasets have been published for research on deep learning-enhanced retrieval [28, 35, 43]. MS MARCO [35] is one of the most popular datasets for embedding model investigation. It provides 100K questions sampled from Bing's search queries, paired with human-generated answers contextualized within web documents. MS MARCO Ranking v2 [47] expands the document and question sets to 11 million and 1 million, respectively. ORCAS [14] provides 10 million unique queries and 18 million clicked query-document pairs for MS MARCO documents. Natural Questions [28], a sub-million-scale question answering dataset collected from Google's search queries with human-annotated answers in Wikipedia articles, was repurposed for embedding-based retrieval by extracting passages from Wikipedia as candidate answers [27]. CLIR [43], a million-scale cross-language information retrieval dataset collected from Wikipedia, has been used to train cross-lingual embedding models [55]. However, none of these datasets meet the emerging large, real and rich requirements. They focus on English-only question answering tasks. None of them offers the desired web-scale data with highly skewed multilingual queries, which can be short, ambiguous, and often not formulated as natural-language questions. Further, they provide only the raw text of queries and answers, which limits the potential of future cross-modal knowledge transfer research. Finally, they evaluate embedding model quality only with brute-force search, which cannot reflect end-to-end retrieval challenges.
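To make the last point concrete, brute-force evaluation scores every query embedding against every document embedding exhaustively and reads off a ranking metric such as MRR@10. The sketch below is illustrative only, not code from any of the cited datasets; the function name, shapes, and toy data are assumptions.

```python
# Minimal sketch of brute-force dense-retrieval evaluation (illustrative, not from the paper).
import numpy as np

def mrr_at_k(query_vecs, doc_vecs, qrels, k=10):
    """Score every document for every query exhaustively, then compute MRR@k
    against labeled relevant documents (qrels: query index -> set of doc ids)."""
    # Normalize so the dot product equals cosine similarity.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = q @ d.T                            # (num_queries, num_docs) similarity matrix
    topk = np.argsort(-scores, axis=1)[:, :k]   # exhaustive ranking, no ANN index involved
    rr = 0.0
    for qi, ranked in enumerate(topk):
        for rank, doc_id in enumerate(ranked, start=1):
            if doc_id in qrels.get(qi, set()):
                rr += 1.0 / rank
                break
    return rr / len(query_vecs)

# Toy usage: 3 queries, 100 documents, 64-dim embeddings, one relevant doc per query.
rng = np.random.default_rng(0)
queries, docs = rng.normal(size=(3, 64)), rng.normal(size=(100, 64))
qrels = {0: {5}, 1: {17}, 2: {42}}
print(mrr_at_k(queries, docs, qrels, k=10))
```

This protocol isolates embedding quality, but it sidesteps the approximate index, quantization, and latency constraints that a production retrieval system must contend with.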

ANN benchmark [4] and Billion-scale ANN benchmark [1] provide multiple high-dimensional vector datasets to evaluate result accuracy and system performance for embedding-based retrieval algorithms. Unfortunately, they cannot measure embedding model quality and thus cannot reflect end-to-end retrieval performance.
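For illustration, the sketch below shows the kind of accuracy these ANN benchmarks report: recall@k of an approximate index measured against exact brute-force neighbors. It assumes the faiss library purely as an example (any ANN index would do), and random vectors stand in for real embeddings. Note that a perfect recall score here says nothing about whether the underlying embedding model retrieves relevant documents.

```python
# Minimal sketch of an ANN-benchmark-style measurement (assumes faiss; illustrative only).
import numpy as np
import faiss

d, n_docs, n_queries, k = 64, 10000, 100, 10
rng = np.random.default_rng(0)
docs = rng.normal(size=(n_docs, d)).astype("float32")
queries = rng.normal(size=(n_queries, d)).astype("float32")

# Ground truth: exact nearest neighbors via exhaustive (flat) search.
exact = faiss.IndexFlatL2(d)
exact.add(docs)
_, gt = exact.search(queries, k)

# Approximate index (HNSW graph) built over the same vectors.
ann = faiss.IndexHNSWFlat(d, 32)
ann.add(docs)
_, approx = ann.search(queries, k)

# Recall@k: fraction of the true top-k neighbors that the ANN index returns.
recall = np.mean([len(set(gt[i]) & set(approx[i])) / k for i in range(n_queries)])
print(f"recall@{k}: {recall:.3f}")
```

The metric compares the ANN result only to the exact vector-space neighbors, so it captures index accuracy and speed but is blind to whether those neighbors are actually relevant to the query.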

Therefore, a large-scale, information-rich web dataset with realistic document and query distributions that reflects real-world challenges is still lacking.

Authors:

(1) Qi Chen, Microsoft, Beijing, China;

(2) Xiubo Geng, Microsoft, Beijing, China;

(3) Corby Rosset, Microsoft, Redmond, United States;

(4) Carolyn Buractaon, Microsoft, Redmond, United States;

(5) Jingwen Lu, Microsoft, Redmond, United States;

(6) Tao Shen, University of Technology Sydney, Sydney, Australia and the work was done at Microsoft;

(7) Kun Zhou, Microsoft, Beijing, China;

(8) Chenyan Xiong, Carnegie Mellon University, Pittsburgh, United States and the work was done at Microsoft;

(9) Yeyun Gong, Microsoft, Beijing, China;

(10) Paul Bennett, Spotify, New York, United States and the work was done at Microsoft;

(11) Nick Craswell, Microsoft, Redmond, United States;

(12) Xing Xie, Microsoft, Beijing, China;

(13) Fan Yang, Microsoft, Beijing, China;

(14) Bryan Tower, Microsoft, Redmond, United States;

(15) Nikhil Rao, Microsoft, Mountain View, United States;

(16) Anlei Dong, Microsoft, Mountain View, United States;

(17) Wenqi Jiang, ETH Zürich, Zürich, Switzerland;

(18) Zheng Liu, Microsoft, Beijing, China;

(19) Mingqin Li, Microsoft, Redmond, United States;

(20) Chuanjie Liu, Microsoft, Beijing, China;

(21) Zengzhong Li, Microsoft, Redmond, United States;

(22) Rangan Majumder, Microsoft, Redmond, United States;

(23) Jennifer Neville, Microsoft, Redmond, United States;

(24) Andy Oakley, Microsoft, Redmond, United States;

(25) Knut Magne Risvik, Microsoft, Oslo, Norway;

(26) Harsha Vardhan Simhadri, Microsoft, Bengaluru, India;

(27) Manik Varma, Microsoft, Bengaluru, India;

(28) Yujing Wang, Microsoft, Beijing, China;

(29) Linjun Yang, Microsoft, Redmond, United States;

(30) Mao Yang, Microsoft, Beijing, China;

(31) Ce Zhang, ETH Zürich, Zürich, Switzerland and the work was done at Microsoft.
