
Learning from unlabelled data with COVID-19 Open Research Dataset


Objective criteria for text search results and some surprising results

The COVID-19 Open Research Dataset can help researchers and the health community in the fight against a global pandemic. The Vespa team is contributing by releasing a search app based on the dataset. Since the data comes with no reliable labels to judge a good search result from a bad one, we propose an objective criterion for evaluating search results that does not rely on human-annotated labels. We use this criterion to run experiments and evaluate the value delivered by term-matching and semantic signals. We then show that the semantic signals deliver poor results, even when using a fine-tuned version of a model specifically designed for scientific text.


Photo by National Cancer Institute on Unsplash

Released by the Allen Institute for AI, the COVID-19 Open Research Dataset (CORD-19) contains over 44,000 scholarly articles, including over 29,000 with full text, about COVID-19 and the coronavirus family of viruses, for use by the global research community. It was released to mobilize researchers to apply recent advances in natural language processing to generate new insights in support of the fight against this infectious disease. And it did exactly that.

As soon as it was released, there were a Kaggle challenge, a dataset explorer, fine-tuned embedding models, and an effort to collect labelled data.

Given my recent experience with labels containing a strong term-matching bias in the MS MARCO dataset, and given that we at vespa.ai wanted to move fast to build a search app around the CORD-19 dataset, I decided to spend some time thinking about how I could evaluate different matching criteria and ranking functions without labelled data.

Objective criteria for text search

The goal was to have an objective criterion and to move away from the “it looks good enough” standard so commonly used when reliable labels are not available. My proposal is simple: use the title of an article as a query and consider the associated abstract to be the relevant document for that query.


Photo by Marc A on Unsplash

This criterion is simple, scales to massive amounts of data since it does not rely on human annotation, and it makes sense. Think of it this way: if we use the title as a query and a given method cannot retrieve the correct abstract within the top 100 results, we have a very sub-optimal ranking function for the context of a CORD-19 search app.
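As a minimal sketch of how such evaluation pairs can be built, here is one way to read them out of the CORD-19 metadata file. This is illustrative only: it assumes a `metadata.csv` with `title` and `abstract` columns, which may differ across dataset snapshots.

```python
import csv

def build_eval_pairs(metadata_path):
    """Create (query, relevant_document) pairs: each article title is a
    query and its abstract is the single relevant document."""
    pairs = []
    with open(metadata_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            title = (row.get("title") or "").strip()
            abstract = (row.get("abstract") or "").strip()
            # Skip articles missing either field; they cannot be evaluated.
            if title and abstract:
                pairs.append((title, abstract))
    return pairs

pairs = build_eval_pairs("metadata.csv")
print(f"{len(pairs)} title/abstract evaluation pairs")
```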

Results

This section summarizes some of the results obtained. We report three metrics: the percentage of documents matched by the query, the recall at the top 100 positions (recall@100), and the mean reciprocal rank (MRR) computed over the top 100 documents returned.
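To make the metrics concrete, here is a small sketch of how they can be computed for a single query and then averaged across all queries. The function and variable names are illustrative, not taken from the original app:

```python
def per_query_metrics(n_matched, ranked_ids, relevant_id, n_docs, k=100):
    """Metrics for one query with exactly one relevant document.

    n_matched   -- number of documents the query's match phase retrieved
    ranked_ids  -- ids of the top-ranked documents, best first
    relevant_id -- id of the abstract paired with this title
    n_docs      -- total number of documents in the collection
    """
    matched_pct = 100.0 * n_matched / n_docs
    top_k = ranked_ids[:k]
    # With a single relevant document, recall@k is either 0 or 1.
    recall_at_k = 1.0 if relevant_id in top_k else 0.0
    # Reciprocal rank: 1 / position of the relevant doc, 0 if outside top k.
    rr_at_k = 1.0 / (top_k.index(relevant_id) + 1) if relevant_id in top_k else 0.0
    return matched_pct, recall_at_k, rr_at_k
```

Averaging `recall_at_k` and `rr_at_k` over all title queries gives the recall@100 and MRR figures reported below.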

Term-matching

Table 1 shows the results obtained by ranking documents with the term-matching signal BM25 score. The first row shows the result when we only match documents whose abstracts contain every word in the title (the AND operator). This is far too restrictive: it matches only a small fraction of the documents (0.01%) and therefore misses many relevant abstracts, leading to poor recall and MRR metrics (20% and 19%, respectively).
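To illustrate the AND-matching behaviour, here is a small sketch using the third-party `rank_bm25` package and the `pairs` list built earlier. This is not the code the Vespa app runs (Vespa implements BM25 and the AND operator natively); the naive whitespace tokenization is an assumption for the sake of the example:

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Tokenize abstracts naively; a real system would use a proper analyzer.
corpus_tokens = [abstract.lower().split() for _, abstract in pairs]
bm25 = BM25Okapi(corpus_tokens)

def search_with_and(title, k=100):
    """Rank abstracts by BM25 score, keeping only those that contain
    every query term (the AND operator from the first row of Table 1)."""
    query = title.lower().split()
    scores = bm25.get_scores(query)
    matched = [i for i, tokens in enumerate(corpus_tokens)
               if all(term in tokens for term in query)]
    matched.sort(key=lambda i: scores[i], reverse=True)
    return matched[:k], len(matched)
```

Requiring every title term to appear in the abstract is what drives the matched-document percentage down to 0.01%, which in turn caps the achievable recall and MRR.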

