Research Article

Understanding life sciences data curation practices via user research

[version 1; peer review: 1 approved, 1 approved with reservations]
PUBLISHED 11 Sep 2019
This article is included in the ELIXIR gateway.

This article is included in the EMBL-EBI collection.

Abstract

Background: Manual curation is a cornerstone of public biological data resources. However, it is a time-consuming process that urgently needs supportive technical solutions in the face of rapid data growth. Supporting scalable curation is a part of the mission of the Elixir Data Platform. Thus far, we have established infrastructure capable of ingesting and aggregating text-mined outputs from multiple providers and making these available via an API. This public API is used by Europe PMC to display specific entities and relationships on full text articles (via the SciLite application).
Methods: To ensure that the future development of this infrastructure meets the needs of curators, we carried out a user research project to understand and identify common workflow patterns and practices via an observational study. Building on these outcomes, we then devised a curator community survey to more specifically understand which entity types, sections of a paper and tools are of top priority to address.
Results: The main challenges faced by curators included the following: a) There is a need for ways to prioritise and identify relevant papers for curation as the volume of literature is large; b) Finding specific information can prove difficult; quick ways of filtering articles based on specific entities, such as experimental methods, species and other important entities, such as genes, cell lines and tissue samples, are required; and c) Transferring information from the search/annotation tools to the various curation workflows was also challenging.
Conclusions: This study lays the foundation for identifying actionable items to orient the current infrastructure towards meeting the needs of curation community, by improving text-mined annotation quality and coverage and other engineering solutions; and reusing text-mined annotations and other metadata in Europe PMC for article triage. Furthermore, this study presents an opportunity to explore customisation of triage/ranking systems to suit different curation contexts.

Keywords

Database curation, User research, Observational study, Curator survey, Annotation Infrastructure, Europe PMC

Introduction

Biological databases play a key role in knowledge discovery in life science research. A major contributor towards the maintenance of these databases is the process of manual curation. Curation is a high-value task: experts carefully examine the relevant scientific literature, extract the essential information, such as biological functions and relationships between biological entities, and generate the corresponding database records in a structured way. Advances in high-throughput technologies have resulted in tremendous growth of biological data and, consequently, in the number of research papers being published. As a result, the demand for high-quality curation that makes use of these resources has never been higher, but this demand presents challenges for curators in finding and assimilating the scientific conclusions described in the literature.

Text mining, machine learning and analytics promise to provide better ranking of reading lists, classification of articles, and identification of assertions with their biological context and evidence buried within the text of articles. To this end, many life science knowledgebases now include text mining (to varying degrees) in their curation workflows. For example, databases such as neXtProt1,2 and FlyBase3 have integrated text mining algorithms into their respective curation workflows to retrieve a ranked list of relevant articles and tag entities of interest, and tools such as PubTator4 and TextPresso5 have been adopted by some curation communities. Databases that rely mainly on manual curation, such as IntAct6 and DisProt7, are meanwhile exploring possibilities to leverage text mining approaches to select articles for further curation.
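At their simplest, the entity-tagging components of such pipelines can be approximated by dictionary matching. The sketch below is a toy illustration only: the lexicon and entity-type labels are invented for this example, and production systems use curated lexicons, disambiguation and machine-learned models rather than exact string matching.

```python
import re

# Toy lexicon mapping surface forms to entity types (invented for illustration).
LEXICON = {
    "BRCA1": "Gene/Protein",
    "Homo sapiens": "Species",
    "yeast two-hybrid": "Experimental method",
}

def tag_entities(text):
    """Return (surface form, entity type, start offset) for each lexicon hit."""
    hits = []
    for term, etype in LEXICON.items():
        for m in re.finditer(re.escape(term), text):
            hits.append((m.group(0), etype, m.start()))
    # Sort by position in the text so annotations read in document order.
    return sorted(hits, key=lambda h: h[2])

sentence = ("BRCA1 interactions in Homo sapiens were probed "
            "with a yeast two-hybrid screen.")
for surface, etype, start in tag_entities(sentence):
    print(f"{start:3d}  {etype:20s} {surface}")
```

Offsets are returned alongside each hit because annotation infrastructure (such as that behind SciLite) needs character positions to highlight entities in the article text.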

Broadly speaking, the curation community recognises the potential of text mining in article triage and in the identification of entities/concepts for curation. Nevertheless, the text mining pipelines adopted thus far have been engineered to cater to specific domains or projects, and wide uptake is lacking; curators often continue to use manual methods. This mainly stems from the wide variety of very precise information required by curators. The challenge is therefore to produce robust systems that address the immediate and specific needs of curators and also scale across multiple curation groups. To do this, we need to know the immediate challenges faced by curators with respect to the selection and prioritisation of articles to curate. A clear understanding of these requirements will help build new systems, and/or re-orient existing ones, to cater to the needs of the curation community.

In this report we describe the outcomes of a user research project conducted to understand curation practices and priorities for article selection. The project comprised two parts: a) an observational study, to understand how curators select articles to curate and to identify commonalities in curator requirements; and b) a community survey, to identify curators' immediate priorities, such as the entity types and article sections of interest. The aim of this study is to identify specific actions for the Elixir Data Platform, optimising and extending existing systems and infrastructural components. In the subsequent sections we present the main findings from our investigation.

Methods

Observational study

We initially drafted an interview guide (list of questions available as Extended data8) and a preliminary curator persona, reflecting our initial hypotheses about curators and their work practices. We then selected five curation teams. The selection criterion was the type of curation the group was involved in, i.e., extracting biological information from the scientific literature and integrating it into a biological database, as opposed to groups that process raw data submissions (such as sequencing data). Two of these teams were based at EMBL-EBI; the other three were based in Norway, Switzerland and Italy. For the teams situated at EMBL-EBI the interviews were held in person, and for the other teams they were conducted over conference calls. The sessions took place over a two-month period, between March and April 2018. We observed three project leaders and four curators from the selected teams. The participants belonged to teams focused on curating very specific experimental evidence, primarily on proteins, such as protein-protein interactions, the role of a protein in a complex, protein disruption, human protein functions and transcription factor regulation. One team focuses on the annotation of human genes relevant to a particular disease and another on curating publications reporting associations of genetic variants with diseases.

For each session we followed an iterative user research process9,10 (as outlined in Table 1):

Table 1. Overview of user research activities.

The table provides an overview of the observation study on curation practices.

Activity: Stakeholder interviews
Purpose: Learn about curation practices
Participants: 3 project leads and 4 curators from 5 different curation teams
Materials: Interview guide
Outputs: Interview notes
Analysis: Observations, patterns and implications

Activity: Follow-up interviews
Purpose: Clarify certain aspects of curation
Participants: 1 project lead and 1 curator from the same team
Materials: Clarification questions
Outputs: Interview notes
Analysis: Observations, patterns and implications

Activity: Stakeholder workshop
Purpose: Validate learnings from interviews
Participants: 3 project leads and 4 curators from 4 different curation teams
Materials: Guide and notes; draft curator persona; draft curation process and example screenshots
Outputs: HCW themes; curator persona with feedback; curation process with feedback
Analysis: Transcribed HCW themes and feedback; revised curator persona; revised curation process; curation experience map

Activity: Report on curation practice
Purpose: Consolidate and share validated learnings from our research
Participants: Previous participants and another researcher
Materials: Draft report
Outputs: Revised persona and curation experience map
Analysis: Revised materials incorporated into this report
  • The participants were asked to proceed with their daily curation work11 and were observed to see how:

    • they select the entities they wish to curate (either from a spreadsheet or from a partially curated data record),

    • they perform searches (including the query parameters) to retrieve the initial set of publications,

    • they apply criteria to either discard or select an article,

    • they transfer information from the selected article to the respective curation platform.

  • Using the “What? So What? Now What?” method1, we transcribed our notes from these sessions and identified the most important observations, patterns and their implications.

We additionally carried out two follow-up interviews with one project lead and one curator from the same team at EMBL-EBI to clarify particular curation tasks.

Furthermore, we conducted a stakeholder workshop12 with three project leads and four curators from four different curation teams to validate our main learnings from the interview sessions. As some curators took part in both the interviews and the workshop, overall we engaged with 12 curators and team leads from seven different teams. The participants were presented with the preliminary curator persona13 and a workflow outlining the curation process. They were invited to give feedback on these drafts and to express their challenges or pain points as How Can We (HCW) questions2. Their feedback was used to revise the curator persona and the curation process workflow, and was consolidated into the curation experience map.

Community survey

Based on the interview guide used for the observational study, we formulated questions to understand curators' immediate challenges. The survey consisted of 15 questions (see Extended data8) covering, for instance, which sections of an article curators are most interested in; which types of biological entities they look for; and whether it helps to know that a given article has been curated in another database. The survey was conducted online and was developed using Typeform. It was promoted via the mailing lists of various consortia, such as ELIXIR, the International Society for Biocuration (ISB) and the Alliance of Genome Resources. These widely known consortia provide a forum for developers, researchers and curators to streamline and standardise the maintenance of biological resources. The survey was conducted between December 2018 and January 2019.

Ethical issues

We confirm that we have obtained consent to use data from the participants as per the Europe PMC privacy notice, which is formulated in accordance with EMBL's data protection framework. Consent was part of the survey form, and participants could take the survey only after accepting the terms and conditions for data re-use.

Results and discussion

Observational study

Curator persona. We created a persona called Ashley (see Underlying data8) to present the curators’ needs, sentiments, tasks and pain points from their own perspective in more detail to help us empathise with them13:

Ashley curates with precision and attention to detail, while trying to be as efficient as possible. Ashley is looking for very specific information about an experiment that the authors of a paper do not always report in a lot of detail. Ashley appreciates being able to ask a team mate when the “detective work” bears no fruit.

Apart from these "organic, informal discussions", Ashley works independently during "triage", "annotation" and while filling in the curation record in the Editor. During the latter stage Ashley tries to "translate from the author's language to the curator's language", using the appropriate identifiers and Controlled Vocabulary (CV) terms for species, proteins, methods and other important entities, so that the curated evidence is referred to precisely and consistently and the annotations in the curation record are self-explanatory outside the context of the paper.

This is cumbersome as a particular type of evidence is not always referred to in the same way and in enough detail in the literature. Moreover, the Editor is not integrated with the search and annotation tools, so Ashley spends a lot of time going back and forth between the paper and the Editor, translating the curatable text from the paper into CV terms, switching browser tabs and consulting notes from the online research and team discussions.

Curation experience map. The curation experience map in Figure 1 presents the identified pain points in the context of the main curation activities. As shown in the map, curation consists of four stages:


Figure 1. The curation experience map presents the pain points in the context of the main curation activities.

  • a) Deciding which entity (primarily protein in this case) to curate. What to curate often depends on the curator’s background and the project that they are working on.

  • b) “Triaging” the literature to identify relevant publications.

  • c) “Annotating” a relevant publication to identify the precise curatable information in detail, including determining the species and the relevant experimental method.

  • d) Filling in the curation record based on the curatable information in the publication (which is often done in parallel with annotating the paper).

In a typical curation scenario, curators:

  • Search PubMed for a protein and scan the titles in the search results to identify relevant experimental papers during “triage”.

  • If a title indicates that the paper is relevant (e.g. by mentioning the protein and/or species of interest), then they skim read the Methods and the Results of the paper. They are particularly interested in Figures, Tables and their Legends, which is where they usually find the key (curatable) information.

  • These sections are read more thoroughly during the “annotation” stage to identify the exact experimental context that needs to be curated. They may “glance through” the Abstract or skip it altogether.
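The search step of this scenario can be sketched programmatically. The snippet below builds an NCBI E-utilities esearch request URL for the kind of protein-plus-species query curators described; the query terms (TP53, Homo sapiens) are illustrative placeholders, not queries taken from the study.

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint for PubMed.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(protein, species, retmax=20):
    """Build an esearch URL combining a protein name with an organism filter."""
    term = f'{protein} AND "{species}"[Organism]'
    return EUTILS + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"})

url = pubmed_search_url("TP53", "Homo sapiens")
print(url)
# Fetching this URL returns a JSON list of PubMed IDs, whose titles the
# curator then scans during "triage".
```

The curator's manual title scan corresponds to filtering the returned ID list; the pain points below concern exactly this step, where most hits are false positives.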

A pain point during “triage” is identifying relevant publications: The curators reported that most publications returned in a PubMed search are usually not relevant (i.e. they are false positives). Additionally, because they are often looking for very specific and at times underreported experimental evidence, some searches return very few papers or no papers at all.

Ambiguities in the paper about species, proteins and relevant experimental methods may slow down curation significantly during "triage" and "annotation". The curators highlighted identifying the species as their main pain point, as this task may take up to "75% of the curation effort" and, in the end, the species may turn out to be irretrievable from the paper.

Furthermore, to get clarity on the details of an experiment, curators would look up specific references in the paper and do further research online. If this “detective work” is not successful, the curators will either not annotate the paper or will provide fewer annotations. The curators would also discuss unresolved questions such as the annotation of unusual data or what to do when there is no curatable data with their teammates during “organic, informal discussions”. As a last resort they would contact the authors directly; however, authors often do not respond to requests for clarification.

If there are no matching CV terms to annotate the paper, new ones are requested. This can delay annotation because the ontology staff are often different from those curating the papers, and requesting new terms sometimes results in prolonged discussions between curators and ontologists.

It was observed that curators use different tools at each stage, and these are not integrated with each other. During "triage" they would search PubMed for relevant publications and then look at a particular paper on the publisher's site. Annotating a publication may involve downloading or printing the PDF version of the paper and highlighting curatable text. To fill in the curation record they use a bespoke tool which they call "the Editor", which presents a template to be filled in with molecule names and experimental context, supported by standardised identifiers, controlled vocabularies and ontologies, as well as free text describing the experimental evidence. Most of the time, the assertions that go into the database are not phrased in the paper in the same words.

Summary of the pain points in curation workflows. The identified pain points were formulated as How Can We (HCW) questions as follows:

  • HCW identify relevant publications for curation in search results or a list of references during “triage”?

  • HCW identify species, experimental methods, molecules (primarily proteins) and other important entities (such as cells and tissues) in a publication during “triage” and “annotation”?

  • HCW help curators fill in the curation record more efficiently?

Community survey

The survey received 42 responses in total, covering a number of European countries, such as the United Kingdom, France, Italy and Switzerland. The majority of participants identified themselves as 'Scientific curator' with over 5 years' experience in curation. Broadly speaking, respondents mainly curate peer-reviewed articles (43.6%), followed by reviews (25.5%) and preprints (16%). Figure 2 shows their preferences in the types of articles for curation.


Figure 2. The pie chart shows the article types that are of interest to curators.

Figure 3 shows the article sections of interest to the curators: the majority look for the Methods sections, followed by figures/tables and their legends; supplementary data also emerged as a section of importance. Furthermore, as shown in Figure 4, the types of entities curators look for in articles were diverse, with preference given mainly to genes/proteins and their functions, database accession numbers, experimental methods and gene mutations.


Figure 3. The figure shows the article sections that are of interest to curators.


Figure 4. The graph provides an overview of the entity types of interest.

Respondents were asked whether it was useful to know that a given article had already been curated by another database: the results indicate that it was (see Figure 5). A follow-up question asked why this information was useful; the majority of responses cited avoiding duplication when curators belong to the same consortium, validation (in the case of ontology terms), and consistency in annotations.


Figure 5. The figure shows the response for the question: How useful is it to know if the article has been curated by another database?

Outcomes of the user research project

Text mining approaches are sophisticated and play a vital role in addressing big data questions, and their results can contribute to supplying "leads" on key papers for curation. However, curators require a wide variety of very precise information. Addressing each of those specific requirements will be a complex task, but text mining systems can certainly provide underlying services, based on broad commonalities in the requirements, that prove useful to curators. To this end, this effort has been useful in terms of understanding the main challenges faced by curators. While the sample size of the community survey was small, when analysed in conjunction with the observational study we found significant commonalities in work practices. For instance, when survey respondents were asked for their biggest challenge while curating, the majority of responses indicated finding relevant papers and identifying specific information, including genes and species.

Our research on curation practices so far indicates a need to better support curators on the following areas:

  • Identifying relevant papers for curation during "triage": an efficient route to article selection, in which search results can be prioritised based on a set of parameters.

  • Identifying species, relevant experimental methods, molecules (primarily proteins) and other important entities (such as cells and tissues) in a publication during "triage" and "annotation".

  • Retrieving particular sections of articles, such as Methods, Figures or Results.

  • Integrating triage systems into the various curation workflows.
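As an illustration of the section-retrieval need, open-access full text is served by Europe PMC as JATS XML (via its REST fullTextXML endpoint), from which a specific section can be extracted. The sketch below parses a minimal stand-in document; real articles are far less regular, so relying on the `sec-type` attribute or section title alone is a simplifying assumption.

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for a JATS full-text article; real full text is available
# from https://www.ebi.ac.uk/europepmc/webservices/rest/<PMCID>/fullTextXML
SAMPLE = """<article><body>
  <sec sec-type="intro"><title>Introduction</title><p>Background text.</p></sec>
  <sec sec-type="methods"><title>Methods</title><p>We used a yeast two-hybrid screen.</p></sec>
</body></article>"""

def section_text(xml_doc, wanted="methods"):
    """Return the paragraph text of the first <sec> matching by sec-type or title."""
    root = ET.fromstring(xml_doc)
    for sec in root.iter("sec"):
        title = (sec.findtext("title") or "").lower()
        if sec.get("sec-type") == wanted or title == wanted:
            return " ".join(p.text or "" for p in sec.findall("p"))
    return None

print(section_text(SAMPLE))  # → We used a yeast two-hybrid screen.
```

A section filter of this kind is exactly what would let curators jump straight to Methods, figure legends or Results instead of skimming the whole paper.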

Conclusion

Contributions made by manual curation are vital to the maintenance of biological databases. To maximise the impact of this critically important process, the latest technological advancements need to be leveraged. Under the Elixir Data Platform, we have established infrastructural elements to support scalable curation, including automated systems to ingest and aggregate text-mined annotations from various sources, APIs to redistribute the annotations, and an application called SciLite to display annotations on articles. However, a key challenge for scalable curation is to make use of such core components across different curation teams, whose requirements and workflows can be highly precise and vary widely. This requires engagement with the curation community to derive actionable insights that can contribute towards service delivery. This project therefore lays the foundation needed to understand the commonalities shared among various curation workflows. Going forward, we will use its results to feed into improvements in text-mined annotation quality and coverage, triage and browsing systems, and other engineering solutions.
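The redistribution API mentioned above can be queried programmatically. The sketch below builds a request URL for the Europe PMC Annotations API's annotationsByArticleIds endpoint; the parameter names follow its public documentation, but should be verified against the current docs before being relied upon, and the article ID used here is only an example.

```python
from urllib.parse import urlencode

# Europe PMC Annotations API endpoint for fetching annotations by article ID.
BASE = "https://www.ebi.ac.uk/europepmc/annotations_api/annotationsByArticleIds"

def annotations_url(source, article_id, ann_type="Gene_Proteins"):
    """Build a request URL for one article's text-mined annotations."""
    return BASE + "?" + urlencode({
        "articleIds": f"{source}:{article_id}",  # source: e.g. MED (PubMed) or PMC
        "type": ann_type,
        "format": "JSON",
    })

url = annotations_url("MED", "28585529")
print(url)
# Fetching this URL returns JSON annotations (entity text, type, positions)
# of the kind SciLite overlays on full-text articles.
```

Fetching such annotations per article is one concrete route by which triage systems could surface species, methods and other entities to curators before they open the paper.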

Data availability

The interview responses from the observation study have not been made public to protect the participants’ privacy. Please contact the corresponding author to apply for access to the data, providing details of the information required and the intended use of the data. Access to the data will be granted once permission from participants to share the data has been obtained.

Underlying data

Zenodo: Results of user research project to understand data curation practices. https://doi.org/10.5281/zenodo.3209658.

This project contains the following underlying data:

  • Curator persona.docx (the curator persona generated during the first part of the study).

  • Curator survey results.xlsx (raw data taken from the survey given to each participant).

Extended data

Zenodo: Results of user research project to understand data curation practices. https://doi.org/10.5281/zenodo.3209658.

This project contains the following extended data:

  • Observation study - interview guide.docx (interview guide outlines the type of questions to be asked)

  • Curator survey questions.docx (questionnaire given to each participant in the community survey).

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

How to cite this article: Venkatesan A, Karamanis N, Ide-Smith M et al. Understanding life sciences data curation practices via user research [version 1; peer review: 1 approved, 1 approved with reservations]. F1000Research 2019, 8(ELIXIR):1622 (https://doi.org/10.12688/f1000research.19427.1)
Open Peer Review

Reviewer Report 14 Oct 2019
Cecilia N. Arighi, Center for Bioinformatics and Computational Biology, University of Delaware, Newark, DE, USA 
Approved with Reservations
This work presents a study about the biocuration community and its literature-based curation practices. The work intends to identify pain points and commonalities in the curation workflow where ePMC infrastructure could work to assist this community.
…

How to cite this report:
Arighi CN. Reviewer Report For: Understanding life sciences data curation practices via user research [version 1; peer review: 1 approved, 1 approved with reservations]. F1000Research 2019, 8(ELIXIR):1622 (https://doi.org/10.5256/f1000research.21298.r53748)
Reviewer Report 07 Oct 2019
Lynette Hirschman, The MITRE Corporation, Bedford, MA, USA 
Approved
This is a well-designed and informative examination of data curation practices in support of the Elixir Data Platform, with a focus on exploring curator needs and pain points to identify where automated tools might help. The article presents results from …
How to cite this report:
Hirschman L. Reviewer Report For: Understanding life sciences data curation practices via user research [version 1; peer review: 1 approved, 1 approved with reservations]. F1000Research 2019, 8(ELIXIR):1622 (https://doi.org/10.5256/f1000research.21298.r53750)