Keywords
Database curation, User research, Observational study, Curator survey, Annotation Infrastructure, Europe PMC
Biological databases play a key role in knowledge discovery in life science research. A major contributor towards the maintenance of these databases is the process of manual curation. Curation is a high-value task: experts carefully examine the relevant scientific literature and extract the essential information, such as biological functions and relationships between biological entities, generating the corresponding database records in a structured way. Advances in high-throughput technologies have resulted in tremendous growth of biological data and, consequently, in the number of research papers being published. As a result, the demand for high-quality curation that makes use of these resources has never been higher, but this demand presents challenges for curators in finding and assimilating the scientific conclusions described in the literature.
Text mining, machine learning and analytics promise to provide better ranking of reading lists, classification of articles, and identification of assertions with their biological context and evidence buried within the text of articles. To this end, many life science knowledgebases now include text mining (to varying degrees) in curation workflows. For example, databases such as neXtProt1,2 and FlyBase3 have integrated text mining algorithms into their curation workflows to retrieve a ranked list of relevant articles and tag entities of interest, and tools such as PubTator4 and TextPresso5 have been adopted by some curation communities. On the other hand, databases that mainly rely on manual curation, such as IntAct6 and DisProt7, are exploring possibilities to leverage text mining approaches to select articles for further curation.
Broadly speaking, the curation community recognises the potential of text mining in article triage and the identification of entities/concepts for curation. Nevertheless, text mining pipelines adopted thus far have been engineered to cater to specific domains or projects and wide uptake is lacking; curators often continue to use manual curation methods. This mainly stems from the wide variety of very precise information required by curators. The challenge is therefore to produce robust systems that both address the immediate and specific needs of curators as well as scale across multiple curation groups. In order to do this, we need to know the immediate challenges faced by curators with respect to selection and prioritisation of articles to curate. A clear understanding of the requirements will help build new systems and/or re-orient existing systems that cater to the needs of the curation community.
In this report we describe the outcomes of a user research project conducted to understand curation practices and priorities for article selection. The project comprised two parts: a) an observational study, to understand how curators select articles to curate and to identify commonalities in curator requirements; and b) a community survey, to identify the immediate priorities of curators, such as the entity types and article sections of most interest. The aim of this study is to identify specific actions for the ELIXIR Data Platform in the future, optimising and extending existing systems and infrastructural components. In the subsequent sections we present the main findings from our investigation.
We initially drafted an interview guide (list of questions available as Extended data8) and a preliminary curator persona, reflecting our initial hypotheses about curators and their work practices. Following this we selected five curation teams. The selection criterion was the type of curation the group was involved with, i.e., extracting biological information from scientific literature and integrating it into a biological database, as opposed to groups that process raw data submissions (such as sequencing data). Two of these teams were based at EMBL-EBI; the other three were based in Norway, Switzerland and Italy. For teams situated at EMBL-EBI the interviews were held in person; for the other teams they were conducted over conference calls. The sessions took place over a period of two months, between March and April 2018. We observed three project leaders and four curators from the selected teams. The participants belonged to teams that curate very specific experimental evidence, primarily on proteins, such as protein-protein interactions, the role of a protein in a complex, protein disruption, human protein functions and transcription factor regulation. One team focused on the annotation of human genes relevant to a particular disease and another on curating publications reporting associations of genetic variants with diseases.
For each session we followed an iterative user research process9,10 (as outlined in Table 1):
The table provides an overview of the observational study on curation practices.
The participants were asked to proceed with their daily curation work11 and were observed on how:
○ they select the entities they wish to curate (taken either from a spreadsheet or from a partially curated data record),
○ they perform searches (including the query parameters) to retrieve the initial set of publications,
○ they decide whether to select or discard an article,
○ they transfer the information from a selected article to the respective curation platform.
Using the “What? So What? Now What?” method1, we transcribed our notes from these sessions and identified the most important observations, patterns and their implications.
Additionally, we carried out two follow-up interviews with one project lead and one curator from the same team at EMBL-EBI to clarify particular curation tasks.
Furthermore, we conducted a stakeholder workshop12 with three project leads and four curators from four different curation teams to validate the main learnings from the interview sessions. As some curators took part in both the interviews and the workshop, overall we engaged with 12 curators and team leads from seven different teams. The participants were presented with the preliminary curator persona13 and a workflow outlining the curation process. They were invited to give feedback on these drafts and to express their challenges or pain points as How Can We (HCW) questions2. Their feedback was used to revise the curator persona and the curation process workflow, and was consolidated into the curation experience map.
Based on the interview guide used for the observational study, we formulated questions to understand the immediate challenges. The survey consisted of 15 questions (see Extended data8) covering, for instance, the article sections curators are most interested in, the types of biological entities they look for, and whether it helps to know that a given article has already been curated by another database. The community survey was conducted online and was developed using Typeform. It was promoted via the mailing lists of various consortia, such as ELIXIR, the International Society for Biocuration (ISB) and the Alliance of Genome Resources; these widely known consortia provide a forum for developers, researchers and curators to streamline and standardise the maintenance of biological resources. The survey was conducted between December 2018 and January 2019.
We confirm that we have obtained consent to use data from the participants as per the Europe PMC privacy notice, which is formulated in accordance with EMBL’s data protection framework. The consent was part of the survey form, and participants could only take the survey after accepting the terms and conditions for data re-use.
Curator persona. We created a persona called Ashley (see Underlying data8) to present the curators’ needs, sentiments, tasks and pain points from their own perspective in more detail to help us empathise with them13:
Ashley curates with precision and attention to detail, while trying to be as efficient as possible. Ashley is looking for very specific information about an experiment that the authors of a paper do not always report in a lot of detail. Ashley appreciates being able to ask a team mate when the “detective work” bears no fruit.
Apart from these “organic, informal discussions”, Ashley works independently during “triage”, “annotation” and filling in the curation record in the Editor. During the latter stage Ashley tries to “translate from the author’s language to the curator’s language”, using the appropriate identifiers and Controlled Vocabulary (CV) terms for species, proteins, methods and other important entities, so that the curated evidence is referred to precisely and consistently and the annotations in the curation record are self-explanatory outside the context of the paper.
This is cumbersome as a particular type of evidence is not always referred to in the same way and in enough detail in the literature. Moreover, the Editor is not integrated with the search and annotation tools, so Ashley spends a lot of time going back and forth between the paper and the Editor, translating the curatable text from the paper into CV terms, switching browser tabs and consulting notes from the online research and team discussions.
Curation experience map. The curation experience map in Figure 1 presents the identified pain points in the context of the main curation activities. As shown in the map, curation consists of four stages:
a) Deciding which entity (primarily protein in this case) to curate. What to curate often depends on the curator’s background and the project that they are working on.
b) “Triaging” the literature to identify relevant publications.
c) “Annotating” a relevant publication to identify the precise curatable information in detail, including determining the species and the relevant experimental method.
d) Filling in the curation record based on the curatable information in the publication (which is often done in parallel with annotating the paper).
In a typical curation scenario, curators:
Search PubMed for a protein and scan the titles in the search results to identify relevant experimental papers during “triage” (a programmatic sketch of this search step follows this list).
If a title indicates that the paper is relevant (e.g. by mentioning the protein and/or species of interest), then they skim read the Methods and the Results of the paper. They are particularly interested in Figures, Tables and their Legends, which is where they usually find the key (curatable) information.
These sections are read more thoroughly during the “annotation” stage to identify the exact experimental context that needs to be curated. They may “glance through” the Abstract or skip it altogether.
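To make the “triage” search step above concrete, the following is a minimal sketch (in Python, using the public Europe PMC search REST API rather than PubMed) of how a curator’s initial query could be run programmatically. The query string and protein name are illustrative only, and the result fields used should be verified against the current API documentation.

```python
import requests

# Public Europe PMC search REST API (no key required)
SEARCH_URL = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def search_articles(query, page_size=25):
    """Return the first page of search hits for a triage query."""
    resp = requests.get(SEARCH_URL, params={
        "query": query,
        "format": "json",
        "pageSize": page_size,
    })
    resp.raise_for_status()
    return resp.json()["resultList"]["result"]

# Illustrative query: a protein name, restricted to open-access articles
for hit in search_articles('"BRCA1" AND OPEN_ACCESS:y'):
    # Each hit carries bibliographic fields such as id, title and pubYear;
    # a curator would scan these titles to decide what to read further
    print(hit.get("pmid", hit["id"]), "-", hit["title"])
```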
A pain point during “triage” is identifying relevant publications: The curators reported that most publications returned in a PubMed search are usually not relevant (i.e. they are false positives). Additionally, because they are often looking for very specific and at times underreported experimental evidence, some searches return very few papers or no papers at all.
Ambiguities in the paper about species, proteins, and relevant experimental methods may slow down curation significantly during “triage” and “annotation”. The curators highlighted identifying the species as their main pain point: this task may take up to “75% of the curation effort”, and in the end the species may turn out to be irretrievable from the paper.
Furthermore, to get clarity on the details of an experiment, curators would look up specific references in the paper and do further research online. If this “detective work” is not successful, the curators will either not annotate the paper or will provide fewer annotations. The curators would also discuss unresolved questions with their teammates during “organic, informal discussions”, such as how to annotate unusual data or what to do when there is no curatable data. As a last resort they would contact the authors directly; however, authors often do not respond to requests for clarification.
If there are no matching CV terms to annotate the paper, new ones are requested. This can delay annotating the paper because the ontology staff are often different from those curating the papers, and requesting new terms sometimes results in prolonged discussions between curators and ontologists.
It was observed that curators use different tools at each stage, which are not integrated with each other. During “triage” they would search PubMed for relevant publications and then look at a particular paper on the publisher’s site. Annotating a publication may involve downloading or printing the PDF version of the paper and highlighting curatable text. To fill in the curation record they use a bespoke tool they call “the Editor”, which presents a template to be filled in with molecule names and experimental context, supported by standardised identifiers, controlled vocabularies and ontologies as well as free text describing the experimental evidence. Most of the time, the assertions that go into the database are not expressed in the paper in the same words.
Summary of the pain points in curation workflows. The identified pain points were formulated as How Can We (HCW) questions as follows:
HCW identify relevant publications for curation in search results or a list of references during “triage”?
HCW identify species, experimental methods, molecules (primarily proteins) and other important entities (such as cells and tissues) in a publication during “triage” and “annotation”?
HCW help curators fill in the curation record more efficiently?
The survey received responses from 42 participants in total, covering a number of European countries, such as the United Kingdom, France, Italy and Switzerland. The majority of the participants identified themselves as ‘Scientific curator’ with over five years’ experience in curation. Broadly speaking, respondents mainly curate peer-reviewed articles (43.6%), followed by reviews (25.5%) and preprints (16%). Figure 2 shows their preferences in the type of articles for curation.
Figure 3 shows the article sections of interest to the curators: the majority look for the Methods section, followed by figures/tables and their legends; supplementary data also emerged as important. Furthermore, as shown in Figure 4, the types of entities curators look for in articles were diverse, with preference given mainly to genes/proteins and their functions, database accession numbers, experimental methods and gene mutations.
Respondents were asked whether it is useful to know that a given article has already been curated by another database: the results indicate that it is (see Figure 5). A follow-up question asked why this is useful; the majority of responses cited avoiding duplication when curators belong to the same consortium, validation (for instance of ontology terms), and consistency in annotations.
Text mining approaches are sophisticated and play a vital role in addressing big data questions, and their results can supply “leads” on key papers for curation. However, curators require a wide variety of very precise information. Addressing each of those specific requirements will be a complex task, but text mining systems can certainly provide underlying services, based on broad commonalities in the requirements, that prove useful to curators. To this end, this effort has been useful for understanding the main challenges faced by curators. While the sample size of the community survey was small, when analysed in conjunction with the observational study we found significant commonalities in work practices. For instance, when survey respondents were asked for their biggest challenge while curating, the majority of responses pointed to finding relevant papers and identifying specific information such as genes and species.
Our research on curation practices so far indicates a need to better support curators in the following areas:
Identifying relevant papers for curation during “triage”: a more efficient route to article selection, where search results could be prioritised based on a set of parameters (a toy illustration follows this list).
Identifying species, relevant experimental methods, molecules (primarily proteins) and other important entities (such as cells and tissues) in a publication during “triage” and “annotation”.
Retrieving specific sections of articles, such as Methods, Figures or Results.
Integrating triage systems into the various curation workflows.
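As a toy illustration of the first point above, and not a method proposed in this study, search hits could be prioritised by a simple heuristic such as co-mention of the protein and species of interest in the title; a production triage system would use trained rankers instead. The function and example data below are hypothetical.

```python
# Toy prioritisation heuristic for "triage" (illustrative only): surface
# titles that mention both the protein and the species of interest first.

def triage_rank(hits, protein, species):
    """Sort search hits by how many of the two terms the title mentions."""
    def score(hit):
        title = hit.get("title", "").lower()
        return (protein.lower() in title) + (species.lower() in title)
    return sorted(hits, key=score, reverse=True)

hits = [
    {"title": "A review of DNA repair pathways"},
    {"title": "BRCA1 ubiquitination in human cells"},
    {"title": "BRCA1 function and genome stability"},
]
for hit in triage_rank(hits, "BRCA1", "human"):
    print(hit["title"])
# Prints the title mentioning both terms first, then one term, then none
```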
Contributions made by manual curation are vital to the maintenance of biological databases. To maximise the impact of this critically important process, the latest technological advancements need to be leveraged. Under the ELIXIR Data Platform, we have established infrastructural elements to support scalable curation, including automated systems to ingest and aggregate annotations from various sources, APIs to redistribute the annotations, and an application called SciLite to display annotations on articles. However, a key challenge for scalable curation is to make use of such core components across different curation teams, whose requirements and workflows can be highly precise and vary widely. Consequently, this requires engagement with the curation community to derive actionable insights that can contribute towards service delivery. This project therefore lays the foundation needed to understand the commonalities shared among various curation workflows. Going forward, we will use its results to feed into improvements to text-mined annotation quality and coverage, triage and browsing systems, and other engineering solutions.
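To illustrate one such core component, the sketch below fetches text-mined annotations for an article via the Europe PMC Annotations API, which serves the annotations that SciLite displays. The article identifier is a placeholder, and the annotation type and response fields used (‘exact’, ‘section’) should be verified against the current API documentation.

```python
import requests

# Europe PMC Annotations API: redistributes the text-mined annotations
# that SciLite overlays on articles
ANNOTATIONS_URL = ("https://www.ebi.ac.uk/europepmc/annotations_api/"
                   "annotationsByArticleIds")

def get_annotations(article_id, ann_type):
    """Fetch annotations of one type for an article.

    article_id uses the source:id form, e.g. 'PMC:4340037' or 'MED:28585529'.
    """
    resp = requests.get(ANNOTATIONS_URL, params={
        "articleIds": article_id,
        "type": ann_type,
        "format": "JSON",
    })
    resp.raise_for_status()
    return resp.json()

# Placeholder article; organism mentions were among curators' priorities
for article in get_annotations("PMC:4340037", "Organisms"):
    for ann in article.get("annotations", []):
        # 'exact' is the annotated text span; 'section' locates it in the paper
        print(ann.get("exact"), "|", ann.get("section"))
```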
The interview responses from the observation study have not been made public to protect the participants’ privacy. Please contact the corresponding author to apply for access to the data, providing details of the information required and the intended use of the data. Access to the data will be granted once permission from participants to share the data has been obtained.
Zenodo: Results of user research project to understand data curation practices. https://doi.org/10.5281/zenodo.3209658.
This project contains the following underlying data:
Zenodo: Results of user research project to understand data curation practices. https://doi.org/10.5281/zenodo.3209658.
This project contains the following extended data:
Observation study - interview guide.docx (the interview guide outlines the types of questions asked)
Curator survey questions.docx (questionnaire given to each participant in the community survey).
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
We are grateful to our participants, to Francisco Talo for his involvement and to Ane Møller Gabrielsen for her comments.