Participation

Participation in the BioNLP shared task is free and open to all: academia and industry, individuals and groups alike. This page provides instructions and general guidelines for participation; please see the main page for descriptions of the tasks and other information.

Instructions for participation

The general flow of participation is outlined below. Please note that the BioNLP 2011 shared task evaluation is divided into two non-overlapping parts, the first for the supporting tasks and the second for the main tasks.

Registration

All participants should register for the shared task at this page.

Sample data can be downloaded without registration, but registration is required for training and development data access, use of the development test server, and final submission.

Registration does not require commitment to participate, and while registrants are requested to provide names and other identifying information to the organizers, this information will not be published. The BioNLP shared task allows anonymous participation: final results will be initially published without identifying information, and participants wishing to remain anonymous may withdraw from the task at this point.

Before training and development data release

Prior to the release of the training and development data (September 2010 for the supporting tasks, December 2010 for the main tasks), small data samples for each task are available. The samples contain only a small number of abstracts each and are not intended for training machine learning methods or for precise testing, but rather to serve as examples of the final data. The samples should allow participants to start on system design, general implementation, and, for rule-based systems, rule writing.

Participants wishing to train and test machine learning-based systems during this phase may find the data of the previous shared task, the BioNLP'09 shared task on event extraction, useful. Please note that while the general setup and data format of that task are the same and its event types correspond to those of the GENIA main task, some details, such as file naming conventions, differ.
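For reference, the BioNLP'09 data uses a simple standoff format: for each document, a .txt file holds the text, a .a1 file the given protein annotations, and a .a2 file the triggers and events. The following is a minimal, illustrative reading sketch in Python; the record types and the read_standoff helper are our own illustration rather than an official parser, and the 2011 file naming may differ as noted above.

    import os
    from collections import namedtuple

    # Illustrative record types; the field names are our own, not from the task definition.
    TextBound = namedtuple("TextBound", "id type start end text")   # "T" lines in .a1/.a2
    Event = namedtuple("Event", "id type trigger args")             # "E" lines in .a2

    def read_standoff(path_prefix):
        """Read one document given its path prefix, e.g. 'PMID-1234' for
        PMID-1234.txt / .a1 / .a2 (BioNLP'09-style naming; 2011 may differ)."""
        with open(path_prefix + ".txt") as f:
            text = f.read()
        entities, events = {}, {}
        for ext in (".a1", ".a2"):
            if not os.path.exists(path_prefix + ext):
                continue  # .a2 is absent for unannotated (e.g. test) documents
            for line in open(path_prefix + ext):
                fields = line.rstrip("\n").split("\t")
                if fields[0].startswith("T"):      # text-bound: "T1  Protein 0 5  IL-4"
                    ann_type, start, end = fields[1].split(" ")
                    entities[fields[0]] = TextBound(fields[0], ann_type,
                                                    int(start), int(end), fields[2])
                elif fields[0].startswith("E"):    # event: "E1  Binding:T3 Theme:T1 Theme2:T2"
                    parts = fields[1].split(" ")
                    ev_type, trigger = parts[0].split(":")
                    args = [tuple(a.split(":")) for a in parts[1:]]
                    events[fields[0]] = Event(fields[0], ev_type, trigger, args)
        return text, entities, events

This sketch skips equivalence ("*") and modification ("M") lines, which a complete reader would also need to handle.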

System training and development phase

For the supporting tasks, the training and development data will be released in September 2010 and the test data in October 2010. For the main tasks, the training and development data will be available in December 2010 and the test data in March 2011. The period between these releases, approximately one month for the supporting tasks and three months for the main tasks, is the primary system training and development phase of the shared task.

During this phase, an online submission system that accepts system outputs for the development test data and returns evaluation results will be available. Its interface is identical to that of the submission system for final results. We strongly encourage participants to test submitting their system outputs to the online development test set evaluation to avoid surprises at final submission.
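While the online service is authoritative, participants may find it useful to sanity-check their outputs locally before submitting. The following is a minimal sketch of a strict-matching precision/recall/F-score computation over sets of event signatures; the tuple representation is our own assumption, and the official evaluation applies more refined matching criteria (e.g. approximate span matching), so locally computed figures are only a rough guide.

    def prf(gold, predicted):
        """Strict precision/recall/F1 over two sets of hashable event signatures,
        e.g. (doc_id, event_type, trigger_span, frozenset(args)) tuples.
        Simplified local check only; the official matching criteria are richer."""
        tp = len(gold & predicted)
        p = tp / float(len(predicted)) if predicted else 0.0
        r = tp / float(len(gold)) if gold else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f

    # Toy usage with hypothetical signatures:
    gold = {("PMID-1", "Gene_expression", (10, 20), frozenset([("Theme", "T1")]))}
    pred = {("PMID-1", "Gene_expression", (10, 20), frozenset([("Theme", "T1")])),
            ("PMID-1", "Binding", (30, 37), frozenset([("Theme", "T2")]))}
    print(prf(gold, pred))  # (0.5, 1.0, 0.666...)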

Test phase

Test data will be released in October 2010 for the supporting tasks and in March 2011 for the main tasks. Following test data release, participants will have a period of approximately one week to submit their final results.

The submission system for final results will open at the same time as the test data is made available. To encourage early submission testing and to help ensure that the format of the final submission is correct, the system accepts any number of submissions. However, to prevent fine-tuning against the test set, the final submission system does not provide immediate feedback on results. Additionally, only the last submission from each participant is considered in the evaluation; any prior submissions are simply discarded.

Final results will be announced to all participants two days after the close of the submission system.

Please note that while the shared task test data will be made available for further testing through the web interface after the shared task, the full gold annotations for this data will not be released at that time. This allows the test data to continue to serve as the basis for stable evaluation after the task in a way that minimizes the risk of overfitting or otherwise unrealistic results. Participants wishing to perform manual error analysis are encouraged to do so on the development test data, which should have statistically identical properties to the test data. A date for the release of the gold annotations will be set later.

After the evaluation

After the final results for the supporting tasks are published, the system training and development phase for the main tasks begins. After the final results for the main tasks are published, the evaluation part of the BioNLP 2011 shared task is over.

After completion of the evaluation, all participants are encouraged to write a manuscript describing their system, analysis, and results for submission to the BioNLP 2011 shared task workshop, details of which will be announced later. The manuscript submission deadline is in April 2011. Detailed instructions for authors will be made available before the end of the evaluation phase.

The BioNLP 2011 shared task workshop will be held in summer 2011.

Guidelines for participation

Participants are encouraged to

Use any text resource

In addition to the training data for each task, participants are encouraged to make use of the development data for the task, the training and development data for any other task (or the 2009 task), any ontologies, large-scale unannotated resources (e.g. PubMed, PMC), and any other annotated corpus, in any way they see fit. The only restriction is that participants may not use human annotations for the test data (see also below).

Use any tool

Sentence splitters, parsers, taggers, coreference resolution tools, and any other tools can be used, whether previously available or newly introduced, retrained, or otherwise adapted by the participants. Using previously introduced tools to take part in the tasks they are intended to address is explicitly permitted: for example, participants may use a coreference resolution tool they have not developed themselves as part of their system for the supporting task on coreference.

Participants are also encouraged to make use of available event extraction systems and may submit results from their own previously introduced systems. However, submitting the output of a system developed by another group (even if retrained) is not sufficient for participation, and participants incorporating event extraction systems introduced by others into their own systems should carefully evaluate the contribution of their proposed extensions or modifications to the performance of the base system.

We ask participants to observe the following minimal restrictions.

One final submission per team per task

Participants may take part in any number and any combination of supporting and main tasks, and there are no limits on the use of the development test evaluation system or on the number of attempts to submit final results. However, only one final submission per team will be considered for each task; that is, the shared task does not allow multiple "runs" against the final test data.

No human annotation of test data

Participants are encouraged to use external tools and resources, including manually annotated resources other than the shared task training data. However, participants must not perform any manual annotation of the final test data or use manual annotations of it created by other groups. When making use of annotated corpora, we ask participants to make sure that these do not overlap with the final test data, e.g. by checking for PMID overlap, as in the sketch below.
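As one simple way to perform this check, the following sketch collects PMIDs from document file names in two directories and reports any intersection. The PMID-based file naming and the directory paths are assumptions to be adapted to the actual data.

    import glob
    import os
    import re

    def pmids_in(directory):
        """Collect PMIDs from files named like 'PMID-1234.txt' in a directory.
        The naming scheme is an assumption; adapt the pattern to the actual data."""
        ids = set()
        for path in glob.glob(os.path.join(directory, "*.txt")):
            m = re.search(r"(\d+)", os.path.basename(path))
            if m:
                ids.add(m.group(1))
        return ids

    external = pmids_in("external_corpus")   # hypothetical paths
    test = pmids_in("bionlp2011_test")
    overlap = external & test
    if overlap:
        print("Remove %d overlapping documents:" % len(overlap), sorted(overlap))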