HED annotation quickstart¶
This tutorial takes you through the steps of annotating events using HED (Hierarchical Event Descriptors). It focuses on how to make good choices of HED annotations so that your data is usable for downstream analysis. The mechanics of putting your selected HED annotations into BIDS (Brain Imaging Data Structure) format are covered in the BIDS annotation quickstart guide.
What is HED annotation?¶
A HED annotation consists of a comma-separated list of tags selected from a HED vocabulary or schema. An important reason for using an agreed-upon vocabulary rather than free-form tagging is to avoid confusion and ambiguity and to promote data sharing.
The basic terms are organized into trees for easier access and search. The Expandable HED vocabulary viewer allows you to explore these terms.
A recipe for simple annotation¶
In thinking about how to annotate an event, you should always start by selecting a tag from the Event subtree to indicate the general event category. Possible choices are: Sensory-event, Agent-action, Data-feature, Experiment-control, Experiment-procedure, Experiment-structure, and Measurement-event. See the Expandable HED vocabulary viewer to view the available tags.
Most experiments have only a few distinct types of events. The simplest way to create a minimal HED annotation for your events is:

1. Select one of the 7 tags from the Event subtree to designate the general category of the event.
2. Use the following table to select the appropriate supporting tags for that event type.
Standard HED tag selections for minimal annotation.
| Event tag | Support tag type | Example tags | Reason |
| --- | --- | --- | --- |
| Sensory-event | Sensory-presentation | Visual-presentation | Which sense? |
| | Task-event-role | Experimental-stimulus | What task role? |
| | Task-stimulus-role | Cue | Stimulus purpose? |
| | Item | (Face, Image) | What is presented? |
| | Sensory-attribute | Red | What modifiers are needed? |
| Agent-action | Agent-task-role | Experiment-participant | Who is agent? |
| | Action | Move | What action is performed? |
| | Task-action-type | Appropriate-action | What task relationship? |
| | Item | Arm | What is action target? |
| Data-feature | Data-source-type | Expert-annotation | Where did the feature come from? |
| | Label | Label/Blinker_BlinkMax | Tool name? |
| | Data-value | Percentage/32.5 | Feature value or type? |
| Experiment-control | Agent | Controller-Agent | What is the controller? |
| | Informational | Label/Stop-recording | What did the controller do? |
| Experiment-procedure | Task-event-role | Task-activity | What procedure? |
| Experiment-structure | Organizational-property | Time-block | What structural property? |
| Measurement-event | Data-source-type | Instrument-measurement | Source of the data. |
| | Label | Label/Oximeter_O2Level | Instrument name? |
| | Data-value | Percentage/32.5 | What value or type? |
As in BIDS, we assume that the event metadata is given in tabular form. Each table row represents the metadata associated with a single data event marker, as shown in the following excerpt of the events.tsv file for a simple Go/No-go experiment. The onset column gives the time of the marker, in seconds, relative to the beginning of the associated data file.
Event file from a simple Go/No-go experiment.
| onset | duration | event_type | value | stim_file |
| --- | --- | --- | --- | --- |
| 5.035 | n/a | stimulus | animal_target | 105064.jpg |
| 5.370 | n/a | response | correct_response | n/a |
| 6.837 | n/a | stimulus | animal_distractor | 38068.jpg |
| 8.651 | n/a | stimulus | animal_target | 136095.jpg |
| 8.940 | n/a | response | correct_response | n/a |
| 10.801 | n/a | stimulus | animal_distractor | 38014.jpg |
| 12.684 | n/a | stimulus | animal_distractor | 82063.jpg |
| 12.943 | n/a | response | incorrect_response | n/a |
In the Go/No-go experiment, the experimental participant is presented
with a series of target and distractor animal images.
The participant is instructed to lift a finger off a button
when a target animal image appears.
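Such an events.tsv file is plain tab-separated text, so it can be read with standard tools. Here is a minimal sketch using only Python's standard library; the inline string stands in for the real file on disk:

```python
import csv
import io

# Excerpt of the Go/No-go events.tsv file shown above (tab-separated,
# with "n/a" marking missing values, following the BIDS convention).
EVENTS_TSV = (
    "onset\tduration\tevent_type\tvalue\tstim_file\n"
    "5.035\tn/a\tstimulus\tanimal_target\t105064.jpg\n"
    "5.370\tn/a\tresponse\tcorrect_response\tn/a\n"
)

def read_events(text):
    """Parse BIDS-style tabular event metadata into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

events = read_events(EVENTS_TSV)
# The onset column is in seconds relative to the start of the data file.
print(float(events[0]["onset"]))   # -> 5.035
```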
Since in this experiment the value column has a distinct entry for each unique event type, the event_type column is redundant. In this case we can assign all of the annotations to the value column, as demonstrated in the following example.
Version 1: Assigning all annotations to the value column.
| value | Event category | Supporting tags |
| --- | --- | --- |
| animal_target | Sensory-event | Visual-presentation, Experimental-stimulus, |
| animal_distractor | Sensory-event | Visual-presentation, Experimental-stimulus, |
| correct_response | Agent-action | Experiment-participant, (Lift, Finger), Correct-action |
| incorrect_response | Agent-action | Experiment-participant, (Lift, Finger), Incorrect-action |
The table above shows the event category and the supporting tags suggested in the Standard HED tag selections for minimal annotation table.
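Conceptually, Version 1 is just a lookup table from each entry of the value column to one complete HED string. A minimal Python sketch of that idea, using the two response annotations from the table above:

```python
# Version 1 as a lookup: value-column entry -> complete HED annotation string.
# (Only the two response annotations from the table above are shown.)
VALUE_ANNOTATIONS = {
    "correct_response":
        "Agent-action, Experiment-participant, (Lift, Finger), Correct-action",
    "incorrect_response":
        "Agent-action, Experiment-participant, (Lift, Finger), Incorrect-action",
}

def annotate(value):
    """Look up the single HED string annotating one event."""
    return VALUE_ANNOTATIONS[value]

print(annotate("correct_response"))
```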
A better format for your annotations is the 4-column spreadsheet format described in BIDS annotation quickstart, since there are online tools to convert this format into a JSON sidecar that can be deployed directly in a BIDS dataset.
4-column spreadsheet format for the previous example.
| column_name | column_value | description | HED |
| --- | --- | --- | --- |
| value | animal_target | A target animal image was | Sensory-event, Visual-presentation, |
| value | animal_distractor | A non-target animal distractor | Sensory-event, Visual-presentation, |
| value | correct_response | Participant correctly | Agent-action, Experiment-participant, |
| value | incorrect_response | Participant lifted finger off | Agent-action, Experiment-participant, |
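Each 4-column row maps mechanically onto one entry of a BIDS JSON sidecar: rows are grouped by column_name, and within each group the HED strings are keyed by column_value. The online tools do this conversion for you; the sketch below only illustrates the structure, with abbreviated placeholder annotations:

```python
import json

# 4-column rows as (column_name, column_value, HED); the HED strings here
# are abbreviated placeholders, not the full annotations.
ROWS = [
    ("value", "correct_response", "Agent-action, Correct-action"),
    ("value", "incorrect_response", "Agent-action, Incorrect-action"),
]

def to_sidecar(rows):
    """Group 4-column rows into a BIDS JSON sidecar structure."""
    sidecar = {}
    for column_name, column_value, hed in rows:
        sidecar.setdefault(column_name, {"HED": {}})["HED"][column_value] = hed
    return sidecar

print(json.dumps(to_sidecar(ROWS), indent=2))
```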
HED tools assemble the annotations for each event into a single HED tag string. An exactly equivalent version of the previous example splits the HED annotation between the event_type and value columns, as shown in the next example.
Version 2: Assigning annotations to multiple event file columns.
| column_name | column_value | description | HED |
| --- | --- | --- | --- |
| event_type | stimulus | An image of an animal | Sensory-event, |
| event_type | response | Participant lifted finger | Agent-action, |
| value | animal_target | A target animal image. | Target, (Animal, Image) |
| value | animal_distractor | A non-target animal image | Non-target, Distractor, |
| value | correct_response | The previous stimulus | Correct-action |
| value | incorrect_response | The previous stimulus | Incorrect-action |
| stim_file | n/a | Filename of stimulus image. | (Image, Pathname/#) |
In Version 2, the annotations that are common to all stimuli and responses are assigned to the event_type column. We have also included the annotation for the stim_file column in the last row of the table.
The assembled annotation for the first event (with onset 5.035) in the Go/No-go event file excerpt above is:
Sensory-event, Visual-presentation, Experimental-stimulus, Target, (Animal, Image), (Image, Pathname/105064.jpg)
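The assembly step itself can be sketched in plain Python: look up each column's annotation, substitute the row's cell value for the # placeholder in value-taking columns, skip n/a cells, and join the pieces with commas. This illustrates the idea only, not the actual HED tools implementation; the annotation strings below are taken from the assembled example above:

```python
# Per-column annotations (from the assembled example above). A dict maps
# categorical values to tags; a plain string uses '#' as a value placeholder.
COLUMN_ANNOTATIONS = {
    "event_type": {"stimulus":
                   "Sensory-event, Visual-presentation, Experimental-stimulus"},
    "value": {"animal_target": "Target, (Animal, Image)"},
    "stim_file": "(Image, Pathname/#)",
}

def assemble(row):
    """Join the HED annotations contributed by each annotated column."""
    pieces = []
    for column, annotation in COLUMN_ANNOTATIONS.items():
        cell = row.get(column, "n/a")
        if cell == "n/a":                    # n/a cells contribute nothing
            continue
        if isinstance(annotation, dict):
            pieces.append(annotation[cell])              # categorical column
        else:
            pieces.append(annotation.replace("#", cell)) # value placeholder
    return ", ".join(pieces)

row = {"onset": "5.035", "event_type": "stimulus",
       "value": "animal_target", "stim_file": "105064.jpg"}
print(assemble(row))
# -> Sensory-event, Visual-presentation, Experimental-stimulus,
#    Target, (Animal, Image), (Image, Pathname/105064.jpg)
```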
Splitting annotations across multiple columns often makes the annotation process simpler, especially as annotations become more complex. A multiple-column representation can also make analysis easier, particularly when the columns represent information such as experimental design variables.
See the BIDS annotation quickstart for how to create templates to fill in with your annotations using online tools. Once you have completed the annotation and converted it to a sidecar, you simply place the sidecar in the root directory of your BIDS dataset.
This quickstart demonstrates only the most basic HED annotations. HED is capable of much more extensive and expressive annotation, as explained in a series of tutorials on this site.