
One modality to rule them all? #1324

Open
VisLab opened this issue Oct 14, 2022 · 5 comments

@VisLab (Member) commented Oct 14, 2022

I hesitate to open this can of worms, but as more auxiliary modalities (e.g., motion, eye tracking) are incorporated into BIDS, analyzing multiple modalities in conjunction becomes problematic, and this is a concern for downstream users of the data.

As far as I can tell, each modality is treated completely independently of the others, with no standardized information about how the recordings sync up. I think the motion BEP included timestamps at one point (I don't know whether that made it into the final version being incorporated into BIDS), but timestamps alone are insufficient for analysis in conjunction with modalities such as EEG: the clocks of different devices do not run at exactly the same rate, and these recordings can run for a considerable length of time.

I would like to raise this issue for consideration. Does anyone have thoughts on how this might be addressed in BIDS?
Is anyone interested in working on this?

The only idea I have come up with so far is a "Sync" modality whose data consist of the sync markers for the individual data streams, which tools could then use to synchronize the streams downstream for analysis (something like what Lab Streaming Layer (LSL) provides).
This would preserve the current model of modality independence and leave the actual imaging data as is, while still allowing tools to synchronize everything for downstream analysis.
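To make the sync-marker idea concrete: a downstream tool could use markers shared across streams to map one recording's clock onto another's. The sketch below is purely illustrative (the function name, drift/offset values, and event times are all invented); it fits a linear clock model (offset plus constant drift) by least squares over the shared markers:

```python
import numpy as np

def fit_clock_mapping(markers_a, markers_b):
    """Least-squares linear fit mapping clock-B times onto clock A.

    markers_a and markers_b hold timestamps of the same sync events
    as seen by each recording's own clock. Returns (drift, offset)
    such that t_a ~= drift * t_b + offset.
    """
    drift, offset = np.polyfit(markers_b, markers_a, deg=1)
    return drift, offset

# Hypothetical example: clock B runs 0.01% slow and starts 2.5 s
# late relative to clock A (all numbers made up for illustration).
t_a = np.linspace(0, 600, 50)      # sync pulses on clock A (s)
t_b = (t_a - 2.5) / 1.0001         # the same pulses on clock B

drift, offset = fit_clock_mapping(t_a, t_b)

# Remap event times recorded on clock B into clock A's time base.
events_b = np.array([10.0, 120.0, 480.0])
events_a = drift * events_b + offset
```

A linear model only captures constant drift; clocks that wander over long recordings would need a piecewise or robust fit over many markers.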

@sappelhoff (Member):

cc @sjeung @JuliusWelzel

@JuliusWelzel (Collaborator):

Dear Kay,
first of all, great thread title ;)

You are absolutely right; it would be great to clarify this in more detail across multiple modalities. For MOTION-BIDS we currently plan to advise using the scans.tsv to synchronize between multiple recordings:

"Synchronising motion data with other modalities can be achieved by using the scans.tsv file, which contains the offsets between shared files. No main modality is defined, which assigns each data type equal importance. When multiple data types are recorded simultaneously, it may be necessary to match events.tsv files shared alongside one modality with another."

If timestamps per sample are available, we have also drafted a paragraph in the spec on why it makes sense to share these as a dedicated channel.
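As a rough illustration of the scans.tsv approach, the offsets between recordings can be derived from the acq_time column. The sketch below is a minimal example: the filenames and timestamps are invented, and only the filename and acq_time columns that BIDS defines for scans.tsv are assumed.

```python
import csv
import io
from datetime import datetime

# Hypothetical scans.tsv content; filenames and times are made up.
scans_tsv = (
    "filename\tacq_time\n"
    "eeg/sub-01_task-walk_eeg.edf\t2022-10-14T12:00:00\n"
    "motion/sub-01_task-walk_tracksys-imu_motion.tsv\t"
    "2022-10-14T12:00:03.250\n"
)

rows = csv.DictReader(io.StringIO(scans_tsv), delimiter="\t")
starts = {r["filename"]: datetime.fromisoformat(r["acq_time"]) for r in rows}

# Offset of each recording's start relative to the earliest one, in seconds.
t0 = min(starts.values())
offsets = {f: (t - t0).total_seconds() for f, t in starts.items()}
```

Note that this only aligns recording onsets; it does not correct for clock drift during the recording, which is where per-sample timestamps or sync markers would come in.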
However, I would be happy to help draft a more general solution.

Best,
Julius

@Remi-Gau (Collaborator):

@VisLab I think this is very closely related to #86.

@VisLab (Member, Author) commented Oct 18, 2022

@VisLab I think this is very closely related to #86.

Yes... it is telling that that issue dates from 2018.

@VisLab (Member, Author) commented Oct 18, 2022

For MOTION-BIDS we currently plan to advise using the scans.tsv to synchronize between multiple recordings...

This is a start, but not really a full solution. As a side note, I would strongly lobby to make the scans.tsv file and acq_time strongly recommended. Subjects have circadian rhythms that affect the brain, and downstream analysts may want to control for that. Furthermore, we sometimes want to know how much time has elapsed in the session for time-on-task questions, so knowing when the various scans started within the session can be quite important.
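If acq_time were reliably present, covariates such as time of day (for circadian effects) and elapsed time in the session (for time-on-task) would fall out directly. A small sketch, with invented timestamps:

```python
from datetime import datetime

# Hypothetical acq_time values for two scans within one session.
acq_times = ["2022-10-14T09:05:00", "2022-10-14T16:40:00"]
parsed = [datetime.fromisoformat(t) for t in acq_times]

# Time of day in hours, usable as a circadian covariate.
tod_hours = [t.hour + t.minute / 60 + t.second / 3600 for t in parsed]

# Elapsed time within the session relative to the first scan,
# in minutes, usable for time-on-task analyses.
elapsed_min = [(t - parsed[0]).total_seconds() / 60 for t in parsed]
```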
