
Spike2IO cannot read smrx recordings #424

Closed
whbdupree opened this issue Oct 26, 2017 · 16 comments

@whbdupree

I am unable to read recordings in the .smrx format

@whbdupree
Author

In [1]: from neo.io import Spike2IO

In [2]: r = Spike2IO(filename='test.smrx')

In [3]: x = r.read(cascade=True, lazy=False)

KeyError Traceback (most recent call last)
in ()
----> 1 x=r.read(cascade=True,lazy=False)

/usr/lib/python3.6/site-packages/neo/io/baseio.py in read(self, lazy, cascade, **kargs)
119 if not cascade:
120 return bl
--> 121 seg = self.read_segment(lazy=lazy, cascade=cascade, **kargs)
122 bl.segments.append(seg)
123 bl.create_many_to_one_relationship()

/usr/lib/python3.6/site-packages/neo/io/spike2io.py in read_segment(self, take_ideal_sampling_rate, lazy, cascade)
90 """
91
---> 92 header = self.read_header(filename=self.filename)
93
94 # ~ print header

/usr/lib/python3.6/site-packages/neo/io/spike2io.py in read_header(self, filename)
195 channelHeader += HeaderReader(fid, np.dtype(dt))
196
--> 197 channelHeader.type = dict_kind[channelHeader.kind]
198 #~ print i, channelHeader
199 channelHeaders.append(channelHeader)

KeyError: 215
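The KeyError arises because the old 32-bit smr parser reads the smrx header with the wrong layout, so the channel `kind` field lands on a garbage byte (215 is not a valid kind code). A minimal guard like the sketch below would fail fast with a clear message instead; the helper name and the supported-extension tuple are hypothetical, not part of neo's API:

```python
import os

def check_spike2_extension(filename, supported=('.smr',)):
    """Raise early with a clear message instead of a confusing
    KeyError when the reader is handed a format it cannot parse.
    Hypothetical helper; '.smr' as the only supported extension
    reflects the state of Spike2IO at the time of this thread."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in supported:
        raise ValueError(
            f"Spike2IO only supports {supported}; got {ext!r} "
            "(the 64-bit .smrx format has a different layout)")
    return filename
```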

@samuelgarcia
Contributor

Yes. Only smr files are supported.
I don't know how different smr and smrx are. If it is only a 32-bit to 64-bit conversion, this should be easy to adapt. But if smrx is a completely new file format, then it is a piece of work.

Do you have small files you could share, and a specification of the format?

@whbdupree
Author

whbdupree commented Oct 27, 2017 via email

@samuelgarcia
Contributor

If not too difficult, this snippet should be as small as possible and should contain all possible object types (signals, spikes, events, epochs...). Thanks.

@whbdupree
Author

whbdupree commented Oct 27, 2017 via email

@apdavison apdavison added IO and removed IO labels Jan 9, 2018
@samuelgarcia samuelgarcia added this to the 0.7.0 milestone Feb 16, 2018
@sLopezdeDiego

Do any of you have a specification of the smrx file format? I would like to add support for this, but I have not been able to find any documentation for smrx.

@samuelgarcia
Contributor

I still do not have these specifications.
Maybe a CED customer could ask for them directly?

@apdavison apdavison modified the milestones: 0.7.0, 0.8.0 Nov 15, 2018
@apdavison apdavison modified the milestones: 0.8.0, 0.9.0 Jul 23, 2019
@neuralcircuitsTV

neuralcircuitsTV commented Apr 14, 2020

It would be very useful to be able to read .smrx files, as these are now standard for Spike2 data acquisition. I can provide a file if helpful. This is taken from the Spike2 v9 help files:

The 64-bit filing system (*.smrx)

The new 64-bit filing system was designed to be logically compatible with the 32-bit system; by this we mean that it can hold the same types of data as the original without information loss, though some of these types are extended. It is also likely that we will add further data types, as required, in the future. Features include:

1. Times are stored as 64-bit integers. At a time resolution of 1 ns (nanosecond, 10⁻⁹ seconds), the maximum duration is around 256 years.

2. The file size is limited only by the capabilities of the operating system and by the size of a file that you can manage conveniently for archival. As I write this, disk drives have a maximum size of a few TB. The file format is designed to make recovery of damaged files relatively straightforward.

3. The files have a built-in data lookup system designed to minimise the number of disk reads required to locate data on any channel.

4. The overhead for severely fragmented waveform data has been reduced to a few bytes per fragment.

We have removed many of the limits on things like the number of channels in a file and the length of channel comments and units. Before Spike2 version 8.03, the program enforced the original limits on string lengths (but treated them as Unicode characters). From version 8.03 onwards, a 64-bit smrx file in Spike2 can have 20 Unicode characters of channel title, 10 of channel units, and comments of up to 100 characters. The underlying file format does not impose a fixed limit, but practical considerations make it useful to define them.
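The "around 256 years" figure in point 1 can be sanity-checked: a full signed 64-bit tick count at 1 ns per tick actually covers about 292 years, so the quoted figure presumably reserves some headroom. This is a back-of-the-envelope check, not taken from the Spike2 documentation:

```python
# Maximum positive value of a signed 64-bit integer, counted in 1 ns ticks.
max_ticks = 2**63 - 1
seconds = max_ticks * 1e-9                 # ~9.22e9 seconds
years = seconds / (365.25 * 24 * 3600)     # Julian years
print(round(years))                        # → 292
```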

@samuelgarcia
Contributor

Dear Tim,
I am the original contributor of Spike2IO.

I agree. This smrx format has to be incorporated into neo.

To develop a new format, a developer needs:

  • a full specification of the format
  • a small file for the test chain
  • time
  • help
  • a personal need for the format

At the moment, I don't have any of these.
But maybe, if you have time, we could collaborate on this format.

Cheers

Samuel

@neuralcircuitsTV

Dear Samuel
Thanks, this would be great, and I know researchers who would benefit from being able to directly import smrx files. In case it helps, I have put a short example data file in a repo, lfp-juxta. Happy to collaborate. Best, Tim

@apdavison apdavison modified the milestones: 0.9.0, 0.10.0 Jul 7, 2020
@bendichter
Contributor

bendichter commented Aug 19, 2020

@samuelgarcia

We'd like to offer conversion from smrx to NWB, and I think the best way to offer this would be:
SMRX -> NEO -> SpikeExtractors -> NWB

I was able to find some code to do this in MATLAB as part of the Brainstorm package here in case that is useful.

I found a link for Python code here but the form threw an error. It also says this code only works on Windows, which I know won't work for many NWB users, but at least it's a start.

edit: The link is now fixed, and leads to a .exe which I cannot open on OSX Catalina using wine. I'm guessing it's 32-bit

@samuelgarcia
Contributor

Hi Ben.

Two independent answers:

  • For pure file conversion, in my mind the chain "SMRX -> NEO -> SpikeExtractors -> NWB" is not optimal. I think
    "SMRX -> NEO -> NWB" would be better; the neo API is really more sophisticated than the spikeextractors layer.
    Neo handles multi-segment data (aka pauses in recording), multiple sampling rates, annotations, array annotations, epochs, events, ....
    The spikeinterface IO layer was designed with a single goal: spike sorting. The neo layer is more universal for handling datasets.
    The central core/strength of neo is handling datasets with relationships between objects. This is crucial to cover all dataset
    conversion details.

  • For smrx, as I said in April I need:

    • a full specification of the format >>> maybe the code you mention can help with this
    • a small file for the test chain >>> if you work on this conversion, maybe you can provide a small file here
    • time >>> I am back from vacation, so I think it is OK now.
    • help >>> could you help me with this?
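The "relationships between objects" point can be illustrated with a toy sketch. These are hypothetical classes, not neo's real `Block`/`Segment` definitions (the real ones carry many more attributes and container types); the sketch only shows the many-to-one parent/child links the comment describes:

```python
# Toy sketch of neo-style container relationships (hypothetical classes,
# not the real neo API): a Block owns Segments, and each Segment keeps a
# back-reference to its parent Block, so conversion code can walk the
# hierarchy in either direction.
class Segment:
    def __init__(self, name):
        self.name = name
        self.block = None          # filled in when attached to a Block
        self.analogsignals = []

class Block:
    def __init__(self, name):
        self.name = name
        self.segments = []

    def attach(self, segment):
        segment.block = self       # many-to-one back-reference
        self.segments.append(segment)

block = Block("recording")
for i in range(2):                 # e.g. a pause splits the data in two
    block.attach(Segment(f"segment {i}"))

assert all(seg.block is block for seg in block.segments)
```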

Cheers

@bendichter
Contributor

@samuelgarcia

I agree, fewer steps in the conversion is better. I haven't been able to try out #796 yet, but if that works for this use-case I agree that would be better, particularly since it supports more flexibility and types of data than SpikeInterface.

That being said, we also want to use SpikeInterface spike sorting pipelines on SMRX data, so we'd need some path that connects SMRX to SpikeExtractors regardless of whether it is our preferred conversion strategy.

I've been working on the requirements you mentioned:

  • full specification of the format - I have not been able to find this and I think it is unlikely that we will be able to get it. We have been able to get a pre-compiled Python API with some example python code for usage of that API, but even the API itself is not open. How does this map onto the scope of NEO? Do you support reading of closed formats with provided API libraries? I'm not crazy about it myself.
  • For example data, I found this file here, but I can't verify its quality
  • For time and help, we are working on a contract that would allow us to direct some person-hours towards this problem. Will keep you updated as this develops

@bendichter
Contributor

@samuelgarcia CED just released a pip-installable API: https://pypi.org/project/sonpy/

@samuelgarcia
Contributor

Closed by #987

6 participants