If I want to read a lot of MDF files and concatenate them together, is there an easy way to save memory / calculate out of core? #562
shangrilaer started this conversation in Ideas

Hello, I want to read a lot of MDF files recorded during road tests by several colleagues, but I ran into an out-of-memory problem.
I want to build one big DataFrame for statistics, so I can make histograms and compute things like total distance and total driving time.
I ran into these problems:
1. I need to resample the different files to a common raster; asammdf helps here and is fast enough.
2. I want to join them into one big DataFrame, but I hit an out-of-memory error. Big files or not, reading costs memory, concatenating costs memory, and joining many MDF files into one big DataFrame costs a lot. Does anyone have good ideas for this? In MATLAB I could use a tall array, but I don't know how to do the equivalent in Python (one possibility is sketched below).
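For illustration, a minimal sketch of one tall-array-style approach: process the files one at a time and accumulate only the statistics (histogram counts, running totals), so the full concatenated DataFrame never has to exist in memory. The file paths, channel name, raster, and bin edges below are all placeholder assumptions.

```python
import numpy as np
from asammdf import MDF

files = ["run_001.mf4", "run_002.mf4"]  # placeholder paths
channel = "VehicleSpeed"                # placeholder channel name, assumed km/h
raster = 0.1                            # common raster in seconds (assumed)
bins = np.linspace(0.0, 250.0, 51)      # histogram bin edges (assumed)

counts = np.zeros(len(bins) - 1)
total_time = 0.0      # seconds
total_distance = 0.0  # meters

for path in files:
    # Only one file is held in memory at a time.
    mdf = MDF(path).filter([channel])
    df = mdf.resample(raster).to_dataframe()
    speed = df[channel].to_numpy()

    counts += np.histogram(speed, bins=bins)[0]
    total_time += len(speed) * raster
    total_distance += np.sum(speed / 3.6) * raster  # km/h -> m/s, then integrate

print(f"{total_time:.0f} s driven, {total_distance / 1000:.1f} km total")
```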
-
The last time I had to glue hundreds of measurements together, I first filtered out the relevant signals before calling concatenate(). Of course this only helps if you need just a small subset of the measured values for your analysis. It might also be a good idea to check that you are using a 64-bit version of Python.
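A minimal sketch of that approach, assuming placeholder file paths and channel names:

```python
from asammdf import MDF

files = ["meas_001.mf4", "meas_002.mf4"]  # placeholder paths
wanted = ["EngineSpeed", "VehicleSpeed"]  # placeholder channel names

# Drop everything except the needed signals before merging, so the
# concatenated object only ever contains the small subset.
filtered = [MDF(f).filter(wanted) for f in files]
merged = MDF.concatenate(filtered)
df = merged.to_dataframe()
```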
-
You should definitely use the channel filtering at load time to speed up the process (see https://asammdf.readthedocs.io/en/latest/tips.html)
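A sketch of load-time filtering; recent asammdf releases accept a channels argument in the MDF constructor (check the tips page linked above for your version), and the file paths and channel names here are placeholders:

```python
from asammdf import MDF

wanted = ["EngineSpeed", "VehicleSpeed"]  # placeholder channel names

# Only the listed channels are loaded from disk, which is faster and
# lighter than loading the full file and filtering afterwards.
mdf = MDF("meas_001.mf4", channels=wanted)
df = mdf.to_dataframe()
```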