Timeseries annotation: extractSpreadsheet very slow in col-based sheets #187

Open
majidghgol opened this issue Jan 12, 2018 · 0 comments
@majidghgol (Collaborator) commented:
The sample file for the Earthquakes data (issue 175) has about 8000 rows and 20 columns. I wrote a very simple mapping spec to extract only the magnitude of the earthquakes (one column). However, I noticed that every iteration in parse_col_ts takes about 1 s, so the full extraction would need more than 2 hours to finish. I am not sure if this is specific to my machine, but as far as I know pyexcel is much more efficient at reading Excel files row by row, so the slowdown may be due to the fact that this function reads the spreadsheet column by column. A rough row-wise workaround is sketched after the mapping spec below.

The mapping spec I am using:

[
	{
		"TimeSeriesRegions":
		[
			{
				"orientation": "col",
				"locs": "[2:8498]",
				"cols": "[E:E]",
				"metadata": [
					{
						"orientation": "row",
						"name": "data_label",
						"loc": 1
					}
				],
				"times": {
					"locs": "[A]"
				}
			}
		],
		"Metadata":
		[

		]
	}
]
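For reference, here is a minimal sketch of the row-wise workaround I have in mind. It is not the project's parse_col_ts; it just loads the sheet once with pyexcel.get_array and slices the target column out of the in-memory rows. The file name, column index, and row bounds are placeholders standing in for the Earthquakes sample.

import pyexcel

def read_column(file_name, col_index, start_row, end_row):
    # Load the whole sheet in a single row-major pass instead of
    # touching the file once per column access.
    rows = pyexcel.get_array(file_name=file_name)
    # Slice the requested row range, then pull the single column
    # out of the in-memory array.
    return [row[col_index] for row in rows[start_row:end_row]]

# Example: column E (index 4), rows 2..8498 of the sample file.
magnitudes = read_column("earthquakes.xlsx", 4, 1, 8498)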