Automatically generate descriptive CSVW (CSV on the Web) metadata for tabular data files:
- Extract column datatypes: detect whether columns are categorical, and which values they accept, using ydata-profiling.
- Ontology mappings: when given the URL of an OWL ontology, text embeddings are generated for all of its classes and properties and stored in a local Qdrant vector database; similarity search then matches each data column to the most relevant ontology terms.
- Currently supports: CSV, Excel, and SPSS files. Any format that can be loaded into a Pandas DataFrame could easily be added; create an issue on GitHub to request a new format.
- Processed files need to contain a single sheet; if multiple sheets are present in a file, only the first one is processed (see the pandas sketch below for splitting multi-sheet workbooks).
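Since only the first sheet of a workbook is profiled, you may want to split multi-sheet files beforehand. A minimal pandas sketch (the file names are hypothetical):

```python
import pandas as pd

# Hypothetical workbook: sheet_name=None loads every sheet into a dict of DataFrames
sheets = pd.read_excel("tests/resources/workbook.xlsx", sheet_name=None)
for name, df in sheets.items():
    # Write each sheet to its own single-sheet file that can then be profiled
    df.to_csv(f"tests/resources/workbook_{name}.csv", index=False)
```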
> **Warning**: the library does not yet check whether the vector database has been fully loaded. Loading is skipped whenever the database already contains at least 2 vectors, so if you stop the loading process halfway through, you will need to delete the vector database folder to make sure the library reloads the ontology.
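For example, with the default vector store path (adapt this if you pass a custom path with `-d`, described below):

```bash
rm -rf data/vectordb
```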
This package requires Python >=3.8. Simply install it with:
```bash
pip install git+https://github.com/vemonet/csvw-ontomap.git
```
After installing csvw-ontomap with pip, you can easily use it from your terminal:

```bash
csvw-ontomap tests/resources/*.csv
```
Store the CSVW metadata report (JSON-LD) to a file:

```bash
csvw-ontomap tests/resources/*.csv -o csvw-report.json
```
Store the CSVW metadata report as a CSV file:

```bash
csvw-ontomap tests/resources/*.csv -o csvw-report.csv
```
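To inspect the JSON-LD report programmatically, here is a short sketch that walks the standard CSVW structure (the field names follow the CSVW spec; adjust them if the generated report differs):

```python
import json

with open("csvw-report.json") as f:
    report = json.load(f)

# Standard CSVW layout: tables -> tableSchema -> columns
for table in report.get("tables", []):
    print(table.get("url"))
    for col in table.get("tableSchema", {}).get("columns", []):
        print(f"  {col.get('titles')}: {col.get('datatype')}")
```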
Provide the URL to an OWL ontology that will be used to map the column names:

```bash
csvw-ontomap tests/resources/*.csv -m https://semanticscience.org/ontology/sio.owl
```
Specify the path where the vectors are stored (default: `data/vectordb`):

```bash
csvw-ontomap tests/resources/*.csv -m https://semanticscience.org/ontology/sio.owl -d data/vectordb
```
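For intuition on what happens under the hood, here is a minimal, self-contained sketch of embedding-based column-to-ontology matching using sentence-transformers and cosine similarity. It is illustrative only: the model name and labels are assumptions, and csvw-ontomap itself stores its vectors in Qdrant rather than comparing them in memory.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed model for illustration, not necessarily what csvw-ontomap uses
model = SentenceTransformer("all-MiniLM-L6-v2")

# In csvw-ontomap these labels come from the OWL ontology's classes and properties
ontology_labels = ["body height", "body weight", "date of birth"]
column_name = "patient_height_cm"

# With normalized embeddings, cosine similarity is a plain dot product
label_vecs = model.encode(ontology_labels, normalize_embeddings=True)
col_vec = model.encode([column_name], normalize_embeddings=True)[0]
scores = label_vecs @ col_vec

best = int(np.argmax(scores))
print(f"Best match: {ontology_labels[best]} ({scores[best]:.2f})")
```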
Use this package in Python scripts:

```python
import json

from csvw_ontomap import CsvwProfiler, OntomapConfig

profiler = CsvwProfiler(
    ontologies=["https://semanticscience.org/ontology/sio.owl"],
    vectordb_path="data/vectordb",
    config=OntomapConfig(  # Optional
        comment_best_matches=3,  # Add the top ontology matches as a comment
        search_threshold=0,  # Between 0 and 1
    ),
)
csvw_report = profiler.profile_files([
    "tests/resources/*.csv",
    "tests/resources/*.xlsx",
    "tests/resources/*.spss",
])
print(json.dumps(csvw_report, indent=2))
```
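Continuing the script above, you can write the report to disk instead of printing it (the output file name is just an example):

```python
with open("csvw-report.json", "w") as f:
    json.dump(csvw_report, f, indent=2)
```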
The final section of the README covers running the package in development and getting involved by making a code contribution.
Clone the repository:
```bash
git clone https://github.com/vemonet/csvw-ontomap
cd csvw-ontomap
```
Install Hatch; it automatically handles virtual environments and makes sure all dependencies are installed when you run a script in the project:
```bash
pipx install hatch
```
Make sure the existing tests still work by running the test suite and linting checks. Note that any pull request to the csvw-ontomap repository on GitHub will automatically trigger the test suite:
```bash
hatch run test
```
To display all logs when debugging:
```bash
hatch run test -s
```
If you are facing issues with dependencies not updating properly, you can easily reset the virtual environment with:

```bash
hatch env prune
```
Manually trigger installing the dependencies in a local virtual environment:

```bash
hatch -v env create
```
The deployment of new releases is done automatically by a GitHub Actions workflow when a new release is created on GitHub. To release a new version:

- Make sure the `PYPI_TOKEN` secret has been defined in the GitHub repository (in Settings > Secrets > Actions). You can get an API token from PyPI at pypi.org/manage/account.
- Increment the `version` number in the `pyproject.toml` file in the root folder of the repository:

  ```bash
  hatch version fix
  ```

- Create a new release on GitHub, which will automatically trigger the publish workflow and publish the new release to PyPI.
You can also do it locally:
```bash
hatch build
hatch publish
```