- Added support for uploading parquet files (create and append)
- Added dependency pyarrow for handling parquet export
- Added requirements to setup.cfg
- Added `dtype` argument, so users can specify their own SQLAlchemy dtype for certain columns (see the dtype sketch after this list)
- Removed the required environment variables `ls_sql_name` and `ls_blob_name`, since these names are standard
- Pinned requirements to specific versions
- Always create linked services for blob and SQL, so the user can switch source blob storages and sink databases more easily
- Use the environment variable AZURE_STORAGE_CONNECTION_STRING for parquet uploads (see the upload sketch after this list)
- If the parquet upload is a single file, place it in the root of the folder
- Add upsert for parquet files (see the MERGE sketch after this list)
- Set logging level to debug for queries
- Fix bugs and add requirements checks
- Fix bug with the staging schema during upsert
- Add support for all pandas integer dtypes
- Add customisable container name
- Upgrade dependency packages
- Fix failing pipeline because of removed staging schema
- Add more dtypes
- Upgrade package version
- Fix bug when dataframe is empty
- Fix bug with categorical dtype
- Make the pyodbc driver dynamic (see the driver-selection sketch after this list)
- Upgrade packages and set minimal versions
- Fix code to work with upgraded packages
- Export to parquet on storage instead of CSV
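
For the `dtype` argument above, here is a minimal sketch of the idea, demonstrated with pandas' own `to_sql`, whose `dtype` parameter takes the same kind of mapping (column name to SQLAlchemy type). The package's actual function name and signature may differ; the in-memory SQLite engine is a stand-in for Azure SQL.

```python
# Sketch of a per-column SQLAlchemy dtype mapping, shown via
# pandas.DataFrame.to_sql. Engine and table names are placeholders.
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import NVARCHAR, Numeric

engine = create_engine("sqlite://")  # in-memory stand-in for Azure SQL

df = pd.DataFrame({"name": ["widget", "gadget"], "price": [1.25, 2.5]})

df.to_sql(
    "products",
    engine,
    if_exists="replace",
    index=False,
    dtype={"name": NVARCHAR(50), "price": Numeric(18, 2)},  # per-column overrides
)
```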
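The parquet upload entries plausibly boil down to serializing the dataframe with pyarrow and pushing it to blob storage using the connection string from the environment. A hedged sketch follows; the container and blob names are made up, and the package's internals may differ.

```python
# Serialize a dataframe to parquet in memory with pyarrow, then upload it
# to Azure Blob Storage using AZURE_STORAGE_CONNECTION_STRING.
import io
import os

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from azure.storage.blob import BlobServiceClient

df = pd.DataFrame({"id": [1, 2], "value": [10.0, 20.0]})

buf = io.BytesIO()
pq.write_table(pa.Table.from_pandas(df), buf)
buf.seek(0)

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
# A single file goes in the root of the (hypothetical) container.
blob = service.get_blob_client(container="parquet", blob="mytable.parquet")
blob.upload_blob(buf, overwrite=True)
```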
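For the parquet upsert, a hedged sketch of the usual staging-plus-MERGE pattern on Azure SQL; whether the package does exactly this is an assumption, and the connection string, schema, table, and column names are all hypothetical.

```python
# Typical upsert pattern: bulk-load into a staging schema, then MERGE into
# the target table on its key column. All names below are placeholders.
from sqlalchemy import create_engine, text

engine = create_engine(
    "mssql+pyodbc://user:pass@myserver.database.windows.net/mydb"
    "?driver=ODBC+Driver+18+for+SQL+Server"
)

merge_sql = text("""
MERGE INTO dbo.products AS tgt
USING staging.products AS src
    ON tgt.id = src.id
WHEN MATCHED THEN
    UPDATE SET tgt.name = src.name, tgt.price = src.price
WHEN NOT MATCHED THEN
    INSERT (id, name, price) VALUES (src.id, src.name, src.price);
""")

with engine.begin() as conn:
    conn.execute(merge_sql)
```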
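And for the dynamic pyodbc driver, one way to pick the newest installed SQL Server driver instead of hardcoding one. Only `pyodbc.drivers()` is the real API here; the selection logic is an assumption about how the package chooses.

```python
# Pick the highest-numbered "ODBC Driver N for SQL Server" installed on
# this machine, rather than hardcoding e.g. "ODBC Driver 17 for SQL Server".
import re

import pyodbc


def get_sql_server_driver() -> str:
    """Return the newest installed SQL Server ODBC driver name."""
    drivers = [d for d in pyodbc.drivers() if "SQL Server" in d]
    if not drivers:
        raise RuntimeError("no SQL Server ODBC driver installed")

    def version(name: str) -> int:
        match = re.search(r"ODBC Driver (\d+)", name)
        return int(match.group(1)) if match else -1

    return max(drivers, key=version)


print(get_sql_server_driver())  # e.g. "ODBC Driver 18 for SQL Server"
```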