
# Versioning research

Juan Luis Cano Rodríguez edited this page Oct 23, 2024 · 1 revision

Related issue: https://github.com/kedro-org/kedro/issues/4129

## What Should We Do Next?

The areas of focus and their suggested actions:
**Integration with other leading tools**

Integration with Established Tools and Interoperability: This research explores how Kedro can integrate with existing tools to manage complexity rather than reinventing the wheel. Kedro should prioritize interoperability with other solutions, leveraging industry-standard tools to enhance its capabilities.

Integration with Leading Tools: Consider integrating with leading tools such as Delta Lake, Apache Hudi, and Apache Iceberg for data management, Git for code versioning, and MLflow and DVC for model versioning. Users report that Kedro's dataset versioning faces compatibility issues on platforms such as Databricks and Palantir Foundry, reducing its versatility and leading to redundancy with more mature platforms. Refer to the market research for insights on how other tools support versioning of data, code, and models.

Alignment with Data Lakehouse Concepts: The industry's enthusiasm for the data lakehouse concept, which includes features like versioning and time travel, doesn't fully align with Kedro's current design, creating challenges for integration and complementarity.
**Versioning Method 1: Explore individual artefact versioning**

Granular Versioning Solutions for Different Data Types: To improve versioning, consider granular solutions tailored to each data type: code, models, tabular data, semi-structured data, and unstructured data. This approach could offer advanced features, in contrast to Kedro's current method, which indiscriminately creates new copies without understanding the data being versioned. Full reproducibility requires capturing the exact code, data, and parameters used in a run; Kedro's current solution fails to capture all parameters and the state of the code, and cannot guarantee consistent upstream data, making full reproducibility and experiment tracking challenging.

Interaction Between Code and Data Versions: Consider how code and data versions interact, potentially creating non-linear branches. This would enable better tracing and auditing by identifying which code version produced which data version, and allow branching from specific points in time, thus addressing the multidimensional aspects of versioning.

- Refer to this Miro board for versioning of the various artefacts.
**Versioning Method 2: Explore versioning the entire pipeline to avoid duplicating massive files**

- The goal is to integrate comprehensive versioning, potentially tying it into experiment tracking: version everything within the Kedro pipeline that might change, rather than focusing solely on individual elements like parameters or catalog settings.

- For example, the PMPx team implemented GitHub-based versioning in Kedro to track entire pipelines, leveraging GitHub branches for comprehensive version control. Git improves versioning efficiency by tracking and storing only the changes made, rather than creating complete copies of files, saving significant storage space.
**Identification**

Use of Unique ID for Versioning: Consider using a unique identifier instead of a DateTime format for versioning. Although timestamps have advantages, particularly on file systems, they can be problematic, and displaying numerous parameters in a table is impractical. Users report that the DateTime format currently used for versioning in Kedro is hard to manage, making it difficult to reference versions across different software and programming languages.

Single Number Version Tracking: Users need a single version number that maps to the corresponding versions of the model, data, and code. This approach simplifies tracking and ensures compatibility, eliminating the complexity of managing multiple version numbers.

Customized Version Names: Consider allowing users to set up customized version names, such as incorporating specific parameters.
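The single-number and unique-ID ideas above can be sketched in plain Python. This is a hedged illustration only: the helper names and the record layout are hypothetical, not part of Kedro's API.

```python
import uuid
from datetime import datetime, timezone


def new_run_id() -> str:
    """Generate a sortable, collision-resistant version identifier.

    A UTC timestamp keeps directory listings chronological, while a short
    random suffix avoids clashes when two runs start in the same second.
    (Hypothetical helper, not current Kedro behaviour.)
    """
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H.%M.%S")
    return f"{ts}-{uuid.uuid4().hex[:8]}"


def version_record(run_id: str, code_sha: str, data_version: str, model_version: str) -> dict:
    """Map one run ID to the code, data, and model versions it used."""
    return {"run_id": run_id, "code": code_sha, "data": data_version, "model": model_version}
```

A single `run_id` like this gives users the "one number that maps to model, data, and code" requested above, while remaining sortable like today's timestamps.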
**Storage, Logging, & Retrieval**

Centralized Session Store: Consider storing logs and versioning information in a centralized session store to ensure easy access and reference.

Automatic Logging: Implement automatic logging of key parameters and metrics with each version to maintain a complete historical context.

Detailed Metadata Logging: Include detailed metadata with each version, such as data size and key parameters, to provide a comprehensive record.

Maintain Historical Files: Consider keeping all historical versions of files with attributes for easy lookup, without needing additional functions.

- Refer to this Miro board for the user journey on Kedro versioning.
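The automatic and detailed metadata logging suggested above could look something like this minimal sketch, where a small JSON record is written next to each saved version. The helper name and record fields are hypothetical, not an existing Kedro feature.

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def write_version_metadata(version_dir: str, params: dict, data_size_bytes: int) -> Path:
    """Write a metadata record alongside a saved dataset version.

    Captures key parameters and the data size so each version carries its
    own historical context and can be looked up without extra tooling.
    (Hypothetical helper illustrating the suggestion, not Kedro API.)
    """
    record = {
        "saved_at": datetime.now(timezone.utc).isoformat(),
        "params": params,
        "data_size_bytes": data_size_bytes,
    }
    path = Path(version_dir) / "metadata.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(record, indent=2))
    return path
```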
**Documentation**

Enhanced Documentation and Use Cases: Documentation with example use cases and usage patterns would give users detailed guidance, more control, and options to maximize the tool's value.

Clear Documentation: Document the changes made in each version, including any parameter adjustments or data modifications.
**Accessibility**

API Access for Versioning in Kedro: Making versioning easily searchable and accessible via an API would allow other applications to build on Kedro and leverage its versioning information.

**Collaborative (Sharing)**

Versioning in Managed Analytics: Ensure multiple users can easily access versioned outcomes, avoid local-machine conflicts, and use platforms like GitHub for effective collaborative versioning.

## Priority matrix (Miro board)

## Artefacts: What to Track?

Reproducing runs in Kedro is challenging due to incomplete capture of code, parameters, and data, hindering full reproducibility. Granular versioning across data types could improve this, despite Kedro's limitations. Miro link: https://miro.com/app/board/uXjVK9U8mVo=/?moveToWidget=3458764597910279898&cot=14

## User journey

Miro link: https://miro.com/app/board/uXjVK9U8mVo=/?moveToWidget=3458764596155374065&cot=14

## Data

From the user interviews: data versioning means tracking and managing different versions of datasets over time, so that results stay consistent even when the code is unchanged. It typically involves handling large tables and unstructured data by storing snapshots or slices at specific points in time, enabling historical analysis. Unstructured data is often versioned by copying it wholesale, semi-structured data may require specialized algorithms, and large datasets demand careful management due to their complexity.

## Pain points: Data

| Pain point | Opportunity |
| --- | --- |
| **1. Inconsistency of upstream data in pipeline runs:** A significant issue is the lack of guarantee that pipeline input data remains consistent across runs due to changes in upstream systems, making it nearly impossible to snapshot all states in a lightweight, deployable framework. | — |
| **2. Excessive and redundant dataset versioning:** Frequent pipeline runs generate numerous, often unnecessary, dataset versions, leading to excessive storage use and a cluttered version history that is difficult to manage and navigate. | Introduce, for example, a `--disable-versioning` flag in Kedro's CLI to prevent unnecessary version creation, tag important outputs, and simplify storage-engine selection with compatible options like Apache Hudi or Delta Lake. |
| **3. Challenges in retrieving and managing data versions in Kedro and Jupyter notebooks:** Engineers face difficulties retrieving specific dataset versions in Jupyter and Kedro, often requiring manual inspection and custom logic to locate and load the desired timestamps, making the process cumbersome and time-consuming. | Implement a feature to load dataset versions by order, or to automatically load the most recent version (e.g. `catalog.load("df", version="last")`), enhancing workflow efficiency and making version management intuitive. |
| **4. Difficulty with transcoding:** Users face challenges with Kedro's versioning during transcoding, leading to failures. They struggle to retrieve recent dataset timestamps easily, requiring manual AWS checks and adding unnecessary pipeline steps. | — |
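The `version="last"` idea could be resolved against Kedro's existing on-disk layout, where each versioned dataset is stored as `<dataset_dir>/<timestamp>/<filename>`. A minimal sketch of that resolution step (hypothetical helper, not current Kedro API):

```python
from pathlib import Path


def resolve_version(dataset_dir: str, version: str = "last") -> str:
    """Pick a concrete version folder for a Kedro-style versioned dataset.

    ISO-like timestamps sort lexicographically, so "last" is simply the
    maximum directory name; an explicit timestamp is returned unchanged.
    """
    versions = sorted(p.name for p in Path(dataset_dir).iterdir() if p.is_dir())
    if not versions:
        raise FileNotFoundError(f"no versions found under {dataset_dir}")
    return versions[-1] if version == "last" else version
```

This avoids the manual inspection described in row 3: instead of listing S3/AWS paths by hand, a user would ask for `"last"` and get the newest timestamp.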

### Extra insights

**Usage of versioned datasets is low**

- There seems to be a very low prevalence of `versioned: true` datasets in open-source projects.

https://github.com/kedro-org/kedro/network/dependents shows 2,439 repositories, and this query shows 154 files. That is an upper bound of roughly ~6% of open repositories using versioned datasets, before even discarding those that are mostly a copy-paste of the spaceflights tutorial.

- There seems to be a very low prevalence of `--load-versions` in our telemetry.

Out of 3,537,184 total `kedro run` commands, only 1,644 included `--load-versions` (~0.05%).
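The two prevalence figures follow directly from the raw counts quoted above:

```python
# Raw counts from the text: GitHub dependents vs. versioned-dataset files,
# and total `kedro run` invocations vs. those passing --load-versions.
repos_total, repos_versioned = 2_439, 154
runs_total, runs_with_flag = 3_537_184, 1_644

repo_share = 100 * repos_versioned / repos_total  # upper bound, about 6.3%
flag_share = 100 * runs_with_flag / runs_total    # about 0.05%
```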

SQL query:

```sql
SELECT
  COUNT(*)
FROM HEAP_FRAMEWORK_VIZ_PRODUCTION.HEAP.ANY_COMMAND_RUN
WHERE
  COMMAND LIKE 'kedro run %'
  AND COMMAND LIKE '%--load-version%'
```

**Interest in versioned datasets in our support channels is low**

https://linen-slack.kedro.org/?threads%5Bquery%5D=%22versioned%3A%20true%22 shows 35 results. It's difficult to assess how many of these are "questions" (~threads), but for reference, "dataset" yields 877 results, "plugin" 270, and "node" 731; searching for "*" gives 3,773 results. Versioning therefore accounts for roughly ~1% of the messages.

**Users are finding workarounds to their pain points within versioning**

For example, https://github.com/kedro-org/kedro/issues/4028#issuecomment-2315318257 states:

> just for your UX research transparency, we now completely moved away from versioning and instead have a RUN_ID env variable that we pick up in globals.yaml and prefix all pipeline paths with that. we found this approach (all data of a version bundled under one path) to be preferable.
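That RUN_ID workaround can be sketched as configuration. This is a hedged illustration: the dataset name and paths are hypothetical, and reading the environment variable in `globals.yml` assumes Kedro's `OmegaConfigLoader` with OmegaConf's `oc.env` resolver available.

```yaml
# conf/base/globals.yml — pick the run identifier up from the environment
run_id: "${oc.env:RUN_ID}"
```

```yaml
# conf/base/catalog.yml — bundle all data for one run under a single path
model_input_table:  # hypothetical dataset name
  type: pandas.ParquetDataset
  filepath: data/${globals:run_id}/model_input_table.parquet
```

Setting `RUN_ID` once per run then prefixes every pipeline path, bundling all of a version's data under one directory, as the comment describes.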

**Feature-rich alternatives exist**

Read @iamelijahko's thorough market analysis in https://github.com/kedro-org/kedro/wiki/Market-research-on-versioning-tools

**Maintenance of versioned datasets delays resolution of unrelated user pain points**

For example, these 5 issues with the Polars datasets are all blocked by how we use fsspec for our versioning:

- https://github.com/kedro-org/kedro-plugins/issues/789
- https://github.com/kedro-org/kedro-plugins/issues/702
- https://github.com/kedro-org/kedro-plugins/issues/625
- https://github.com/kedro-org/kedro-plugins/issues/590
- https://github.com/kedro-org/kedro-plugins/issues/444
