Argo Dataflow has been reimplemented as part of a broader project focused on real-time data processing and analytics. Please check out the new numaflow project.
Dataflow is a Kubernetes-native platform for executing large parallel data-processing pipelines.
Each pipeline is specified as a Kubernetes custom resource consisting of one or more steps that source and sink messages from data sources such as Kafka, NATS Streaming, or HTTP services.
Each step runs zero or more pods and can scale horizontally using HPA, or based on queue length using built-in scaling rules. Steps can be scaled to zero, in which case they periodically and briefly scale to one to measure queue length so they can scale back up.
Learn more about features.
Use cases:

- Real-time "click" analytics
- Anomaly detection
- Fraud detection
- Operational (including IoT) analytics
Install the Python DSL:

```bash
pip install git+https://github.com/argoproj-labs/argo-dataflow#subdirectory=dsls/python
```
```python
from argo_dataflow import cron, pipeline

if __name__ == '__main__':
    (pipeline('hello')
     .namespace('argo-dataflow-system')
     .step(
        (cron('*/3 * * * * *')
         .cat()
         .log())
     )
     .run())
```
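The same DSL style can describe pipelines that read from and write to message brokers such as Kafka. The sketch below is a hedged illustration rather than an example from the project docs: it assumes the DSL exposes a `kafka` source/sink analogous to `cron`, and the topic names `input-topic` and `output-topic` are hypothetical.

```python
# A minimal sketch, assuming the argo_dataflow DSL provides a `kafka`
# source/sink analogous to `cron`; topic names are hypothetical.
from argo_dataflow import kafka, pipeline

if __name__ == '__main__':
    (pipeline('kafka-copy')
     .namespace('argo-dataflow-system')
     .step(
        (kafka('input-topic')      # source: read messages from this topic
         .cat()                    # pass each message through unchanged
         .kafka('output-topic'))   # sink: write messages to this topic
     )
     .run())
```

A step defined this way still maps onto the custom-resource model described above, so the built-in queue-length scaling rules (including scale-to-zero) apply to it as well.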
Read in order:
Beginner:
Intermediate:
- Handlers
- Git usage
- Expression syntax
- Garbage collection
- Scaling
- Command line
- Kubectl
- Events interop
- Workflow interop
- Meta-data
- Idempotence
Advanced: