Merge 2.3.3 dev to business-dev (apache#292)
* [Feature][Connector V2] expose configurable options in Cassandra (apache#3681)

* [Connector-V2][Paimon] Introduce paimon connector (apache#4178)

* [Improve][Zeta] Improve Zeta operation max count and ignore NPE (apache#4787)

* [Improve][Zeta] Add retry to pipeline cancel to avoid cancel failure. (apache#4792)

* [Hotfix][CDC] Fix chunk start/end parameter type error (apache#4777)

The chunk start/end values were incorrectly wrapped as Array&lt;Array&gt;, but only an Array type is required
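
A hypothetical sketch of the shape mismatch described above (the class and values are illustrative, not the actual SeaTunnel CDC code):

```java
public class ChunkBoundaryExample {
    public static void main(String[] args) {
        // Required shape: a single array holding the split boundary value(s).
        Object[] chunkStart = new Object[] {100};

        // The bug wrapped the boundary a second time, yielding Array<Array>.
        Object[] wrongChunkStart = new Object[] {new Object[] {100}};

        System.out.println(chunkStart[0]);      // 100 -- usable as a bind value
        System.out.println(wrongChunkStart[0]); // [Ljava.lang.Object;@... -- wrong type
    }
}
```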

* [Feature][Zeta] Add OSS support for Imap storage to cluster-mode type (apache#4683)

* Add OSS/S3 to cluster-mode type apache#4621

* fixed bug & add e2e test

* Wait for the node to start before scheduling & Move jar to parent pom & optimize writer

* update LICENSE

* [Hotfix][CI] Fix error repository name in ci config files (apache#4795)

* [Feature][Json-format] support read format for pulsar (apache#4111)

* [Improve][Connector-V2][Jdbc-Sink][Doc] Add the generate sink sql par… (apache#4797)

* [Improve][Connector-V2][Jdbc-Sink][Doc] Add the generate sink sql parameter for the jdbc sink document

* [Docs][Connector-V2][Mysql] fix Mysql sink format doc (apache#4800)

* [Hotfix][Connector][Jdbc] Fix sqlserver system table case sensitivity (apache#4806)

* [Hotfix][Connector][Jdbc] Fix reconnect throw close statement exception (apache#4801)

* [Hotfix][Connector-V2][Jdbc] Fix the error of extracting primary key column in sink (apache#4815)

* [Feature][Connector-v2] Add Snowflake Source&Sink connector (apache#4470)


---------

Co-authored-by: Eric <gaojun2048@gmail.com>
Co-authored-by: hailin0 <wanghailin@apache.org>

* [Hotfix][CI] Fix redundant modules run e2e tests when change jdbc module (apache#4824)

* fix pom.xml code style (apache#4836)

* [Chore] Format the .conf file using the same style (apache#4830)

* [Hotfix][Zeta] Fix cpu load problem (apache#4828)

* [Improve][Zeta] Reduce the number of IMAPs used by checkpointIdCounter (apache#4832)

* [Bugfix][connector-v2][rabbitmq] Fix reduplicate ack msg bug and code style (apache#4842)

---------

Co-authored-by: 毕博 <bibo@mafengwo.com>

* [Improve][Zeta] async execute checkpoint trigger and other block method (apache#4846)

* [Improve][Zeta] async execute checkpoint trigger

* [Bug][Zeta] Fix zeta cannot normally recycle thread belong to abnormal tasks

* [Improve][Zeta] Move `restoreState` and `addSplitsBack` to execute via TaskExecuteService

* [Improve][Zeta] Move `receivedReader` to execute via TaskExecuteService

* [Bug][Zeta] Fix task `notifyTaskStatusToMaster` failed when job not running or failed before run (apache#4847)

* [Bug][Zeta] Fix task repeat notify failed when job not running

* [Bug][Zeta] Fix notifyTaskStatusToMaster not release lock and NPE

* [Improve][Zeta] Reduce the frequency of fetching data from imap (apache#4851)

* [Improve][Zeta] Add Metaspace size default value to config file (apache#4848)

* [Improve][Zeta] Speed up listAllJob function (apache#4852)

* [Bug][Zeta] Fix TaskGroupContext always hold classloader so classloader can't recycle (apache#4849)

* [Improve][Zeta] Fix engine runtime error (apache#4850)

* [Hotfix][Zeta] Fix completePendingCheckpoint concurrent action (apache#4854)

This operation does not allow concurrent execution
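
A minimal sketch of one way to serialize the operation, assuming a plain lock-based guard (illustrative only; the actual fix in apache#4854 may differ):

```java
import java.util.concurrent.locks.ReentrantLock;

public class CheckpointCoordinatorSketch {
    private final ReentrantLock completeLock = new ReentrantLock();

    // Guard completePendingCheckpoint so two threads can never run it concurrently.
    public void completePendingCheckpoint(long checkpointId) {
        completeLock.lock();
        try {
            // ... mark the pending checkpoint as completed, ack tasks, etc.
        } finally {
            completeLock.unlock();
        }
    }
}
```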

* [Hotfix][Zeta] Fix master active bug (apache#4855)

* [Bugfix][DAG] Fix the incorrect setting of transform parallelism (apache#4814)

* [Hotfix][Zeta] fix pipeline state not right bug (apache#4823)

* [BUG][Doris] Add a jobId to the doris label to distinguish between tasks (apache#4853)

Co-authored-by: zhouyao <yao.zhou@marketingforce.com>

* [Improve] Add a jobId to the doris label to distinguish between tasks (apache#4839)

Co-authored-by: zhouyao <yao.zhou@marketingforce.com>

* [Hotfix][Zeta] Fix IMap operation timeout bug (apache#4859)

* [Bug][Zeta] Fix restoreComplete Future can't be completed when cancel task (apache#4863)

* [Feature][SQL Transform]Add catalog support for SQL Transform plugin (apache#4819)

* [improve][SelectDB] Add a jobId to the selectDB label to distinguish between tasks (apache#4864)

Co-authored-by: zhouyao <yao.zhou@marketingforce.com>

* [Hotfix][Connector-v2][kafka] Fix the short interval of pull data settings and revise the format (apache#4875)

* [Bug][Connector-V2][Doris] update last checkpoint id when doing snapshot (apache#4881)

* [Hotfix][Zeta] Fix deploy operation timeout but task already finished bug (apache#4867)

* [Core][Docs]Remove incubator in README file (apache#4882)

* [Bugfix][CDC Base] Solving the ConcurrentModificationException caused by snapshotState being modified concurrently. (apache#4877)

* [improve][CDC base] Implement Sample-based Sharding Strategy with Configurable Sampling Rate (apache#4856)

* [Improve][Zeta] Reduce the operation count of imap_running_job_metrics (apache#4861)

* [Bug][Zeta] Fix TaskExecutionService will return not active ExecutionContext (apache#4869)

* [Hotfix][Jdbc] Fix XA DataSource crash(Oracle/Dameng/SqlServer) (apache#4866)

* [Bugfix] [Connector-V2] [File] Fix read temp file (apache#4876)

Co-authored-by: wantao <wantao@inmyshow.com>

* [Bug][Zeta] Fix TaskExecutionService synchronized lock will not release (apache#4886)

* [Improve][Zeta] Move driver into lib directory and change operation count (apache#4845)

* [hotfix][kafka] Fix the problem that the partition information cannot be obtained when kafka is restored (apache#4764)

* [Bugfix][zeta] Fix the deadlock issue with JDBC driver loading (apache#4878)

* [Chore] update 2.3.2 release-note.md (apache#4892)

* [Improve][Connector-V2][Jdbc-Source] Support for Decimal types as split keys (apache#4634)

* [Improve][Connector-V2][Jdbc-Source] Support MySQL bigint(20) used as a partition_column apache#4634

Co-authored-by: zhilinli <lzl15844876351@163.com>

* [Bug][connector-v2][doris] add streamload Content-type for doris URLdecode error (apache#4880)

* [Chore] Change repository name from incubator-seatunnel to seatunnel (apache#4868)


---------

Co-authored-by: Jia Fan <fanjiaeminem@qq.com>

* [Improve][connector-V2-Neo4j]Supports neo4j sink batch write and update docs (apache#4841)

* [Hotfix][connector-v2][e2e] Fix maven scope (apache#4901)

* quick-start-seatunnel-engine.md (apache#4943)

* fix error (apache#4888)

* [Hotfix][Connector-V2][ClickhouseFile] Fix ClickhouseFile write file failed when field value is null (apache#4937)

* Update ClickhouseFileSinkWriter.java

Bug fix: when ClickhouseFileSinkWriter writes to a temporary file, it did not check whether the field value is null, so an exception was thrown.

Modified to write an empty string when a null value is encountered (see the sketch at the end of this entry).

* Update ClickhouseFileSinkWriter.java

repair code style

* Update ClickhouseFileSinkWriter.java

code style

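A minimal sketch of the null handling described in apache#4937 (the method name is an assumption, not the actual ClickhouseFileSinkWriter code):

```java
public class NullSafeWriteSketch {
    // Write an empty string instead of throwing when the field value is null.
    static String toFileValue(Object fieldValue) {
        return fieldValue == null ? "" : fieldValue.toString();
    }

    public static void main(String[] args) {
        System.out.println(toFileValue(null)); // ""
        System.out.println(toFileValue(42));   // "42"
    }
}
```
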
* [Improve][Zeta] Add an interface for batch retrieval of JobMetrics (apache#4576)

* [Improve] Documentation and partial word optimization. (apache#4936)

* code format

* add cdc feature

* fix cdc cannot get driver error

---------

Co-authored-by: gdliu3 <gdliu3@iflytek.com>

* [Doc][Connector-V2] StarRocks `nodeUrls` property name fix (apache#4951)

node_urls -> nodeUrls
node_urls doesn't work

* [Feature][E2E][FtpFile] add ftp file e2e test case (apache#4647)

* [WIP][Feature][Connector-e2e] add ftp e2e test

* Allow the e2e tests to execute by excluding the commons-net jar package.

* Resolve the maven conflict

---------

Co-authored-by: hailin0 <wanghailin@apache.org>

* [Hotfix][Connector-V2][StarRocks] Fix code style (apache#4966)

* [Hotfix][Connector-v2][HbaseSink]Fix default timestamp (apache#4958)

* [Doc]Change the transform website url (apache#4954)

* [Docs][Connector-V2][Http]Reconstruct the Http connector document (apache#4962)

Co-authored-by: chenzy15 <chenzy15@ziroom.com>

* [Feature][connector-v2][mongodb] mongodb support cdc sink (apache#4833)

* [Bug] [zeta][starter]fix bug (apache#4983) (apache#4984)

Co-authored-by: wsstony <tonymao777@163.com>

* fix redis nodes format error. (apache#4981)

Co-authored-by: lightzhao <zhaolianyong777@gmail.com>

* [Improve][CDC]Remove  driver for cdc connector (apache#4952)

* [Hotfix][Connector-V2][Mongodb] Fix document error content and remove redundant code (apache#4982)

Co-authored-by: chenzy15 <chenzy15@ziroom.com>

* [Improve][Connector-V2][OSS-Jindo] Optimize jindo oss connector (apache#4964)

* [Improve][Connector-V2][Jindo-Oss] Optimize jindo-oss connector

* [Improve][Connector-V2][Jindo-Oss] Update module name

* [Hotfix][Connector-V2][StarRocks] Fix code style

* [bugfix] Upgrade the key log output level (apache#4993)

* [Feature][Zeta] Configuration files support user variable replacement (apache#4969)

* [Feature][Transform-V2][SQL] Support 'select *' and 'like' clause for SQL Transform plugin (apache#4991)

Co-authored-by: mcy <rewrma@163.com>

* [Improve][CDC]change driver scope to provider (apache#5002)

* [Hotfix][Connector-V2][Hive] Support user-defined hive-site.xml (apache#4965)

* [Improve][Connector-v2][Mongodb]Optimize reading logic (apache#5001)

Co-authored-by: chenqqq11 <chenzy15@ziroom.com>

* [Feature][Connector-V2][Clickhouse] clickhouse writes with checkpoints (apache#4999)

* [Hotfix][Connector-V2][Mongodb] Compatible with historical parameters (apache#4997)

* Split updated modules integration test for part 4 (apache#5028)

* [Hotfix] Fix the CI Job name error (apache#5032)

* [Feature][CDC] Support disable/enable exactly once for INITIAL (apache#4921)

* [bugfix][zeta] Fixed multi-table job data loss and latency issues (apache#149) (apache#5031)

* [Hotfix][CDC] Fix jdbc connection leak for mysql (apache#5037)

* [Bugfix][zeta] Fix cdc connection does not close (apache#4922)

* Fix XA Transaction bug (apache#5020)

* Set Up with Kubernetes: fix the Dockerfile error in the document when constructing the Docker image (apache#5022)

Co-authored-by: yctan <1417983443@qq.com>

* [Improve][Connector-v2][Mongodb]sink support transaction update/writing (apache#5034)

* fix: HdfsStorage cannot delete the checkpoint file apache#5046 (apache#5054)

* [BugFix] [Connector-V2] [MySQL-CDC] serverId from int to long (apache#5033) (apache#5035)

* [bugfix] change MySQL CDC serverId from int to long (apache#5033)

* style: 🎨 optimize code style

* [Feature][Connector-V2][cdc] Change the time zone to the default time zone (apache#5030)

* [Bugfix][connector-cdc-mysql] Fix listener not released when BinlogClient reuse (apache#5011)

* [Feature][Connector-V2][Jdbc] Add oceanbase dialect factory (apache#4989)


---------

Co-authored-by: silenceland <silenceland23@163.com>
Co-authored-by: changhuyan <877018069@qq.com>

* [HotFix][Zeta] Fix: after the savepoint job is restored, the checkpoint file cannot be generated apache#4985 (apache#5051)

* Fix: after the savepoint job is restored, the checkpoint file cannot be generated

* fix class not found exception (apache#5063)

* [Feature] update action config to support run CI on fork repo (apache#5065)

* [Bugfix] Fix ClickHouse source connector reading Nullable() types as non-null; for example, reading Nullable(Float64) returned 0.0 when the value was null (apache#5080)
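
The symptom matches the classic JDBC pitfall where `ResultSet.getDouble` returns 0.0 for SQL NULL; a minimal sketch of the null-aware read (whether the connector reads via JDBC at this point is an assumption):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

public class NullableReadSketch {
    // getDouble() returns 0.0 for SQL NULL, so the null must be
    // detected explicitly via wasNull() before using the value.
    static Double readNullableFloat64(ResultSet rs, int column) throws SQLException {
        double v = rs.getDouble(column);
        return rs.wasNull() ? null : v;
    }
}
```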

* [Feature][Connector-V2][Clickhouse] Add clickhouse connector time zone key, defaulting to the system time zone (apache#5078)

* Add clickhouse connector time zone key, defaulting to the system time zone

* Modify the document and add clickhouse server_time_zone configuration

* [Chore] Modify repeat des (apache#5088)

Co-authored-by: 80597928 <Lzl@qq.com>

* [Docs] Add Value types in Java to Schema feature (apache#5087)

* [Feature][Connector-V2] JDBC source support string type as partition key (apache#4947)

* [HotFix] Fix code style (apache#5092)

* [Docs][Zeta] Add savepoint doc (apache#5081)

* [Feature][connector-v2][mongodbcdc]Support source mongodb cdc (apache#4923)

* [Improve] Improve savemode api (apache#4767)

* [Doc] Improve DB2 Source Vertica Source & DB2 Sink Vertica Sink document (apache#5102)

* [Improve][Docs][Clickhouse] Reconstruct the clickhouse connector doc (apache#5085)



---------

Co-authored-by: chenzy15 <chenzy15@ziroom.com>

* [Pom]update version to 2.3.3-SNAPSHOT (apache#5043)

* update version to 2.3.3-SNAPSHOT

* update dependency version in known dependencies file

* Add logs to help locate job restore errors after a master active switch

* [Feature][Connector-V2][mysql cdc] Conversion of tinyint(1) to bool is supported (apache#5105)

Co-authored-by: zhouyao <yao.zhou@marketingforce.com>

* [Improve][Zeta] Add sleep for Task to reduce CPU cost (apache#5117)

* [Feature][JDBC Sink] Add DM upsert support (apache#5073)



---------

Co-authored-by: David Zollo <davidzollo365@gmail.com>

* [Hotfix][Connector][Jdbc] Fix the problem of JdbcOutputFormat database connection leak (apache#4802)

[Hotfix][Connector][Jdbc] Fix the problem of JdbcOutputFormat database connection leak

* [Hotfix]Fix mongodb cdc e2e instability (apache#5128)

Co-authored-by: chenzy15 <chenzy15@ziroom.com>

* [Hotfix][Zeta] Fix task state memory leak (apache#5139)

* [Hotfix][Zeta] Fix checkpoint error report without msg (apache#5137)

* [Improve][Zeta] Improve CheckpointCoordinator notify complete when restore (apache#5136)

* [Improve] Improve CheckpointCoordinator notify complete when restore

* update

* [Improve][Zeta] Improve CheckpointCoordinator log error when report error from task (apache#178) (apache#5134)

* [Hotfix][Zeta] Fix MultipleTableJobConfigParser ignore env option (apache#5067)

* [Fix][Zeta] Fix MultipleTableJobConfigParser ignore env option

* update

* [Improve][Connector][File] Optimize files commit order (apache#5045)

Previously a `HashMap` stored the file paths, so checkpoint files were committed out of order.

Now a `LinkedHashMap` is used to ensure that files are committed in the order they were generated.
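
A minimal, self-contained sketch of the ordering difference (illustrative; not the connector's actual commit code):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class CommitOrderSketch {
    public static void main(String[] args) {
        // HashMap iteration order is unrelated to insertion order,
        // so files could be committed out of order.
        Map<String, String> unordered = new HashMap<>();
        // LinkedHashMap iterates in insertion order,
        // so files are committed in the order they were generated.
        Map<String, String> ordered = new LinkedHashMap<>();
        for (String f : new String[] {"part-3", "part-1", "part-2"}) {
            unordered.put(f, "tmp/" + f);
            ordered.put(f, "tmp/" + f);
        }
        System.out.println(unordered.keySet()); // order not guaranteed
        System.out.println(ordered.keySet());   // [part-3, part-1, part-2]
    }
}
```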

* [Hotfix][Mongodb cdc] Fix startup resume token being negative (apache#5143)


---------

Co-authored-by: chenzy15 <chenzy15@ziroom.com>

* [Feature][connector][kafka] Support read debezium format message from kafka (apache#5066)

* [Feature][CDC] Support tables without primary keys (with unique keys) (apache#163) (apache#5150)

* [Feature][Connector-V2][CDC] Support string type shard fields. (apache#5147)

* [feature][CDC base] Supports string type shard fields

* Delete invalid code

* [Feature][Connector-V2][File] Add cos source&sink (apache#4979)

* [Feature][Connector-V2][File] Add cos sink

* update doc&e2e and add pom file header

* add e2e file header and config

* add file-cos module into dist pom.xml

* [Feature][Connector-V2][File] Add cos source

---------

Co-authored-by: dengd1937 <dengd1803@gmail.com>

* [Fix][Zeta] Fix SinkFlowLifeCycle without init lastCommitInfo (apache#5152)

* [Hotfix][MongodbCDC]Refine data format to adapt to universal logic (apache#5162)

Co-authored-by: chenzy15 <chenzy15@ziroom.com>

* [Chore] Update bug-report.yml (apache#5160)

* [Improve][CDC] support exactly-once of cdc and fix the BinlogOffset comparing bug (apache#5057)

* [Improve][CDC] support exactly-once of cdc, fix the BinlogOffset comparing bug

* [Improve][CDC] adjust code style

* [Improve][CDC] fix ci error

---------

Co-authored-by: happyboy1024 <296442618@qq.com>

* [Docs][Connector-V2][Hudi] Reconstruct the Hudi connector document (apache#4905)

* [Docs][Connector-V2][Hudi] Reconstruct the Hudi connector document


---------

Co-authored-by: zhouyao <yao.zhou@marketingforce.com>

* [Docs][Connector-V2][Doris] Reconstruct the Doris connector document (apache#4903)

* [Docs][Connector-V2][Doris] Reconstruct the Doris connector document

---------

Co-authored-by: zhouyao <yao.zhou@marketingforce.com>

* [improve] [CDC Base] Add some split parameters to the optionRule (apache#5161)

* [bugfix] [File Base] Fix Hadoop Kerberos authentication related issues. (apache#5171)

* [CI] add code style check when docs changed (apache#5183)

* [Bug][Translation][Spark] Fix SeaTunnelRowConvertor fail to convert when schema contains row type. (apache#5170)

* [Improve][Zeta] Move checkpoint notify complete in checkpoint stage (apache#5185)

* [Feature][Catalog] Add JDBC Catalog auto create table (apache#4917)

* [Feature][Connector V2][File] Add config of 'file_filter_pattern', which is used for filtering files. (apache#5153)

* [Feature][Connector V2][File] Add config of 'file_filter_pattern', which is used for filtering files.

* [Improve][Connector-v2][Jdbc] Check url not null and throw a friendly message (apache#5097)

* check url not null and throw a friendly message

* check jdbc source config

* modify jdbc validate method

---------

Co-authored-by: 80597928 <Lzl@qq.com>
Co-authored-by: 80597928 <673421862@qq.com>

* [bugfix][zeta] Fix the issue of two identical IDs appearing when executing seatunnel.sh -l as the job resumes (apache#5191)

* [Improve][Docs][Kafka]Reconstruct the kafka connector document (apache#4778)

* [Docs][Connector-V2][Kafka]Reconstruct the kafka connector document

---------

Co-authored-by: chenzy15 <chenzy15@ziroom.com>

* [Bug][Improve][LocalFileSink]Fix LocalFile Sink file_format_type. (apache#5118)

* [Bug] [connector-v2] Fix CDC sync compatibility problem with PostgreSQL versions below 9.5 (apache#5120)

* [e2e] Fix kafka e2e error (apache#5200)

* [Hotfix][Connector-V2][JindoOssFile] Fix plugin-mapping.properties (apache#5215)

Co-authored-by: tyrantlucifer <tyrantlucifer@gmail.com>

* [Improve][Zeta] Don't trigger handleSaveMode when restore (apache#5192)

* move imap storage file dependency packages to submodules (apache#5218)

* [Hotfix][CI]Declare files that will always have UNIX line endings on checkout. (apache#5221)

* [Hotfix][Connector-V2][Paimon] Bump paimon-bundle version to 0.4.0-incubating (apache#5219)

* [Docs][Connector-V2][PostgreSQL] Refactor connector-v2 docs using unified format PostgreSQL apache#4590 (apache#4757)

* [Docs][Connector-V2][PostgreSQL] Refactor connector-v2 docs using unified format PostgreSQL

* [Docs] Fix Dockerfile and seatunnel-flink.yaml in Set Up with Kubernetes (apache#4793)

* [Docs] update seatunnel-flink.yaml and Dockerfile to help the demo work

* [Docs] update release-note apache#4788

---------

Co-authored-by: flynnxue <flynnxue@lilithgames.com>
Co-authored-by: ic4y <83933160+ic4y@users.noreply.github.com>

* [feature][doris] Doris factory type (apache#5061)

* [feature][doris] Web needs factory and data type converter

* [Fix] Update the Readme (apache#4968)

Use a better description for the SeaTunnel project

* [CI] Split updated modules integration test for part 5 (apache#5208)

* [CI] Split updated modules integration test for part 5

* Split e2e

* update json-smart

* fix dm error

* revert code

---------

Co-authored-by: gdliu3 <gdliu3@iflytek.com>

* [Feature][CDC][Zeta] Support schema evolution framework(DDL) (apache#5125)

* Fixed IMap file storage e2e bug (apache#5237)

* [Improve] [Connector-V2] Remove scheduler in JDBC sink apache#4736 (apache#5168)


---------

Co-authored-by: gdliu3 <gdliu3@iflytek.com>

* [Doc] [JDBC Oracle] Add JDBC Oracle Documentation (apache#5239)

* [Feature][Zeta][REST-API]Add REST API To Submit Job (apache#5107)

* [Fix] Update the project description (apache#4967)

* Update the project description

* [Feature][Zeta] Support history service record job execute error (apache#5114)

* fix: hdfs Checkpoint Storage management fails to delete historical files

* Fix: after the savepoint job is restored, the checkpoint file cannot be generated

* [Feature][Zeta] Support history service record job execute error

* Improve JobState-related classes by adding serialVersionUID

* add e2e test

* [hotfix]Update .asf.yaml (apache#5242)

* Update .asf.yaml

* [Hotfix]Fix array index anomalies caused by apache#5057 (apache#5195)

* [bugfix] [savepoint test] Turn on the testSavepoint test. (apache#5199)

* [BUG][Connector-V2][Jdbc] support postgresql json type (apache#5194)

* add Postgresql json type
Co-authored-by: 80597928 <673421862@qq.com>

* [Bugfix][cdc] Fix mysql bit column to java byte (apache#4817)

* [Bugfix][AmazonDynamoDB] Fix the problem that all table data cannot be obtained (apache#5146)

* [Docs][Connector][Source][jdbc]Change the line boundary store value type to BigDecimal (apache#4900)

* [bug][jdbc][oracle]Fix the Oracle number type mapping problem (apache#5209)

* [Bugfix][zeta] Fix the serialization issue of GetMetricsOperation during multi-node operation. (apache#5206)

* [Hotfix][Zeta] Avoid Redundant Job Submissions by Checking Job Status (apache#5229)

* [Bugfix][zeta] Fixed the issue of duplicated metrics caused by job fault tolerance or restore. (apache#5214)

* [Improve] [CDC Base] Add a fast sampling method that supports character types (apache#5179)

* fixed zeta ci error (apache#5254)

* [Doc][README] Remove useless github workflow, and adjust description of 'engineering structure'. (apache#4305)

* [Feature][Zeta]The expiration time of a historical job can be configured (apache#5180)

* fix: hdfs Checkpoint Storage management fails to delete historical files
Co-authored-by: hailin0 <wanghailin@apache.org>

* [bugfix] [e2e] Fixed a minor bug (apache#5274)

* [Improve][SQL] Support use catalogTableName as SQL expression (apache#5273)

* [Doc] Improve S3File Source & S3File Sink document (apache#5101)

* Improve S3File Source & S3File Sink document

* Fix style error (apache#5280)

* Fix StarRocksJsonSerializer will transform array/map/row to string (apache#5281)

* [Docs][Connector-V2][MyHours]Reconstruct the MyHours connector document (apache#5129)

* [Docs][Connector-V2][MyHours]Reconstruct the MyHours connector document

* fix format

* [Improve][API & Zeta] Using connector custom serializer encode/decode states (apache#5238)

* API: Using DefaultSerializer as connector sink default serializer
* Zeta: Using connector custom serializer encode/decode states
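
A simplified sketch of the serializer contract involved (the shapes below are assumptions modeled on org.apache.seatunnel.api.serialization, not the exact interfaces):

```java
import java.io.*;

// Simplified stand-ins for the Serializer/DefaultSerializer contract.
interface Serializer<T> {
    byte[] serialize(T obj) throws IOException;
    T deserialize(byte[] bytes) throws IOException;
}

class DefaultSerializer<T extends Serializable> implements Serializer<T> {
    public byte[] serialize(T obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj); // plain Java serialization as the default
        }
        return bos.toByteArray();
    }

    @SuppressWarnings("unchecked")
    public T deserialize(byte[] bytes) throws IOException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (T) ois.readObject();
        } catch (ClassNotFoundException e) {
            throw new IOException(e);
        }
    }
}
```

With such a contract, Zeta can encode and decode checkpointed sink state through the connector's own serializer instead of a generic one.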

* [Feature][Connector-V2] connector-kafka source support data conversion extracted by kafka connect source (apache#4516)

* Compatible with kafka connect json apache#4137

* [Improve][CI/CD] Remove 'paths-ignore', enable the code style check for markdown files. (apache#5286)

* [Bugfix][zeta] Resolved the issue causing checkpoints to halt on tolerable-failure=0. (apache#5263)

* [Bugfix][zeta] Resolved the issue causing checkpoints to halt on tolerable-failure=0.

* remove max-concurrent

* [Feature][Connector-v2][RedisSink]Support redis to set expiration time. (apache#4975)

* Support redis to set expiration time.

* Set redis expire default value.

* add e2e test.

* modify config file name.

---------

Co-authored-by: lightzhao <zhaolianyong777@gmail.com>

* [bugfix] Fix testGetErrorInfo case error (apache#5282)

* [Feature][Zeta] Checkpoint support hdfs ha mode (apache#4942)

* fix browser long type truncation (apache#5267)

Co-authored-by: 80597928 <673421862@qq.com>

* [Docs] remove `incubating` keyword in document (apache#5257)

* [feature][web] hive add options because the web UI needs them (apache#5154)

* [feature][web] hive add options because the web UI needs them

* [feature][web] hive add option read_columns

* [feature][web] change required options to optional

* [bugfix] mvn spotless

* fix conf

---------

Co-authored-by: liuli <m_liuli@163.com>

* [Bug][flink-runtime][connectors-v2] Flink register table environment: the running mode is set to `job.mode` (apache#4826)

* [Docs][Connector-V2][StarRocks]Reconstruct the StarRocks connector document (apache#5132)

* [Docs][Connector-V2][StarRocks]Reconstruct the StarRocks connector document

* [Improve][Connector-v2][HiveSink]remove drop partition when aborting. (apache#4940)

Co-authored-by: lightzhao <zhaolianyong777@gmail.com>
Co-authored-by: liuli <m_liuli@163.com>
Co-authored-by: ic4y <83933160+ic4y@users.noreply.github.com>

* [Docs][Connector-V2][SelectDB-Cloud]Reconstruct the SelectDB-Cloud connector document (apache#5130)

* [Docs][Connector-V2][SelectDB-Cloud]Reconstruct the SelectDB-Cloud connector document

* fix codestyle

---------

Co-authored-by: liuli <m_liuli@163.com>

* [Docs][Connector-V2][HDFS]Refactor connector-v2 docs using unified format HDFS. (apache#4871)

* Refactor connector-v2 docs using unified format HDFS.

* add data type.

* update.

* add key feature.

* add hdfs_site_path

* 1.add data type.
2.add hdfs_site_path conf.

* add data type.

* add hdfs site conf.

---------

Co-authored-by: lightzhao <zhaolianyong777@gmail.com>
Co-authored-by: liuli <m_liuli@163.com>

* [Improve] [Connector-V2] Remove scheduler in Tablestore sink (apache#5272)


---------

Co-authored-by: gdliu3 <gdliu3@iflytek.com>

* [BUG][Connector-V2][Mongo-cdc] Incremental data kind error in snapshot phase (apache#5184)

* [BUG][Connector-V2][Mongo-cdc] Incremental data kind error in snapshot phase

* [Hotfix] Fix com.google.common.base.Preconditions to seatunnel shade one (apache#5284)

* [Merge] Fix merge conflict and fix jdbc fieldIde with compatibleMode confusion

---------

Co-authored-by: Cason-ACE <35160064+cason0126@users.noreply.github.com>
Co-authored-by: Tyrantlucifer <TyrantLucifer@gmail.com>
Co-authored-by: hailin0 <wanghailin@apache.org>
Co-authored-by: Xiaojian Sun <sunxiaojian926@163.com>
Co-authored-by: Laglangyue <35491928+laglangyue@users.noreply.github.com>
Co-authored-by: ZhilinLi <zhilinli0706@gmail.com>
Co-authored-by: ic4y <83933160+ic4y@users.noreply.github.com>
Co-authored-by: Hao Xu <sduxuhao@gmail.com>
Co-authored-by: Eric <gaojun2048@gmail.com>
Co-authored-by: Bibo <33744252+531651225@users.noreply.github.com>
Co-authored-by: 毕博 <bibo@mafengwo.com>
Co-authored-by: Carl-Zhou-CN <67902676+Carl-Zhou-CN@users.noreply.github.com>
Co-authored-by: zhouyao <yao.zhou@marketingforce.com>
Co-authored-by: Marvin <29311598@qq.com>
Co-authored-by: monster <60029759+MonsterChenzhuo@users.noreply.github.com>
Co-authored-by: gnehil <adamlee489@gmail.com>
Co-authored-by: TaoZex <45089228+TaoZex@users.noreply.github.com>
Co-authored-by: xiaofan2012 <41982310+xiaofan2022@users.noreply.github.com>
Co-authored-by: wantao <wantao@inmyshow.com>
Co-authored-by: Guangdong Liu <804167098@qq.com>
Co-authored-by: zhilinli <lzl15844876351@163.com>
Co-authored-by: zhaifengbing <ainizfb@126.com>
Co-authored-by: dalong <60906603+alibabaMapengfei@users.noreply.github.com>
Co-authored-by: FuYouJ <1247908487@qq.com>
Co-authored-by: davidfans <136911434+davidfans@users.noreply.github.com>
Co-authored-by: Fan Donglai <ddna_1022@163.com>
Co-authored-by: gdliu3 <gdliu3@iflytek.com>
Co-authored-by: DismalSnail <yepenghaobit@gmail.com>
Co-authored-by: lightzhao <40714172+lightzhao@users.noreply.github.com>
Co-authored-by: chenzy15 <chenzy15@ziroom.com>
Co-authored-by: wssmao <39487209+wssmao@users.noreply.github.com>
Co-authored-by: wsstony <tonymao777@163.com>
Co-authored-by: lightzhao <zhaolianyong777@gmail.com>
Co-authored-by: XiaoJiang521 <131635688+XiaoJiang521@users.noreply.github.com>
Co-authored-by: mcy <rewrma@163.com>
Co-authored-by: yctanGmail <138592845+yctanGmail@users.noreply.github.com>
Co-authored-by: yctan <1417983443@qq.com>
Co-authored-by: wu-a-ge <wfygowxf1@163.com>
Co-authored-by: 司马琦昂 <bruce.maqiang@gmail.com>
Co-authored-by: happyboy1024 <137260654+happyboy1024@users.noreply.github.com>
Co-authored-by: He Wang <wanghechn@qq.com>
Co-authored-by: silenceland <silenceland23@163.com>
Co-authored-by: changhuyan <877018069@qq.com>
Co-authored-by: Jarvis <liunaijie1996@163.com>
Co-authored-by: 阿丙 <50567478+gaopeng666@users.noreply.github.com>
Co-authored-by: jackyyyyyssss <127465317+jackyyyyyssss@users.noreply.github.com>
Co-authored-by: 80597928 <Lzl@qq.com>
Co-authored-by: Chengyu Yan <cheneyyin@hotmail.com>
Co-authored-by: zhangchengming601 <86779821+zhangchengming601@users.noreply.github.com>
Co-authored-by: lihjChina <237206177@qq.com>
Co-authored-by: David Zollo <davidzollo365@gmail.com>
Co-authored-by: EchoLee5 <39044001+EchoLee5@users.noreply.github.com>
Co-authored-by: dengdi <114273849+dengd1937@users.noreply.github.com>
Co-authored-by: dengd1937 <dengd1803@gmail.com>
Co-authored-by: happyboy1024 <296442618@qq.com>
Co-authored-by: FlechazoW <35768015+FlechazoW@users.noreply.github.com>
Co-authored-by: 80597928 <673421862@qq.com>
Co-authored-by: kun <66303359+Lifu12@users.noreply.github.com>
Co-authored-by: Volodymyr <volodymyrduke@gmail.com>
Co-authored-by: javalover123 <javalover123@foxmail.com>
Co-authored-by: Volodymyr <770925351@qq.com>
Co-authored-by: kksxf <flynnxue02@gmail.com>
Co-authored-by: flynnxue <flynnxue@lilithgames.com>
Co-authored-by: fang <56808812+zhibinF@users.noreply.github.com>
Co-authored-by: gejinxin <844156709@qq.com>
Co-authored-by: Wenjun Ruan <wenjun@apache.org>
Co-authored-by: Koyfin <1040080742@qq.com>
Co-authored-by: liuli <m_liuli@163.com>
Showing 359 changed files with 12,790 additions and 2,587 deletions.
12 changes: 7 additions & 5 deletions .asf.yaml
@@ -15,18 +15,20 @@
#

github:
description: SeaTunnel is a distributed, high-performance data integration platform for the synchronization and transformation of massive data (offline & real-time).
description: SeaTunnel is a next-generation super high-performance, distributed, massive data integration tool.
homepage: https://seatunnel.apache.org/
labels:
- data-integration
- change-data-capture
- cdc
- high-performance
- offline
- real-time
- data-pipeline
- sql-engine
- batch
- streaming
- data-ingestion
- apache
- seatunnel
- etl-framework
- elt
enabled_merge_buttons:
squash: true
merge: false
1 change: 1 addition & 0 deletions .gitattributes
@@ -0,0 +1 @@
*.sh text eol=lf
6 changes: 3 additions & 3 deletions .github/ISSUE_TEMPLATE/bug-report.yml
@@ -90,10 +90,10 @@ body:

- type: textarea
attributes:
label: Flink or Spark Version
description: Provide Flink or Spark Version.
label: Zeta or Flink or Spark Version
description: Provide Zeta or Flink or Spark Version.
placeholder: >
Please provide the version of Flink or Spark.
Please provide the version of Zeta or Flink or Spark.
validations:
required: false

140 changes: 136 additions & 4 deletions .github/workflows/backend.yml
@@ -18,6 +18,7 @@
name: Backend
on:
push:
pull_request:
branches:
- business-dev
- "v[0-9]+.[0-9]+.[0-9]+-release"
@@ -26,8 +27,6 @@ on:
- business-dev
- "v[0-9]+.[0-9]+.[0-9]+-release"
paths-ignore:
- 'docs/**'
- '**/*.md'
- 'seatunnel-ui/**'

concurrency:
@@ -270,7 +269,7 @@ jobs:
- name: run updated modules integration test (part-1)
if: needs.changes.outputs.api == 'false' && needs.changes.outputs.it-modules != ''
run: |
sub_modules=`python tools/update_modules_check/update_modules_check.py sub_update_it_module ${{needs.changes.outputs.it-modules}} 2 0`
sub_modules=`python tools/update_modules_check/update_modules_check.py sub_update_it_module ${{needs.changes.outputs.it-modules}} 5 0`
./mvnw -T 1C -B verify -DskipUT=true -DskipIT=false -D"license.skipAddThirdParty"=true --no-snapshot-updates -pl $sub_modules -am -Pci
env:
MAVEN_OPTS: -Xmx2048m
@@ -295,7 +294,7 @@
- name: run updated modules integration test (part-2)
if: needs.changes.outputs.api == 'false' && needs.changes.outputs.it-modules != ''
run: |
sub_modules=`python tools/update_modules_check/update_modules_check.py sub_update_it_module ${{needs.changes.outputs.it-modules}} 2 1`
sub_modules=`python tools/update_modules_check/update_modules_check.py sub_update_it_module ${{needs.changes.outputs.it-modules}} 5 1`
if [ ! -z $sub_modules ]; then
./mvnw -T 1C -B verify -DskipUT=true -DskipIT=false -D"license.skipAddThirdParty"=true --no-snapshot-updates -pl $sub_modules -am -Pci
else
@@ -304,6 +303,91 @@
env:
MAVEN_OPTS: -Xmx2048m

updated-modules-integration-test-part-3:
needs: [ changes, sanity-check ]
if: needs.changes.outputs.api == 'false' && needs.changes.outputs.it-modules != ''
runs-on: ${{ matrix.os }}
strategy:
matrix:
java: [ '8' ]
os: [ 'self-hosted' ]
timeout-minutes: 90
steps:
- uses: actions/checkout@v2
- name: Set up JDK ${{ matrix.java }}
uses: actions/setup-java@v3
with:
java-version: ${{ matrix.java }}
distribution: 'temurin'
cache: 'maven'
- name: run updated modules integration test (part-3)
if: needs.changes.outputs.api == 'false' && needs.changes.outputs.it-modules != ''
run: |
sub_modules=`python tools/update_modules_check/update_modules_check.py sub_update_it_module ${{needs.changes.outputs.it-modules}} 5 2`
if [ ! -z $sub_modules ]; then
./mvnw -T 1C -B verify -DskipUT=true -DskipIT=false -D"license.skipAddThirdParty"=true --no-snapshot-updates -pl $sub_modules -am -Pci
else
echo "sub modules is empty, skipping"
fi
env:
MAVEN_OPTS: -Xmx2048m

updated-modules-integration-test-part-4:
needs: [ changes, sanity-check ]
if: needs.changes.outputs.api == 'false' && needs.changes.outputs.it-modules != ''
runs-on: ${{ matrix.os }}
strategy:
matrix:
java: [ '8' ]
os: [ 'self-hosted' ]
timeout-minutes: 90
steps:
- uses: actions/checkout@v2
- name: Set up JDK ${{ matrix.java }}
uses: actions/setup-java@v3
with:
java-version: ${{ matrix.java }}
distribution: 'temurin'
cache: 'maven'
- name: run updated modules integration test (part-4)
if: needs.changes.outputs.api == 'false' && needs.changes.outputs.it-modules != ''
run: |
sub_modules=`python tools/update_modules_check/update_modules_check.py sub_update_it_module ${{needs.changes.outputs.it-modules}} 5 3`
if [ ! -z $sub_modules ]; then
./mvnw -T 1C -B verify -DskipUT=true -DskipIT=false -D"license.skipAddThirdParty"=true --no-snapshot-updates -pl $sub_modules -am -Pci
else
echo "sub modules is empty, skipping"
fi
env:
MAVEN_OPTS: -Xmx2048m
updated-modules-integration-test-part-5:
needs: [ changes, sanity-check ]
if: needs.changes.outputs.api == 'false' && needs.changes.outputs.it-modules != ''
runs-on: ${{ matrix.os }}
strategy:
matrix:
java: [ '8' ]
os: [ 'self-hosted' ]
timeout-minutes: 90
steps:
- uses: actions/checkout@v2
- name: Set up JDK ${{ matrix.java }}
uses: actions/setup-java@v3
with:
java-version: ${{ matrix.java }}
distribution: 'temurin'
cache: 'maven'
- name: run updated modules integration test (part-5)
if: needs.changes.outputs.api == 'false' && needs.changes.outputs.it-modules != ''
run: |
sub_modules=`python tools/update_modules_check/update_modules_check.py sub_update_it_module ${{needs.changes.outputs.it-modules}} 5 4`
if [ ! -z $sub_modules ]; then
./mvnw -T 1C -B verify -DskipUT=true -DskipIT=false -D"license.skipAddThirdParty"=true --no-snapshot-updates -pl $sub_modules -am -Pci
else
echo "sub modules is empty, skipping"
fi
env:
MAVEN_OPTS: -Xmx2048m
engine-v2-it:
needs: [ changes, sanity-check ]
if: needs.changes.outputs.api == 'true'
@@ -637,6 +721,54 @@ jobs:
env:
MAVEN_OPTS: -Xmx4096m

jdbc-connectors-it-part-4:
needs: [ changes, sanity-check ]
if: needs.changes.outputs.api == 'true'
runs-on: ${{ matrix.os }}
strategy:
matrix:
java: [ '8', '11' ]
os: [ 'ubuntu-latest' ]
timeout-minutes: 90
steps:
- uses: actions/checkout@v2
- name: Set up JDK ${{ matrix.java }}
uses: actions/setup-java@v3
with:
java-version: ${{ matrix.java }}
distribution: 'temurin'
cache: 'maven'
- name: run jdbc connectors integration test (part-4)
if: needs.changes.outputs.api == 'true'
run: |
./mvnw -B -T 1C verify -DskipUT=true -DskipIT=false -D"license.skipAddThirdParty"=true --no-snapshot-updates -pl :connector-jdbc-e2e-part-4 -am -Pci
env:
MAVEN_OPTS: -Xmx4096m

jdbc-connectors-it-part-5:
needs: [ changes, sanity-check ]
if: needs.changes.outputs.api == 'true'
runs-on: ${{ matrix.os }}
strategy:
matrix:
java: [ '8', '11' ]
os: [ 'ubuntu-latest' ]
timeout-minutes: 90
steps:
- uses: actions/checkout@v2
- name: Set up JDK ${{ matrix.java }}
uses: actions/setup-java@v3
with:
java-version: ${{ matrix.java }}
distribution: 'temurin'
cache: 'maven'
- name: run jdbc connectors integration test (part-5)
if: needs.changes.outputs.api == 'true'
run: |
./mvnw -B -T 1C verify -DskipUT=true -DskipIT=false -D"license.skipAddThirdParty"=true --no-snapshot-updates -pl :connector-jdbc-e2e-part-5 -am -Pci
env:
MAVEN_OPTS: -Xmx4096m

kafka-connector-it:
needs: [ changes, sanity-check ]
if: needs.changes.outputs.api == 'true'
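
For context, the repeated `sub_update_it_module <modules> <total> <index>` calls above select one of several buckets of updated modules for each CI job. A minimal sketch of that kind of selection, assuming round-robin bucketing (the real logic lives in tools/update_modules_check/update_modules_check.py and may chunk differently):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SubModuleShardSketch {
    // Pick every totalParts-th module starting at partIndex, mirroring
    // `sub_update_it_module <modules> <totalParts> <partIndex>`.
    static List<String> shard(List<String> modules, int totalParts, int partIndex) {
        List<String> selected = new ArrayList<>();
        for (int i = partIndex; i < modules.size(); i += totalParts) {
            selected.add(modules.get(i));
        }
        return selected;
    }

    public static void main(String[] args) {
        List<String> modules = Arrays.asList("m0", "m1", "m2", "m3", "m4", "m5", "m6");
        System.out.println(shard(modules, 5, 0)); // [m0, m5]
        System.out.println(shard(modules, 5, 4)); // [m4]
    }
}
```
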
2 changes: 1 addition & 1 deletion DISCLAIMER
@@ -1,4 +1,4 @@
Apache SeaTunnel (incubating) is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator PMC.
Apache SeaTunnel is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator PMC.
Incubation is required of all newly accepted projects until a further review indicates that the infrastructure,
communications, and decision making process have stabilized in a manner consistent with other successful ASF projects.
While incubation status is not necessarily a reflection of the completeness or stability of the code,
58 changes: 16 additions & 42 deletions README.md
@@ -3,7 +3,7 @@
<img src="https://seatunnel.apache.org/image/logo.png" alt="seatunnel logo" height="200px" align="right" />

[![Backend Workflow](https://github.com/apache/seatunnel/actions/workflows/backend.yml/badge.svg?branch=dev)](https://github.com/apache/seatunnel/actions/workflows/backend.yml)
[![Slack](https://img.shields.io/badge/slack-%23seatunnel-4f8eba?logo=slack)](https://the-asf.slack.com/archives/C053HND1D6X)
[![Slack](https://img.shields.io/badge/slack-%23seatunnel-4f8eba?logo=slack)](https://s.apache.org/seatunnel-slack)
[![Twitter Follow](https://img.shields.io/twitter/follow/ASFSeaTunnel.svg?label=Follow&logo=twitter)](https://twitter.com/ASFSeaTunnel)

---
@@ -13,9 +13,7 @@ SeaTunnel was formerly named Waterdrop , and renamed SeaTunnel since October 12,

---

SeaTunnel is a very easy-to-use ultra-high-performance distributed data integration platform that supports real-time
synchronization of massive data. It can synchronize tens of billions of data stably and efficiently every day, and has
been used in the production of nearly 100 companies.
SeaTunnel is a next-generation super high-performance, distributed, massive data integration tool. It can synchronize tens of billions of data stably and efficiently every day, and has been used in the production of many companies.

## Why do we need SeaTunnel

@@ -25,21 +23,20 @@ SeaTunnel focuses on data integration and data synchronization, and is mainly de
- Complex synchronization scenarios: Data synchronization needs to support various synchronization scenarios such as offline-full synchronization, offline-incremental synchronization, CDC, real-time synchronization, and full database synchronization.
- High demand in resource: Existing data integration and data synchronization tools often require vast computing resources or JDBC connection resources to complete real-time synchronization of massive small tables. This has increased the burden on enterprises to a certain extent.
- Lack of quality and monitoring: Data integration and synchronization processes often experience loss or duplication of data. The synchronization process lacks monitoring, and it is impossible to intuitively understand the real-situation of the data during the task process.
- Complex technology stack: The technology components used by enterprises are different, and users need to develop corresponding synchronization programs for different components to complete data integration.
- Difficulty in management and maintenance: Limited to different underlying technology components (Flink/Spark) , offline synchronization and real-time synchronization often have be developed and managed separately, which increases the difficulty of the management and maintainance.

## Features of SeaTunnel

- Rich and extensible Connector: SeaTunnel provides a Connector API that does not depend on a specific execution engine. Connectors (Source, Transform, Sink) developed based on this API can run on many different engines, such as SeaTunnel Engine, Flink, Spark that are currently supported.
- Connector plugin: The plugin design allows users to easily develop their own Connector and integrate it into the SeaTunnel project. Currently, SeaTunnel has supported more than 70 Connectors, and the number is surging. There is the list of connectors we [supported and plan to support](https://github.com/apache/seatunnel/issues/3018).
- Diverse Connectors: SeaTunnel has supported more than 100 Connectors, and the number is surging. Here is the list of connectors we [supported and plan to support](https://github.com/apache/seatunnel/issues/3018).
- Batch-stream integration: Connectors developed based on SeaTunnel Connector API are perfectly compatible with offline synchronization, real-time synchronization, full- synchronization, incremental synchronization and other scenarios. It greatly reduces the difficulty of managing data integration tasks.
- Support distributed snapshot algorithm to ensure data consistency.
- Multi-engine support: SeaTunnel uses SeaTunnel Engine for data synchronization by default. At the same time, SeaTunnel also supports the use of Flink or Spark as the execution engine of the Connector to adapt to the existing technical components of the enterprise. In addition, SeaTunnel supports multiple versions of Spark and Flink.
- Multi-engine support: SeaTunnel uses SeaTunnel Zeta Engine for data synchronization by default. At the same time, SeaTunnel also supports the use of Flink or Spark as the execution engine of the Connector to adapt to the existing technical components of the enterprise. In addition, SeaTunnel supports multiple versions of Spark and Flink.
- JDBC multiplexing, database log multi-table parsing: SeaTunnel supports multi-table or whole database synchronization, which solves the problem of over-JDBC connections; supports multi-table or whole database log reading and parsing, which solves the need for CDC multi-table synchronization scenarios problems with repeated reading and parsing of logs.
- High throughput and low latency: SeaTunnel supports parallel reading and writing, providing stable and reliable data synchronization capabilities with high throughput and low latency.
- Perfect real-time monitoring: SeaTunnel supports detailed monitoring information of each step in the data synchronization process, allowing users to easily understand the number of data, data size, QPS and other information read and written by the synchronization task.
- Two job development methods are supported: coding and canvas design. The SeaTunnel web project https://github.com/apache/seatunnel-web provides visual management of jobs, scheduling, running and monitoring capabilities.

Besides, SeaTunnel provides a Connector API that does not depend on a specific execution engine. Connectors (Source, Transform, Sink) developed based on this API can run on many different engines, such as SeaTunnel Zeta Engine, Flink, Spark that are currently supported.

## SeaTunnel work flowchart

![SeaTunnel work flowchart](docs/en/images/architecture_diagram.png)
@@ -63,29 +60,15 @@ The default engine use by SeaTunnel is [SeaTunnel Engine](seatunnel-engine/READM

### Here's a list of our connectors with their health status.[connector status](docs/en/Connector-v2-release-state.md)

## Environmental dependency

1. java runtime environment, java >= 8

2. If you want to run SeaTunnel in a cluster environment, any of the following Spark cluster environments is usable:

- Spark on Yarn
- Spark Standalone

If the data volume is small, or the goal is merely for functional verification, you can also start in local mode without
a cluster environment, because SeaTunnel supports standalone operation. Note: SeaTunnel 2.0 supports running on Spark
and Flink.

## Compiling project
Follow this [document](docs/en/contribution/setup.md).

## Downloads

Download address for run-directly software package : https://seatunnel.apache.org/download

## Quick start
SeaTunnel uses SeaTunnel Zeta Engine as the runtime execution engine for data synchronization by default. We highly recommend utilizing Zeta engine as the runtime engine, as it offers superior functionality and performance. By the way, SeaTunnel also supports the use of Flink or Spark as the execution engine.

**SeaTunnel Engine**
**SeaTunnel Zeta Engine**
https://seatunnel.apache.org/docs/start-v2/locally/quick-start-seatunnel-engine/

**Spark**
@@ -101,6 +84,10 @@ https://seatunnel.apache.org/docs/start-v2/locally/quick-start-flink
Weibo business uses an internal customized version of SeaTunnel and its sub-project Guardian for SeaTunnel On Yarn task
monitoring for hundreds of real-time streaming computing tasks.

- Tencent Cloud

Collecting various logs from business services into Apache Kafka, some of the data in Apache Kafka is consumed and extracted through SeaTunnel, and then store into Clickhouse.

- Sina, Big Data Operation Analysis Platform

Sina Data Operation Analysis Platform uses SeaTunnel to perform real-time and offline analysis of data operation and
@@ -110,27 +97,11 @@ maintenance for Sina News, CDN and other services, and write it into Clickhouse.

Sogou Qiqian System takes SeaTunnel as an ETL tool to help establish a real-time data warehouse system.

- Qutoutiao, Qutoutiao Data Center

Qutoutiao Data Center uses SeaTunnel to support mysql to hive offline ETL tasks, real-time hive to clickhouse backfill
technical support, and well covers most offline and real-time tasks needs.

- Yixia Technology, Yizhibo Data Platform

- Yonghui Superstores Founders' Alliance-Yonghui Yunchuang Technology, Member E-commerce Data Analysis Platform

SeaTunnel provides real-time streaming and offline SQL computing of e-commerce user behavior data for Yonghui Life, a
new retail brand of Yonghui Yunchuang Technology.

- Shuidichou, Data Platform

Shuidichou adopts SeaTunnel to do real-time streaming and regular offline batch processing on Yarn, processing 3~4T data
volume average daily, and later writing the data to Clickhouse.

- Tencent Cloud

Collecting various logs from business services into Apache Kafka, some of the data in Apache Kafka is consumed and extracted through SeaTunnel, and then store into Clickhouse.

For more use cases, please refer to: https://seatunnel.apache.org/blog

## Code of conduct
@@ -140,14 +111,17 @@ By participating, you are expected to uphold this code. Please follow
the [REPORTING GUIDELINES](https://www.apache.org/foundation/policies/conduct#reporting-guidelines) to report
unacceptable behavior.

## Developer
## Contributors

Thanks to [all developers](https://github.com/apache/seatunnel/graphs/contributors)!

<a href="https://github.com/apache/seatunnel/graphs/contributors">
<img src="https://contrib.rocks/image?repo=apache/seatunnel" />
</a>

## How to compile
Please follow this [document](docs/en/contribution/setup.md).

## Contact Us

* Mail list: **dev@seatunnel.apache.org**. Mail to `dev-subscribe@seatunnel.apache.org`, follow the reply to subscribe
1 change: 1 addition & 0 deletions config/hazelcast.yaml
@@ -38,3 +38,4 @@ hazelcast:
hazelcast.tcp.join.port.try.count: 30
hazelcast.logging.type: log4j2
hazelcast.operation.generic.thread.count: 50

3 changes: 1 addition & 2 deletions config/seatunnel.yaml
@@ -17,6 +17,7 @@

seatunnel:
engine:
history-job-expire-minutes: 1440
backup-count: 1
queue-type: blockingqueue
print-execution-info-interval: 60
@@ -26,8 +27,6 @@ seatunnel:
checkpoint:
interval: 10000
timeout: 60000
max-concurrent: 1
tolerable-failure: 2
storage:
type: hdfs
max-retained: 3