Support SSL authentication with Kafka in routine load job #1235
Conversation
docs/documentation/cn/administrator-guide/http-actions/fe_get_log_file.md
docs/documentation/cn/administrator-guide/load-data/routine-load-manual.md
…ad-manual.md Co-Authored-By: kangpinghuang <40422952+kangpinghuang@users.noreply.github.com>
…log_file.md Co-Authored-By: kangkaisen <kangkaisen@apache.org>
Save the small file content to FE metadata is necessary? could we save the small file content in FE disk or BE disk?
@@ -0,0 +1,71 @@
package org.apache.doris.common.util;
licence header
done
@@ -0,0 +1,114 @@
package org.apache.doris.common.util;
license header
done
Co-Authored-By: kangkaisen <kangkaisen@apache.org>
Co-Authored-By: kangkaisen <kangkaisen@apache.org>
@kangkaisen I've been considering this problem; here are my thoughts:
Ok. I see. Thank you.
gensrc/thrift/BackendService.thrift
@@ -144,4 +161,6 @@ service BackendService {
    TTabletStatResult get_tablet_stat();

    Status.TStatus submit_routine_load_task(1:list<TRoutineLoadTask> tasks);

    // This is used for getting some information via Backend
    TProxyResult get_info(1:TProxyRequest request);
Actually, the new interface should use brpc, and the old interfaces should be migrated to brpc as well, because brpc is better: for example, it supports async calls and connection reuse.
OK, I will change it
// check md5sum if necessary
String checksum = Hex.encodeHexString(digest.digest());
if (!Strings.isNullOrEmpty(md5sum)) {
    if (!checksum.equals(md5sum)) {
ignore case?
ok
            throw new DdlException("Failed to check md5 of file: " + file.getName());
        }

return md5sum.equals(expectedMd5);
ignore case?
ok
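The two review comments above both ask for a case-insensitive comparison, since the computed checksum is lowercase hex while a user-supplied md5 string may be uppercase. A minimal self-contained sketch using only the JDK (`Md5Check`, `md5Hex`, and `checksumMatches` are hypothetical names standing in for the checks in `SmallFileMgr`; the real code uses commons-codec's `Hex.encodeHexString`):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Check {
    // Hex-encode the MD5 digest of the given bytes (lowercase, like Hex.encodeHexString).
    static String md5Hex(byte[] data) {
        try {
            MessageDigest digest = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : digest.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // MD5 is guaranteed to be available in every JDK.
            throw new RuntimeException(e);
        }
    }

    // Compare ignoring case so uppercase and lowercase hex both pass the check.
    static boolean checksumMatches(byte[] data, String expectedMd5) {
        return md5Hex(data).equalsIgnoreCase(expectedMd5);
    }
}
```

With `equalsIgnoreCase`, "900150983CD24FB0D6963F7D28E17F72" and "900150983cd24fb0d6963f7d28e17f72" are both accepted for the same file content.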
SmallFileMgr fileMgr = Catalog.getCurrentCatalog().getSmallFileMgr();
String filePath;
try {
    filePath = fileMgr.saveToFile(fileId);
Why do we need to save it in a file? I think we can keep it in memory and send the result.
if (entry.getValue().startsWith("FILE:")) {
    String file = entry.getValue().substring(entry.getValue().indexOf(":") + 1);
    // check and save file to disk
    smallFileMgr.saveToFile(dbId, KAFKA_FILE_CATALOG, file);
I don't understand why we should save it to a file?
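For context: Kafka client properties such as `ssl.ca.location` and `ssl.certificate.location` take filesystem paths, which is presumably why a `FILE:`-valued property has to be materialized on local disk before the consumer is created. A hedged sketch of that conversion step (the class, `convert`, and the `saveToFile` callback are hypothetical stand-ins for the logic around `SmallFileMgr.saveToFile`):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class KafkaPropConverter {
    // Replace every "FILE:<name>" property value with a local path. The saveToFile
    // callback stands in for SmallFileMgr: it writes the stored file content to
    // disk (if not already there) and returns the resulting path.
    static Map<String, String> convert(Map<String, String> custom,
                                       Function<String, String> saveToFile) {
        Map<String, String> converted = new HashMap<>();
        for (Map.Entry<String, String> e : custom.entrySet()) {
            String v = e.getValue();
            if (v.startsWith("FILE:")) {
                converted.put(e.getKey(), saveToFile.apply(v.substring("FILE:".length())));
            } else {
                converted.put(e.getKey(), v);
            }
        }
        return converted;
    }
}
```

Non-file properties pass through unchanged; only `FILE:` references are rewritten to paths the Kafka client can open.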
Co-Authored-By: ZHAO Chun <buaa.zhaoc@gmail.com>
Co-Authored-By: ZHAO Chun <buaa.zhaoc@gmail.com>
Temporarily closed.
@@ -2307,7 +2313,7 @@ public boolean updateLoadJobState(LoadJob job, JobState destState, CancelType ca
         Catalog.getInstance().getEditLog().logLoadQuorum(job);
     } else {
         errMsg = "process loading finished fail";
-        processCancelled(job, cancelType, errMsg);
+        processCancelled(job, cancelType, errMsg, failedMsg);
Please keep it neat.
ok
@@ -2348,7 +2354,7 @@ public boolean updateLoadJobState(LoadJob job, JobState destState, CancelType ca
         Catalog.getInstance().getEditLog().logLoadDone(job);
         break;
     case CANCELLED:
-        processCancelled(job, cancelType, errMsg);
+        processCancelled(job, cancelType, errMsg, failedMsg);
Please keep it neat.
ok
CREATE FILE "ca.pem"
PROPERTIES
(
    "url" = "https://test.bj.bcebos.com/kafka-key/ca.pem",
http or https?
This is just an example; both https and http are OK.
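Once uploaded with `CREATE FILE`, the certificate can be referenced by name from a routine load job. A hedged sketch following the `property.*`/`FILE:` convention discussed in this PR (database, table, broker, topic, and file names are placeholders):

```sql
CREATE ROUTINE LOAD example_db.ssl_job ON example_tbl
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9093",
    "kafka_topic" = "my_topic",
    "property.security.protocol" = "ssl",
    "property.ssl.ca.location" = "FILE:ca.pem"
);
```

The `FILE:` prefix tells the job to resolve the value through the small-file manager rather than treating it as a literal path.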
// this is the kafka consumer which is used to fetch the number of partitions
private KafkaConsumer<String, String> consumer;
private Map<String, String> convertedCustomProperties = Maps.newHashMap();
Why don't you merge convertedCustomProperties into customKafkaProperties?
convertedCustomProperties is just a temporary data structure, and it needs to be re-created if the referenced files change.
    return result;

private List<Integer> getAllKafkaPartitions() throws UserException {
    convertCustomProperties();
    return KafkaUtil.getAllKafkaPartitions(brokerList, topic, convertedCustomProperties);
If convertedCustomProperties is incorrect, will the job not be cancelled? Connection timeouts and certification failures need to be distinguished.
Yes, if convertedCustomProperties is incorrect, the job will be paused.
And it is not easy to distinguish a timeout from other failures: for example, if SSL authentication fails, the kafka client on the BE will retry again and again, while from the FE's point of view it just looks like a timeout.
Of course, you remind me that convertedCustomProperties should be re-created when a routine load job transitions from PAUSED to NEED_SCHEDULE, because some files may be re-created at any time.
/*
 * Author: Chenmingyu
 * Date: May 29, 2019
em... Is the author comment necessary?
No, this was just added automatically by the IDE.
@@ -62,19 +63,17 @@
public class KafkaRoutineLoadJob extends RoutineLoadJob {
    private static final Logger LOG = LogManager.getLogger(KafkaRoutineLoadJob.class);

    private static final int FETCH_PARTITIONS_TIMEOUT_SECOND = 5;
    public static final String KAFKA_FILE_CATALOG = "kafka";
The catalog here is a static parameter of the job, while the catalog of a file is created by the user.
Yes, in my design, the catalog named 'kafka' is a reserved name, used specifically for the kafka client.
- Implement a Small File Manager which allows users to upload small files, save them in Doris, and use them later. Details can be found in `docs/documentation/cn/administrator-guide/small_file_mgr.md` and `docs/help/Contents/Administration/small_files.md`.
- Support SSL authentication with Kafka in routine load jobs. Details can be found in `docs/documentation/cn/administrator-guide/load-data/routine-load-manual.md` and `docs/help/Contents/Data Manipulation/routine_load.md`.
- Remove the Kafka Java client. Now we only use librdkafka on BE to connect to Kafka brokers.
- Fix a bug where an invalid routine load task configuration may cause too many aborted transactions.
- Modify the `get_log_file` RESTful API on FE. Details can be found in `docs/documentation/cn/administrator-guide/http-actions/fe_get_log_file.md`.
- Optimize the error messages of the CANCEL LOAD operation.
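Since BE now talks to Kafka exclusively through librdkafka, the SSL settings a user attaches to a routine load job ultimately map onto librdkafka configuration keys. The following is a minimal illustrative sketch of that mapping, not the actual FE/BE implementation: the `property.` prefix convention and the `FILE:` small-file reference are assumptions for illustration, while the right-hand keys (`security.protocol`, `ssl.ca.location`, etc.) are standard librdkafka configuration names.

```python
def to_librdkafka_conf(job_properties: dict) -> dict:
    """Strip the job-level 'property.' prefix, keeping only Kafka-bound keys.

    Keys without the prefix are routine-load options for Doris itself and
    are not forwarded to librdkafka.
    """
    prefix = "property."
    return {
        key[len(prefix):]: value
        for key, value in job_properties.items()
        if key.startswith(prefix)
    }

# Hypothetical job properties; 'FILE:...' stands for a certificate that was
# uploaded beforehand via the Small File Manager.
job_properties = {
    "max_batch_interval": "20",                           # stays on the Doris side
    "property.security.protocol": "ssl",
    "property.ssl.ca.location": "FILE:ca.pem",
    "property.ssl.certificate.location": "FILE:client.pem",
    "property.ssl.key.location": "FILE:client.key",
    "property.ssl.key.password": "secret",
}

print(to_librdkafka_conf(job_properties)["security.protocol"])  # -> ssl
```

In this sketch, only the prefixed keys reach the Kafka client, which keeps job-control options and broker options cleanly separated.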
ISSUE #1234
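A natural way the small-file workflow above could fit together: BE fetches a certificate file from FE and verifies its checksum before writing it to local disk for librdkafka to use. The sketch below is a hypothetical illustration of that flow — the MD5-based integrity check and the function name are assumptions, not the confirmed implementation.

```python
import hashlib

def save_small_file(content: bytes, expected_md5: str, dest_path: str) -> str:
    """Write a downloaded small file to disk after verifying its MD5.

    Raises ValueError on a checksum mismatch so a corrupted download is
    never handed to the Kafka client. (Illustrative sketch only.)
    """
    actual = hashlib.md5(content).hexdigest()
    if actual != expected_md5:
        raise ValueError(f"md5 mismatch: expected {expected_md5}, got {actual}")
    with open(dest_path, "wb") as f:
        f.write(content)
    return dest_path
```

Usage would be straightforward: compute the MD5 once when the file is uploaded, store it alongside the file metadata, and pass it as `expected_md5` on every later download.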