
fix(bulkload): bulkload cause many node coredump #2036

Closed
wants to merge 29 commits into from

Conversation

ruojieranyishen
Collaborator

What problem does this PR solve?

Related issue:
#2006

What is changed and how does it work?

Avoid using a reference to `_metadata.files`, and add a read lock.
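For illustration, a minimal sketch of the idea, with simplified placeholder names (`bulk_load_context`, `file_meta`, `_metadata_files`, `_lock` are assumptions, not the actual Pegasus types): take a read lock and return a copy of the file list instead of handing out a reference into `_metadata.files` that a concurrent writer may invalidate.

```cpp
// Illustrative sketch only; names are placeholders, not the actual Pegasus code.
#include <cstdint>
#include <shared_mutex>
#include <string>
#include <vector>

struct file_meta
{
    std::string name;
    int64_t size = 0;
};

class bulk_load_context
{
public:
    // Return a copy of the file list guarded by a read lock, so callers never
    // hold a reference into the metadata while another thread mutates it.
    std::vector<file_meta> get_files_snapshot() const
    {
        std::shared_lock<std::shared_mutex> guard(_lock);
        return _metadata_files; // copied under the lock, not referenced
    }

private:
    mutable std::shared_mutex _lock;
    std::vector<file_meta> _metadata_files;
};
```

Copying under the lock trades a little extra memory and lock hold time for safety, which matches the side effect noted in the Side effects section.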

Tests

Because bulkload imports a large amount of data, the tests were performed manually.

  • Manual test

Test 1: the bulkload files are missing four SST files.

After the fix: partition 1 of table ingest_p4_10G is missing files, and the table's ballot does not increase after the bulkload.

[2024/5/23 15:11:10] [general]
[2024/5/23 15:11:10] app_name           : ingest_p4_10G
[2024/5/23 15:11:10] app_id             : 100          
[2024/5/23 15:11:10] partition_count    : 4            
[2024/5/23 15:11:10] max_replica_count  : 3            
[2024/5/23 15:11:10] 
[2024/5/23 15:11:10] [replicas]
[2024/5/23 15:11:10] pidx  ballot  replica_count  primary                              secondaries                                                                
[2024/5/23 15:11:10] 0     3       3/3            c3-hadoop-pegasus-tst-st05.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 15:11:10] 1     3       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 15:11:10] 2     3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 15:11:10] 3     3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 15:11:10] 
[2024/5/23 15:11:10] [nodes]
[2024/5/23 15:11:10] node                                 primary  secondary  total  
[2024/5/23 15:11:10] c3-hadoop-pegasus-tst-st02.bj:31101  1        1          2      
[2024/5/23 15:11:10] c3-hadoop-pegasus-tst-st01.bj:31101  1        1          2      
[2024/5/23 15:11:10] c3-hadoop-pegasus-tst-st05.bj:31101  1        2          3      
[2024/5/23 15:11:10] c3-hadoop-pegasus-tst-st04.bj:31101  0        2          2      
[2024/5/23 15:11:10] c3-hadoop-pegasus-tst-st03.bj:31101  1        2          3      



[2024/5/23 15:13:12] >>> start_bulk_load -a ingest_p4_10G  -c c3tst-performance2 -p hdfs_zjy -r /user/s_pegasus/lpfsplit

[2024/5/23 15:15:58] >>> query_bulk_load_status -a ingest_p4_10G -d
[2024/5/23 15:15:58] [all partitions]
[2024/5/23 15:15:58] partition_index  partition_status  is_cleaned_up  
[2024/5/23 15:15:58] 0                BLS_FAILED        NO             
[2024/5/23 15:15:58] 1                BLS_FAILED        NO             
[2024/5/23 15:15:58] 2                BLS_FAILED        NO             
[2024/5/23 15:15:58] 3                BLS_FAILED        NO    


[2024/5/23 15:16:13] [general]
[2024/5/23 15:16:13] app_name           : ingest_p4_10G
[2024/5/23 15:16:13] app_id             : 100          
[2024/5/23 15:16:13] partition_count    : 4            
[2024/5/23 15:16:13] max_replica_count  : 3            
[2024/5/23 15:16:13] 
[2024/5/23 15:16:13] [replicas]
[2024/5/23 15:16:13] pidx  ballot  replica_count  primary                              secondaries                                                                
[2024/5/23 15:16:13] 0     3       3/3            c3-hadoop-pegasus-tst-st05.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 15:16:13] 1     3       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 15:16:13] 2     3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 15:16:13] 3     3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 15:16:13] 
[2024/5/23 15:16:13] [nodes]
[2024/5/23 15:16:13] node                                 primary  secondary  total  
[2024/5/23 15:16:13] c3-hadoop-pegasus-tst-st02.bj:31101  1        1          2      
[2024/5/23 15:16:13] c3-hadoop-pegasus-tst-st01.bj:31101  1        1          2      
[2024/5/23 15:16:13] c3-hadoop-pegasus-tst-st05.bj:31101  1        2          3      
[2024/5/23 15:16:13] c3-hadoop-pegasus-tst-st04.bj:31101  0        2          2      
[2024/5/23 15:16:13] c3-hadoop-pegasus-tst-st03.bj:3

Test 2: restart a node during the bulkload download stage.

After the fix: no continuous core dumps on multiple nodes.

[2024/5/23 15:59:18] >>> app ingest_p32_10G -dr
[2024/5/23 15:59:18] [parameters]
[2024/5/23 15:59:18] app_name  : ingest_p32_10G
[2024/5/23 15:59:18] detailed  : true          
[2024/5/23 15:59:18] 
[2024/5/23 15:59:18] [general]
[2024/5/23 15:59:18] app_name           : ingest_p32_10G
[2024/5/23 15:59:18] app_id             : 101           
[2024/5/23 15:59:18] partition_count    : 32            
[2024/5/23 15:59:18] max_replica_count  : 3             
[2024/5/23 15:59:18] 
[2024/5/23 15:59:18] [replicas]
[2024/5/23 15:59:18] pidx  ballot  replica_count  primary                              secondaries                                                                
[2024/5/23 15:59:18] 0     3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 15:59:18] 1     3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 15:59:18] 2     3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 15:59:18] 3     3       3/3            c3-hadoop-pegasus-tst-st05.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 15:59:18] 4     3       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 15:59:18] 5     3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 15:59:18] 6     3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 15:59:18] 7     3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 15:59:18] 8     3       3/3            c3-hadoop-pegasus-tst-st05.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 15:59:18] 9     3       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 15:59:18] 10    3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 15:59:18] 11    3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 15:59:18] 12    3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 15:59:18] 13    3       3/3            c3-hadoop-pegasus-tst-st05.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 15:59:18] 14    3       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 15:59:18] 15    3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 15:59:18] 16    3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 15:59:18] 17    3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 15:59:18] 18    3       3/3            c3-hadoop-pegasus-tst-st05.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 15:59:18] 19    3       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 15:59:18] 20    3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 15:59:18] 21    3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 15:59:18] 22    3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 15:59:18] 23    3       3/3            c3-hadoop-pegasus-tst-st05.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 15:59:18] 24    3       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 15:59:18] 25    3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 15:59:18] 26    3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 15:59:18] 27    3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 15:59:18] 28    3       3/3            c3-hadoop-pegasus-tst-st05.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 15:59:18] 29    3       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 15:59:18] 30    3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 15:59:18] 31    3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  



[2024/5/23 15:59:39] >>> start_bulk_load -a ingest_p32_10G  -c c3tst-performance2 -p hdfs_zjy -r /user/s_pegasus/lpfsplit

[2024/5/23 16:01:18] 2024-05-23 16:01:18 Stop task 4 of replica on 10.142.100.15(0) success
[2024/5/23 16:01:50] 2024-05-23 16:01:50 Start task 4 of replica on 10.142.100.15(0) success

[2024/5/23 16:03:17] [general]
[2024/5/23 16:03:17] app_name           : ingest_p32_10G
[2024/5/23 16:03:17] app_id             : 101           
[2024/5/23 16:03:17] partition_count    : 32            
[2024/5/23 16:03:17] max_replica_count  : 3             
[2024/5/23 16:03:17] 
[2024/5/23 16:03:17] [replicas]
[2024/5/23 16:03:17] pidx  ballot  replica_count  primary                              secondaries                                                                
[2024/5/23 16:03:17] 0     5       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 1     5       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 2     3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 16:03:17] 3     6       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 4     5       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 5     3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 16:03:17] 6     3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 16:03:17] 7     4       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st05.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 16:03:17] 8     6       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 9     5       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 10    3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 16:03:17] 11    5       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 12    4       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 13    6       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 14    5       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 15    5       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 16    3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 16:03:17] 17    4       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 18    6       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 19    3       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 16:03:17] 20    3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st02.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 16:03:17] 21    3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 16:03:17] 22    3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st04.bj:31101]  
[2024/5/23 16:03:17] 23    6       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 24    5       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 25    3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 16:03:17] 26    3       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st01.bj:31101]  
[2024/5/23 16:03:17] 27    3       3/3            c3-hadoop-pegasus-tst-st01.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st02.bj:31101]  
[2024/5/23 16:03:17] 28    6       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st04.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 29    5       3/3            c3-hadoop-pegasus-tst-st03.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  
[2024/5/23 16:03:17] 30    3       3/3            c3-hadoop-pegasus-tst-st04.bj:31101  [c3-hadoop-pegasus-tst-st01.bj:31101,c3-hadoop-pegasus-tst-st03.bj:31101]  
[2024/5/23 16:03:17] 31    5       3/3            c3-hadoop-pegasus-tst-st02.bj:31101  [c3-hadoop-pegasus-tst-st03.bj:31101,c3-hadoop-pegasus-tst-st05.bj:31101]  

Side effects

Locking _metadata.files may incur a performance penalty.

acelyc111 and others added 28 commits July 17, 2024 15:46
…nment variable (apache#2035)

Building the Pegasus thirdparty libraries takes a long time, so it is worthwhile
to reuse an already-built thirdparty directory when building the Pegasus source code
in different directories.

This patch introduces an environment variable `PEGASUS_THIRDPARTY_ROOT` to indicate
the thirdparty directory; if it has already been built, the build can be skipped to save
time and disk space.
…ds (apache#2030)

Add JSON output to some backup policy commands to facilitate the writing of automation scripts.

The backup policy commands include:
- ls_backup_policy
- query_backup_policy

Example ls_backup_policy output in table format:
```
[p1]
backup_provider_type  : hdfs_service
backup_interval       : 86400s
app_ids               : {3}
start_time            : 03:36
status                : enabled
backup_history_count  : 1

[p2]
backup_provider_type  : hdfs_service
backup_interval       : 86400s
app_ids               : {3}
start_time            : 20:25
status                : enabled
backup_history_count  : 1
```
Example ls_backup_policy output in JSON format:
```
{
    "p1": {
        "backup_provider_type": "hdfs_service",
        "backup_interval": "86400s",
        "app_ids": "{3}",
        "start_time": "03:36",
        "status": "enabled",
        "backup_history_count": "1"
    },
    "p2": {
        "backup_provider_type": "hdfs_service",
        "backup_interval": "86400s",
        "app_ids": "{3}",
        "start_time": "20:25",
        "status": "enabled",
        "backup_history_count": "1"
    }
}
```

Example query_backup_policy output in table format:
```
[p1]
backup_provider_type  : hdfs_service
backup_interval       : 86400s
app_ids               : {3}
start_time            : 03:36
status                : enabled
backup_history_count  : 1

[backup_info]
id             start_time           end_time  app_ids
1716781003199  2024-05-27 03:36:43  -         {3}

[p2]
backup_provider_type  : hdfs_service
backup_interval       : 86400s
app_ids               : {3}
start_time            : 20:25
status                : enabled
backup_history_count  : 1

[backup_info]
id             start_time           end_time  app_ids
1716840160297  2024-05-27 20:02:40  -         {3}
```
Example query_backup_policy output in JSON format:
```
   {
        "p1": {
            "backup_provider_type": "hdfs_service",
            "backup_interval": "86400s",
            "app_ids": "{3}",
            "start_time": "03:36",
            "status": "enabled",
            "backup_history_count": "1"
        },
        "p1_backup_info": {
            "1716781003199": {
                "id": "1716781003199",
                "start_time": "2024-05-27 03:36:43",
                "end_time": "-",
                "app_ids": "{3}"
            }
        },
        "p2": {
            "backup_provider_type": "hdfs_service",
            "backup_interval": "86400s",
            "app_ids": "{3}",
            "start_time": "20:25",
            "status": "enabled",
            "backup_history_count": "1"
        },
        "p2_backup_info": {
            "1716840160297": {
                "id": "1716840160297",
                "start_time": "2024-05-27 20:02:40",
                "end_time": "-",
                "app_ids": "{3}"
            }
        }
    }
```
This patch adds a new flag `--separate_servers` to indicate whether to pack the `pegasus_collector`, `pegasus_meta_server` and `pegasus_replica_server` binaries; otherwise a combined `pegasus_server` binary will be packed in the pegasus_server_xxx.tar.

When the server is built with the `--separate_servers` option, the corresponding pack commands are:
```
./run.sh pack_server -s  or ./run.sh pack_server --separate_servers
./run.sh pack_tools -s   or  ./run.sh pack_tools --separate_servers
```
…on that has been existing for the same table with the same remote cluster (apache#2038)

apache#2039
…nput mode (apache#2040)

The macro `PARSE_STRS` parses the input strs by `param_index`, and it only
handles the number of input strs given by `params_index`.

The macro `PARSE_OPT_STRS` can parse input strs with flags.

The historical flag input mode should be kept.
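As a rough illustration of flag-style parsing (a hypothetical sketch only, not the actual `PARSE_OPT_STRS` macro from the Pegasus shell): options are looked up by flag name instead of by positional index.

```cpp
// Hypothetical sketch of flag-based option parsing; not the real PARSE_OPT_STRS.
#include <map>
#include <string>
#include <vector>

// Collect "--key value" pairs from the argument list.
std::map<std::string, std::string> parse_opt_strs(const std::vector<std::string> &args)
{
    std::map<std::string, std::string> opts;
    for (size_t i = 0; i + 1 < args.size(); ++i) {
        if (args[i].rfind("--", 0) == 0) { // starts with "--"
            opts[args[i].substr(2)] = args[i + 1];
            ++i; // skip the consumed value
        }
    }
    return opts;
}
```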
apache#1881

By uploading the generated thrift files, the Go client can be used directly by users through `go get`, without the need to compile them locally.
…to primary meta server if it was changed (apache#1916)

apache#1880
apache#1856

As for apache#1856:
when the Go client is writing to one partition and the replica node core dumps, the Go client finishes
after a timeout without updating the configuration. In this case, only restarting the Go client solves
the problem.

In this PR, the client updates the table configuration automatically when a replica
core dumps. After testing, we found that the replica error is "context.DeadlineExceeded"
(incubator-pegasus/go-client/pegasus/table_connector.go) when the replica core dumps.

Therefore, when the client meets this error, it updates the configuration automatically.
Besides, this request is not retried: the configuration is only updated automatically
in the case of a timeout, retrying before then will still fail, and there is also the risk of infinite
retries. Therefore, it is better to directly return the request error to the user and let the user try
again.

As for apache#1880:
When the client sends the RPC message "RPC_CM_QUERY_PARTITION_CONFIG_BY_INDEX" to a
meta server that isn't the primary, the response returns a forward to the primary meta server.

Based on the above, even if the client does not have the primary meta server
configured, it can still reach the primary meta server in this way.

About tests:
1. Start onebox, and the primary meta server is not added to the go client configuration.
2. The go client writes data to a certain partition and then kills the replica process.
…che#2044)

apache#2007

In servers, we assume that remote IPs may not be reverse resolvable; in
this case, warning or error messages are logged instead of crashing.
But in tests, we assume that all the IPs can be reverse resolved.
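A minimal sketch of that server-side policy using POSIX `getnameinfo` (an illustration only, not the actual Pegasus resolver; the helper name is made up): log a warning and fall back to the raw IP when reverse resolution fails.

```cpp
// Illustrative only: warn and fall back to the raw IP instead of crashing.
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdio>
#include <cstring>
#include <string>

std::string reverse_resolve_or_warn(const std::string &ipv4)
{
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    inet_pton(AF_INET, ipv4.c_str(), &addr.sin_addr);

    char host[256] = {0};
    const int rc = getnameinfo(reinterpret_cast<sockaddr *>(&addr), sizeof(addr),
                               host, sizeof(host), nullptr, 0,
                               NI_NAMEREQD); // fail if there is no PTR record
    if (rc != 0) {
        // Server behavior: log a warning, keep running with the raw IP.
        std::fprintf(stderr, "WARN: cannot reverse resolve %s: %s\n",
                     ipv4.c_str(), gai_strerror(rc));
        return ipv4;
    }
    return host;
}
```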
apache#2047

Use "github.com/open-falcon" instead of the dead link "open-falcon.org".
…ncluding both local writes and duplications (apache#2045)

There are many kinds of decrees involved in writing locally and duplicating to remote
clusters, for example, the max decree in the prepare list, the last decree that has ever
been committed, the last decree that has been applied into the rocksdb memtable,
the last decree that has been flushed into rocksdb sst files, the max decree that
has been confirmed by the remote cluster for duplication, etc.

These decrees are very useful when we want to watch the progress of all the local
writes and duplications. They might also help us diagnose problems.
Therefore, we provide a tool in the form of a `remote_command` to show the decrees
for each replica.
… to the remote cluster (apache#2048)

apache#2050

As described in the issue, the problem is that we have to wait 2~3 minutes
(until some empty write gets in) before the last mutation is duplicated to the remote
cluster.

The reason is that the last committed decree of the last mutation (i.e.
`mutation.data.header.last_committed_decree`), rather than the decree of the
last mutation (i.e. `mutation.data.header.decree`), is chosen as the max decree
that is duplicated to the remote cluster. Instead, the max committed decree should
be chosen as the max decree that is duplicated to the remote cluster.

After the optimization, the delay has been reduced from 2 ~ 3 minutes to about
0.1 seconds.
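A minimal sketch of the decree choice described above (the header fields mirror the names in the description; everything else is an illustrative assumption, not the actual Pegasus duplication code):

```cpp
// Illustrative only: pick the max decree of the committed mutations themselves,
// not the last_committed_decree carried by the last mutation.
#include <algorithm>
#include <cstdint>
#include <vector>

struct mutation_header
{
    int64_t decree = 0;                // decree of this mutation
    int64_t last_committed_decree = 0; // decree committed before this mutation
};

int64_t max_duplicated_decree(const std::vector<mutation_header> &committed)
{
    int64_t max_decree = 0;
    for (const auto &m : committed) {
        max_decree = std::max(max_decree, m.decree); // the max committed decree
    }
    return max_decree;
}
```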
Using run.sh to start the Pegasus shell creates a symbolic link every time. But in a Docker
production environment, it can't create a symbolic link in the read-only filesystem
of the container. So when the symbolic link already exists, we should not create it.
The downgrade_node script is usually used when scaling down replica servers.
Its function is implemented by matching wildcard patterns against the shell output.

- The character matching does not succeed for every single line of shell output,
so the added "set -e" makes the script exit with 1 and report failure.

- Fix a shell grammar problem.
…mmand (apache#2057)

Before this patch, once a backup policy is added and enabled, it's
impossible to disable it while a new job of the policy is starting,
even if something blocks the job from completing.

This patch adds a new flag '-f|--force' to disable the policy by force,
making it possible to stop the job after restarting the servers.
…2059)

After refactoring to use RocksDB APIs to read files from the local
filesystem, a stack overflow may occur when the file to read
is larger than the stack size (say 8MB).

This patch changes to use the heap instead of the stack to store the
file content.
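A minimal sketch of the idea (illustrative only, not the actual patch): put the file buffer on the heap so its size is not limited by the 8MB default stack.

```cpp
// Illustrative only: a heap-backed buffer avoids overflowing the stack for
// files larger than the stack size.
#include <fstream>
#include <string>
#include <vector>

std::vector<char> read_file_on_heap(const std::string &path)
{
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    const std::streamsize size = in ? static_cast<std::streamsize>(in.tellg()) : 0;

    // std::vector stores its elements on the heap, so even a file much larger
    // than 8MB is safe to hold here.
    std::vector<char> buf(size > 0 ? static_cast<size_t>(size) : 0);
    if (size > 0) {
        in.seekg(0, std::ios::beg);
        in.read(buf.data(), size);
    }
    return buf;
}
```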
… format (apache#2058)

Some remote commands' shell output is formatted as JSON, and some is not.

Change the output of register_int_command and register_bool_command to
JSON format to improve readability for programs (e.g., Python scripts).
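For illustration, a sketch of JSON-shaped output for an integer command (a made-up helper, not the actual register_int_command implementation):

```cpp
// Illustrative only: render a remote-command result as JSON so scripts can parse it.
#include <cstdint>
#include <sstream>
#include <string>

std::string int_command_result_as_json(const std::string &name, int64_t value)
{
    std::ostringstream out;
    out << "{\"" << name << "\":" << value << "}";
    return out.str();
}

// Example: int_command_result_as_json("example_flag", 5) produces {"example_flag":5},
// which a Python script can load with json.loads().
```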
…t for the metric (apache#2060)

apache#2061

The monitor of the block cache usage is server-level and created by
std::call_once in a replica-level object, running periodically to update
the block cache usage.

However, once the replica-level object is stopped, the server-level
monitor would be cancelled; as a result, the block cache usage would
never be updated.
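A sketch of the lifetime problem described above (names and the timer type are assumptions, not the actual Pegasus code): the task is created once per server, so a replica-level stop must not cancel it.

```cpp
// Illustrative only: a server-level periodic task created via std::call_once.
#include <atomic>
#include <memory>
#include <mutex>

struct periodic_task
{
    std::atomic<bool> cancelled{false};
    void cancel() { cancelled = true; }
};

std::once_flag g_monitor_once;                        // server-level
std::shared_ptr<periodic_task> g_block_cache_monitor; // server-level

class replica_level_object
{
public:
    void init()
    {
        // Created only once per server, even though many replicas call init().
        std::call_once(g_monitor_once, [] {
            g_block_cache_monitor = std::make_shared<periodic_task>();
        });
    }

    void stop()
    {
        // The bug: cancelling the server-level monitor here stops the block
        // cache usage metric for the whole server. The fix keeps it running.
        // g_block_cache_monitor->cancel();
    }
};
```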
apache#2054)

apache#2069

To create the checkpoint of a replica with 0 or 1 records immediately (see the sketch after the configuration below):

- set the min decree for the checkpoint to at least 1, which means the checkpoint
is created even if the replica is empty.
- for an empty replica, an empty write is committed to increase the
decree to at least 1, ensuring that the checkpoint can be created.
- the max decree in the rocksdb memtable (the last applied decree) is taken as
the min decree that must be covered by the checkpoint, which means
all of the data currently in rocksdb is included in the created
checkpoint.

The following configuration is added to control the retry interval for triggering
checkpoint:

```diff
[replication]
+ trigger_checkpoint_retry_interval_ms = 100
```
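A minimal sketch of the rule described in the list above (the helper names are assumptions, not the actual Pegasus implementation):

```cpp
// Illustrative only: derive the min decree a checkpoint must cover, and decide
// whether an empty write is needed first.
#include <algorithm>
#include <cstdint>

// At least 1, and at least the last decree applied to the rocksdb memtable.
int64_t min_checkpoint_decree(int64_t last_applied_decree)
{
    return std::max<int64_t>(1, last_applied_decree);
}

// An empty replica (decree 0) needs an empty write committed first so the
// decree reaches 1 and the checkpoint can be created.
bool need_empty_write(int64_t last_applied_decree)
{
    return last_applied_decree == 0;
}
```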
After GitHub Actions was forced to run on node20 [1], and node20 depends
on GLIBC_2.28, we have to run actions on newer operating systems which
have a higher built-in glibc.

Before this patch, we were using clang-format-3.9 to format C++ code,
but on a newer OS (say Ubuntu 22.04), clang-format-3.9 is
too old and it's difficult to install such an old version.

This patch bumps clang-format to 14 as the code format tool, and
updates related chores, such as updating the `cpp_clang_format_linter`
action job in .github/workflows/lint_and_test_cpp.yaml, removing the
clang-format-3.9 docker images, and adding more options in `.clang-format`
(almost all of the options keep their old or default values).

The main part of this patch is the C++ code update according to the
newer clang-format; the files are formatted automatically.

1. https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
…ckout from v3 to v4 (apache#2062)

After All GitHub Actions run on Node20 instead of Node16 by default [1],
this patch bumps the actions/checkout version from v3 to v4.

When the related yaml files changed, the client CIs were triggered, and they
exposed the newly introduced IDL structure `host_port` as unknown,
because it is not generated by thrift automatically and has to be implemented
manually. So this patch also simply implements the `host_port`
structure in python-client and nodejs-client.

It should be mentioned that only the `build_debug_on_centos7` is still using
`actions/checkout@v3`, see [2] for details.

1. https://github.blog/changelog/2024-03-07-github-actions-all-actions-will-run-on-node20-instead-of-node16-by-default/
2. apache#2065
The glibc version on ubuntu1804 and centos7 is lower than what node20 requires, so
we need to force the node version to 16 when running CI.

See more details: actions/checkout#1809
… Ubuntu 18.04 (apache#2072)

To solve problem in GitHub actions:
```
Run actions/checkout@v4
/usr/bin/docker exec  e63787d641b0351b6c65ad895ccd98db84d6796141ad087c4952bc7f68b03753 sh -c "cat /etc/*release | grep ^ID"
/__e/node[20](https://github.com/apache/incubator-pegasus/actions/runs/9908766114/job/27375256228#step:3:21)/bin/node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by /__e/node20/bin/node)
```
@ruojieranyishen
Collaborator Author

PR is broken, I opened a new one
