Shard based recovery #799

Closed
6 of 12 tasks
Rachelint opened this issue Mar 31, 2023 · 2 comments
Labels
feature New feature or request

Comments

@Rachelint
Contributor

Rachelint commented Mar 31, 2023

Describe This Problem

Currently, table recovery is performed at the table level, but the WAL is stored at the shard level.
As a result, recovery performance may be unsatisfactory, especially with the Kafka-based WAL.

Proposal

  • Split the actual table recovery out of Schema, and refactor the table engine
  • Implement shard-based table meta recovery
  • Implement shard-based table data recovery

1. Split the actual table recovery out of Schema, and refactor the table engine

We should begin by modifying the high-level interfaces (Schema and TableEngine) to adapt them to the new recovery process.

Currently, when opening the tables on a shard, the path through Schema and TableEngine opens each table individually.

To modify the interfaces above so that all the tables on a shard are opened together rather than individually, the most troublesome point is:

  • Tables on the same shard may belong to different schemas, so we are unable to add an API like open_tables_on_shard to Schema.

My solution:

In this stage, we still keep the original interface of TableEngine, and the path stays roughly the same as today. The follow-up steps are:

  • Refactor the TableEngine interface to support shard-level opening (a rough sketch follows this list)
    tbc...
  • Furthermore, split Schema and TableEngine completely?
    • Keep only register_table and unregister_table in Schema
    • Remove all table operations (create, drop, open, close) from Schema and call them directly on TableEngine
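
Below is a rough sketch of what a shard-level opening interface on `TableEngine` might look like. All the names here (`OpenShardRequest`, `ShardId`, `TableDef`, the error alias) are hypothetical and only for illustration; this is not the actual CeresDB interface, and it assumes the async-trait crate.

```rust
// Hypothetical sketch of a shard-level API on TableEngine; names and
// signatures are illustrative only, not the actual CeresDB interface.
use std::collections::HashMap;

pub type ShardId = u32;
pub type TableId = u64;
pub type GenericError = Box<dyn std::error::Error + Send + Sync>;

pub struct TableDef {
    pub id: TableId,
    // Tables on one shard may belong to different schemas, so the schema
    // name travels with each table instead of living on a Schema-level API.
    pub schema_name: String,
    pub table_name: String,
}

pub struct OpenShardRequest {
    pub shard_id: ShardId,
    // All tables on the shard, opened together in a single call.
    pub tables: Vec<TableDef>,
}

#[async_trait::async_trait]
pub trait TableEngine: Send + Sync {
    /// Open every table on the shard in one call, so the engine can replay
    /// the shard's WAL once instead of once per table.
    async fn open_shard(
        &self,
        request: OpenShardRequest,
    ) -> HashMap<TableId, Result<(), GenericError>>;

    /// Close every table on the shard.
    async fn close_shard(&self, shard_id: ShardId) -> Vec<Result<(), GenericError>>;
}
```

Carrying the schema name inside each table definition is what lets a shard-level call sidestep the per-Schema open_table path described above.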

2. Implement shard-based table meta recovery

  • Refactor the manifest module
    • Rename the original TableData to TableContext, and extract the members that need to be recovered from the manifest into a new TableData.
    • Register the TableData into the manifest when a table is created or opened, and unregister it when the table is dropped or closed.
    • When taking a snapshot in Manifest, just make use of the held TableDatas rather than scanning the persisted WALs.
  • Place the TableDatas into Manifest, so that memory and storage are updated in just one place (a sketch follows this list).
  • Shard-based manifest recovery
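
A minimal sketch of the idea that the Manifest owns the registered TableDatas and applies every edit to storage and memory in a single call. All names below (`MetaEdit`, `apply_edit`, the fields) are made up for illustration and are not the real manifest module API.

```rust
// Illustrative sketch only: the Manifest owns the registered TableDatas and
// applies each edit to both storage and memory in one call, so callers can
// no longer get the ordering of the two steps wrong.
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

pub type TableId = u64;

#[derive(Default)]
pub struct TableData {
    // Members recovered from the manifest (schema version, SST files, ...).
    pub version: u64,
}

pub enum MetaEdit {
    AddTable { table_id: TableId, data: TableData },
    DropTable { table_id: TableId },
    UpdateVersion { table_id: TableId, version: u64 },
}

#[derive(Default)]
pub struct Manifest {
    // In-memory view: the TableDatas registered on create/open table.
    tables: Mutex<HashMap<TableId, Arc<Mutex<TableData>>>>,
    // The manifest storage (WAL / snapshot store) is elided in this sketch.
}

impl Manifest {
    /// Persist the edit first, then apply it to the in-memory TableData.
    /// Both steps live in one place, so the order is enforced here.
    pub fn apply_edit(&self, edit: MetaEdit) -> std::io::Result<()> {
        self.persist(&edit)?;
        let mut tables = self.tables.lock().unwrap();
        match edit {
            MetaEdit::AddTable { table_id, data } => {
                tables.insert(table_id, Arc::new(Mutex::new(data)));
            }
            MetaEdit::DropTable { table_id } => {
                tables.remove(&table_id);
            }
            MetaEdit::UpdateVersion { table_id, version } => {
                if let Some(data) = tables.get(&table_id) {
                    data.lock().unwrap().version = version;
                }
            }
        }
        Ok(())
    }

    /// Taking a snapshot just serializes the held TableDatas instead of
    /// scanning the persisted WALs.
    pub fn snapshot(&self) -> Vec<(TableId, u64)> {
        self.tables
            .lock()
            .unwrap()
            .iter()
            .map(|(id, data)| (*id, data.lock().unwrap().version))
            .collect()
    }

    fn persist(&self, _edit: &MetaEdit) -> std::io::Result<()> {
        Ok(()) // storage write elided in this sketch
    }
}
```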

3. Implement shard-based table data recovery

4. Other

Additional Context

No response

@Rachelint added the feature label Mar 31, 2023
@Rachelint added and then removed the help wanted label Mar 31, 2023
@ShiKaiWi
Member

ShiKaiWi commented Apr 3, 2023

@Rachelint In #800, a separate runtime for recovery has been mentioned, and I guess it should be taken into consideration together with this.

Rachelint added a commit that referenced this issue May 9, 2023
## Which issue does this PR close?

Closes #
Part of #799 

## Rationale for this change
Currently, updating `TableData` in memory and storing it in the WAL are two separate operations. The order of these operations is maintained by the developer, which can lead to serious bugs if the developer is not careful when modifying the related code.

## What changes are included in this PR?
+ Place the table datas into the manifest.
+ Update them in both memory and storage within the same call.

## Are there any user-facing changes?
None.

## How is this change tested?
By unit tests.
Rachelint added a commit that referenced this issue May 22, 2023
## Which issue does this PR close?

Closes #
Part of #799 

## Rationale for this change
+ Add `open_shard` and `close_shard` methods to `TableEngine`.
+ Implement the methods above on demand.

## What changes are included in this PR?
See above.
## Are there any user-facing changes?
None.
## How is this change tested?
By unit tests.
jiacai2050 pushed a commit that referenced this issue Jun 15, 2023
## Rationale
Part of #799 

## Detailed Changes
- Define `WalReplayer` to carry out the replay work.
- Support both the `TableBased` (original) and `RegionBased` replay modes in `WalReplayer` (a sketch of the two modes follows below).
- Expose the related configs.

## Test Plan
- Modify the existing unit tests to cover the `RegionBased` WAL replay.
- Refactor the integration test to cover the recovery logic (TODO).
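
A sketch of the difference between the two replay modes; the enum and function shapes below are illustrative only and do not mirror the actual `WalReplayer` code.

```rust
// Illustrative sketch of the two replay strategies, not the actual WalReplayer.
pub enum ReplayMode {
    /// Replay table by table: one scan of the WAL per table on the shard.
    TableBased,
    /// Replay the shard's WAL region once and dispatch each log entry to the
    /// table it belongs to; avoids repeated scans, which matters for the
    /// Kafka-based WAL.
    RegionBased,
}

pub struct LogEntry {
    pub table_id: u64,
    pub sequence: u64,
    pub payload: Vec<u8>,
}

/// Region-based replay: a single pass over the shard's WAL, dispatching each
/// entry to the callback for its table.
pub fn replay_region_based<I, F>(entries: I, mut apply: F)
where
    I: IntoIterator<Item = LogEntry>,
    F: FnMut(u64, LogEntry),
{
    for entry in entries {
        apply(entry.table_id, entry);
    }
}
```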
Rachelint added a commit that referenced this issue Jun 16, 2023
## Rationale
Part of #799 
Currently we run the recovery test manually, which is tiresome; this PR adds it to the integration tests so that it runs automatically in CI.

## Detailed Changes
+ Add an integration test for recovery.
+ Add the above test to CI.

## Test Plan
None.
Rachelint added a commit that referenced this issue Jun 20, 2023
## Rationale
Part of #799 
We use `rskafka` as our Kafka client.
However, I found it will retry without limit even when the Kafka service is
unavailable (see influxdata/rskafka#65).
Worse, I found `rskafka` is almost no longer maintained...

As a quick fix, I forked it to support limited retry (a generic sketch of bounded retry follows below).

## Detailed Changes
+ Use the forked `rskafka` (which supports limited retry) instead.
+ Add more logs in the recovery path for better debugging.

## Test Plan
Tested manually.
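
The retry-limit idea itself is generic; the sketch below shows bounded retry with capped backoff. It assumes tokio for sleeping and is not the forked `rskafka` API, just an illustration of "limited retry".

```rust
// Generic bounded-retry sketch (assumes tokio); not the forked rskafka API.
use std::future::Future;
use std::time::Duration;

pub async fn with_limited_retry<T, E, F, Fut>(max_attempts: usize, mut op: F) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T, E>>,
{
    let mut attempt = 0;
    loop {
        match op().await {
            Ok(v) => return Ok(v),
            // Give up once the attempt budget is exhausted instead of retrying forever.
            Err(e) if attempt + 1 >= max_attempts => return Err(e),
            Err(_) => {
                attempt += 1;
                // Exponential backoff, capped at 10 seconds.
                let backoff = Duration::from_millis(100u64 << attempt.min(7))
                    .min(Duration::from_secs(10));
                tokio::time::sleep(backoff).await;
            }
        }
    }
}
```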
Rachelint added a commit that referenced this issue Jun 20, 2023
## Rationale
Part of #799 

## Detailed Changes
see title.

## Test Plan
None.
@jiacai2050
Contributor

Most tasks are finished, so closing.
