rbd: add a workaround to fix rbd snapshot scheduling #2656
Conversation
Moved to adding scheduling in the promote operation, as scheduling needs to be added when the image is promoted; this is the correct place to add it so that the schedule takes effect. Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Added a helper function to get the cluster ID. Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Currently we have a bug in the rbd mirror scheduling module: after failover and failback, the scheduling is not updated and the mirroring snapshots are not created periodically per the scheduling interval. This PR works around it with the following operations:
* Create a dummy (unique) image per cluster; this image should be easily identified.
* During a Promote operation on any image, enable mirroring on the dummy image. When we enable mirroring on the dummy image, the pool gets updated and the scheduling is reconfigured.
* During a Demote operation on any image, disable mirroring on the dummy image. The disable is needed so that mirroring can be enabled again when the next Promote request makes the image primary.
* When DR is no longer needed, this image needs to be cleaned up manually for now, as we don't want to add a check for deleting the dummy image in the existing DeleteVolume code path; it would impact the performance of the DeleteVolume workflow.
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
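The promote/demote toggling described above can be sketched in Go against a hypothetical mirroring interface. The real driver talks to librbd via go-ceph; `Mirrorer`, `promoteVolume`, `demoteVolume`, and the image names below are illustrative, not the actual ceph-csi code.

```go
package main

import "fmt"

// Mirrorer is a hypothetical abstraction over per-image mirroring
// operations; the real code uses the go-ceph librbd bindings.
type Mirrorer interface {
	EnableMirroring(image string) error
	DisableMirroring(image string) error
	Promote(image string) error
	Demote(image string) error
}

const dummyImage = "csi-vol-dummy" // unique, easily identified per cluster

// promoteVolume promotes an image, then enables mirroring on the dummy
// image so the pool refreshes and snapshot schedules are reconfigured.
func promoteVolume(m Mirrorer, image string) error {
	if err := m.Promote(image); err != nil {
		return err
	}
	return m.EnableMirroring(dummyImage)
}

// demoteVolume demotes an image, then disables mirroring on the dummy
// image so the next promote can enable it again and re-trigger the refresh.
func demoteVolume(m Mirrorer, image string) error {
	if err := m.Demote(image); err != nil {
		return err
	}
	return m.DisableMirroring(dummyImage)
}

// recorder is a test double that records the order of operations.
type recorder struct{ ops []string }

func (r *recorder) EnableMirroring(i string) error  { r.ops = append(r.ops, "enable:"+i); return nil }
func (r *recorder) DisableMirroring(i string) error { r.ops = append(r.ops, "disable:"+i); return nil }
func (r *recorder) Promote(i string) error          { r.ops = append(r.ops, "promote:"+i); return nil }
func (r *recorder) Demote(i string) error           { r.ops = append(r.ops, "demote:"+i); return nil }

func main() {
	r := &recorder{}
	promoteVolume(r, "pvc-1")
	demoteVolume(r, "pvc-1")
	fmt.Println(r.ops)
}
```

The key ordering is that the dummy-image toggle always follows the per-image promote/demote, so the schedule refresh happens after the image's new primary/secondary state is in place.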
Force-pushed from fa0efab to 59e649e.
LGTM. I will test the change.
if err != nil {
	return err
}
dummyImageOpsLock.Unlock()
Should the Unlock be on a defer? If any of the steps above returns an error, the unlock never happens and we won't be able to acquire the lock again.
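A minimal Go sketch of the pattern the reviewer is suggesting (the function name and failure flag are illustrative, not the actual ceph-csi code): defer the unlock immediately after locking, so every error path releases the mutex.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var dummyImageOpsLock sync.Mutex

// enableDummyMirroring sketches the suggested fix: deferring the unlock
// right after locking means early error returns cannot leak the lock.
// The fail flag stands in for a real operation that may error out.
func enableDummyMirroring(fail bool) error {
	dummyImageOpsLock.Lock()
	defer dummyImageOpsLock.Unlock() // runs on every return path

	if fail {
		return errors.New("enable mirroring failed")
	}
	return nil
}

func main() {
	_ = enableDummyMirroring(true) // error path: the lock is still released
	err := enableDummyMirroring(false)
	fmt.Println(err == nil) // true: the lock could be reacquired
}
```

With an explicit `Unlock()` placed after the error checks, as in the quoted diff, the first error return would leave the mutex held forever.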
Thanks, fixed now.
@ShyamsundarR I tested your patch and it works as expected. So it is fine other than the legitimate comment from @Madhu-1. We should be able to merge it once that is fixed.
Force-pushed from 59e649e to 70b0a17.
The dummy mirror image needs to be disabled and then re-enabled for mirroring, to ensure a newly promoted primary starts scheduling snapshots. Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
Force-pushed from 70b0a17 to e12152b.
Latest push was to add the "." at the end of the comment to address the linter failure.
@Mergifyio rebase
/retest all
@Mergifyio rebase
The encryption tests are failing; not related to this PR.
@humblec looks like the cephcsi CI is broken due to a vault issue. I am planning to merge this one manually. WDYT?
dummyImageCreated operation = "dummyImageCreated"
// Read write lock to ensure that only one operation is happening at a time.
operationLock = sync.Map{}
nit: extra line not needed, but we can live with that.
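The quoted snippet keeps a `sync.Map` to track which operation is in flight. A minimal illustrative sketch of that pattern (keying by cluster ID and the operation names here are assumptions, not the actual ceph-csi code):

```go
package main

import (
	"fmt"
	"sync"
)

// operationLock maps a cluster ID to the operation currently running
// against its dummy image, mimicking the sync.Map in the snippet above.
var operationLock = sync.Map{}

// tryStartOperation records op for clusterID unless another operation
// is already in flight, in which case it reports a conflict.
func tryStartOperation(clusterID, op string) error {
	if prev, loaded := operationLock.LoadOrStore(clusterID, op); loaded {
		return fmt.Errorf("operation %q already in progress on cluster %s", prev, clusterID)
	}
	return nil
}

// finishOperation clears the record so the next operation can start.
func finishOperation(clusterID string) {
	operationLock.Delete(clusterID)
}

func main() {
	fmt.Println(tryStartOperation("ceph-1", "dummyImageCreated")) // <nil>
	fmt.Println(tryStartOperation("ceph-1", "promote") != nil)    // true: conflict reported
	finishOperation("ceph-1")
	fmt.Println(tryStartOperation("ceph-1", "promote")) // <nil>
}
```

`LoadOrStore` makes the check-and-set atomic, which is why a plain map with a separate existence check would not be safe here.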
return nil, status.Errorf(codes.Internal, "failed to get mirroring mode %s", err.Error())
}

log.DebugLog(ctx, "Attempting to tickle dummy image for restarting RBD schedules")
Attempting/attempting
@humblec are you expecting a change for the above comments as part of this PR?
Thanks @BenamarMk for confirming.
@Madhu-1 yeah, the CI failure is on encryption; considering the urgency of this PR, I am fine to merge manually 👍. I have a couple of minor comments though.
@humblec can we address that in a follow-up PR, as it's not a blocker for this PR?
@Madhu-1 @ShyamsundarR compared to the previous PR, I couldn't get the corner case this PR fixed though. Appreciated if we can have a quick mention somewhere in this PR. It looks to me that the dummy image scheduling has now been set to a short/1-minute period for the other images to rearrange their reschedules (maybe more iterations for them to reschedule) compared to the previous version; I could be wrong.
sure, no worries, go ahead
The previous PR was not resetting the scheduling for all the rbd images; the enable and disable was done only in one PromoteVolume. To reset the scheduling, the enable and disable of mirroring needs to be done on all the images. More details are in this PR's description.
Tested an app with multiple PVCs (for the same app) and failover/relocation multiple times, and the fix worked.
interval, startTime := getSchedulingDetails(req.GetParameters())
if interval != admin.NoInterval {
	err = rbdVol.addSnapshotScheduling(interval, startTime)
@ShyamsundarR @Madhu-1 we have a bug in this line. rbdVol was reassigned to the dummy rbdVol object at line 527; in this line we still point to that same dummy rbdVol, so adding a schedule here adds it to the wrong image. I didn't catch this in my testing because my real image also has a schedule of 1m. If you change the real image's schedule to something else, you will see the dummy image's schedule set to "something else".
@Madhu-1 It is an easy fix. Can you make the change?
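The fix amounts to not reusing one variable for both the real and the dummy image. A hypothetical Go sketch of the two patterns (the `rbdVolume` type, fields, and method here are illustrative stand-ins for the real structs):

```go
package main

import "fmt"

// rbdVolume is a toy stand-in for the driver's volume struct.
type rbdVolume struct {
	name      string
	schedules []string
}

func (v *rbdVolume) addSnapshotScheduling(interval string) {
	v.schedules = append(v.schedules, interval)
}

func main() {
	rbdVol := &rbdVolume{name: "real-image"}

	// Buggy pattern (as described in the review comment): the same
	// variable is reassigned to the dummy image, so a later scheduling
	// call lands on the dummy instead of the real image:
	//   rbdVol = dummyVol
	//   rbdVol.addSnapshotScheduling("5m") // wrong image!

	// Fixed pattern: keep a distinct variable for the dummy image, so
	// scheduling still targets the real image.
	dummyVol := &rbdVolume{name: "csi-vol-dummy"}
	_ = dummyVol // dummy is only enabled/disabled, never scheduled

	rbdVol.addSnapshotScheduling("5m")
	fmt.Println(rbdVol.name, rbdVol.schedules) // real-image [5m]
}
```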
Got it. Tested the fix; making a PR.
We added a workaround for rbd scheduling by creating a dummy image in ceph#2656. With that fix we create a dummy image of the size of the first actual rbd image sent in the EnableVolumeReplication request; if the actual rbd image size is 1 TiB, we create a dummy image of 1 TiB, which is not good. Even though these are thin-provisioned rbd images, this causes issues for the transfer of the snapshot during the mirroring operation. This commit recreates the rbd image with a 1 MiB size, which is the smallest supported size in rbd. Signed-off-by: Madhu Rajanna <madhupr007@gmail.com> (cherry picked from commit 9a4533e)
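A sketch of the size check this commit describes, with assumed names (`needsRecreate` is illustrative; the real code stats and recreates the image through go-ceph):

```go
package main

import "fmt"

// dummyImageSize is the fixed 1 MiB size the commit switches to, the
// smallest size rbd supports, so the dummy image never inflates
// snapshot transfers the way a 1 TiB thin-provisioned dummy could.
const dummyImageSize uint64 = 1 << 20

// needsRecreate reports whether an existing dummy image (possibly
// created at the first real image's size by the earlier fix) must be
// deleted and recreated at the fixed 1 MiB size.
func needsRecreate(currentSize uint64) bool {
	return currentSize != dummyImageSize
}

func main() {
	fmt.Println(needsRecreate(1 << 40)) // true: old 1 TiB dummy gets recreated
	fmt.Println(needsRecreate(1 << 20)) // false: already 1 MiB, keep it
}
```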
To address the problem of snapshot schedules not being triggered for volumes that are promoted, a dummy image was disabled/enabled for replication. This was done as a workaround, because the promote operation was not triggering the schedules for the image being promoted. The bugs related to this have been fixed in the RBD mirroring functionality, and hence the workaround ceph#2656 can be removed from the code base. Ceph tracker: https://tracker.ceph.com/issues/53914 Signed-off-by: Madhu Rajanna <madhupr007@gmail.com> (cherry picked from commit 71e5b3f)
Currently, we have a bug in the rbd mirror scheduling module: after failover and failback, the scheduling is not updated and the mirroring snapshots are not created periodically per the scheduling interval. This PR works around it with the operations below.
Create a dummy (csi-vol-dummy-<ceph fsID>) image per cluster; this image should be easily identified.
During a Promote operation on any image, enable mirroring on the dummy image. When we enable mirroring on the dummy image, the pool gets updated and the scheduling is reconfigured.
During a Demote operation on any image, disable mirroring on the dummy image. The disable is needed so mirroring can be enabled again when we get the Promote request to make the image primary.
When DR is no longer needed, this image needs to be cleaned up manually for now, as we don't want to add a check in the existing DeleteVolume code path for deleting the dummy image, since it impacts the performance of the DeleteVolume workflow.
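The unique per-cluster name from the first step can be derived from the cluster's fsID; a minimal sketch following the `csi-vol-dummy-<ceph fsID>` pattern mentioned above (the helper name and sample fsID are assumptions):

```go
package main

import "fmt"

// dummyImageName builds the per-cluster dummy image name described
// above: a fixed prefix plus the Ceph cluster fsID, so the image is
// unique per cluster and easy to identify for manual cleanup.
func dummyImageName(fsID string) string {
	return fmt.Sprintf("csi-vol-dummy-%s", fsID)
}

func main() {
	// Sample fsID for illustration only.
	fmt.Println(dummyImageName("b9f3c9f6-3f0e-4b3a-8f6d-0c1a2b3c4d5e"))
	// prints "csi-vol-dummy-b9f3c9f6-3f0e-4b3a-8f6d-0c1a2b3c4d5e"
}
```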
Moved to adding scheduling in the promote operation, as scheduling needs to be added when the image is promoted; this is the correct place to add it so that the schedule takes effect.
More details at https://bugzilla.redhat.com/show_bug.cgi?id=2019161
Signed-off-by: Madhu Rajanna madhupr007@gmail.com
Signed-off-by: Shyamsundar Ranganathan srangana@redhat.com