CBG-4420: handle rev tree in history on processRev #7245

Closed
wants to merge 74 commits
Commits
d172158
Rebase of anemone on main, includes:
gregns1 Oct 11, 2023
c99f086
CBG-3209: Add cv index and retrieval for revision cache (#6491) (reba…
gregns1 Oct 26, 2023
2c91620
4.0: Bump SG API version (#6578)
bbrks Nov 15, 2023
c4edc7a
CBG-3503 Update HLV on import (#6572)
adamcfraser Nov 15, 2023
adb697c
CBG-3211: Add PutExistingRev for HLV (#6515)
gregns1 Nov 16, 2023
92454c1
CBG-3355: Add current version to channel cache (#6571)
gregns1 Nov 16, 2023
ec8b0dc
CBG-3607: disable the ability to set shared_bucket_access to false. I…
gregns1 Dec 4, 2023
6270a1c
CBG-3356: Add current version to ChangeEntry (#6575)
gregns1 Dec 7, 2023
e222bbf
Beryllium: Rename `SourceAndVersion` to `Version` / Improve HLV comme…
bbrks Dec 15, 2023
bd4a8d9
CBG-3354 Channel query support for current version (#6625)
adamcfraser Jan 9, 2024
d4209bf
CBG-3212: add api to fetch a document by its CV value (#6579)
gregns1 Jan 18, 2024
c5b4885
beryllium: fix misspell typos (#6648)
bbrks Jan 19, 2024
ecf62be
CBG-3254: CBL pull replication for v4 protocol (#6640)
gregns1 Jan 24, 2024
3f8bb58
CBG-3213 Version support for channel removals (#6650)
adamcfraser Jan 26, 2024
2b08fb4
CBG-3719: convert in memory format of HLV to match XDCR/CBL format (#…
gregns1 Feb 1, 2024
9a1330a
CBG-3788 Support HLV operations in BlipTesterClient (#6689)
adamcfraser Feb 16, 2024
c6466ff
CBG-3255 Replication protocol support for HLV - push replication (#6…
adamcfraser Mar 11, 2024
ec25815
CBG-3808: vrs -> ver to match XDCR format (#6723)
gregns1 Mar 12, 2024
59bb791
CBG-3797 Attachment handling for HLV push replication (#6702)
adamcfraser Mar 13, 2024
f673435
CBG-3764-anemone Correct error type checking (#6810)
adamcfraser May 7, 2024
46f4fec
CBG-3877 Persist HLV to _vv xattr (#6843)
adamcfraser May 25, 2024
c46bf7b
CBG-4177: remove no xattr CI tests (#7074)
gregns1 Aug 14, 2024
d8a404a
CBG-3917: pass revNo from cbgt into feed event (#7076)
gregns1 Aug 16, 2024
9b65f93
CBG-3993: use md5 hash for sourceID in HLV (#7073)
gregns1 Aug 16, 2024
7842a05
CBG-3715: populate pRev on mou (#7099)
gregns1 Sep 10, 2024
a2da319
CBG-4206: read/write attachments to global sync xattr (#7107)
gregns1 Sep 12, 2024
37fc177
CBG-4207: Attachment metadata migration on import (#7117)
gregns1 Sep 20, 2024
05028af
CBG-4209: Add test for blip doc update attachment metadata migration …
gregns1 Sep 20, 2024
37b7419
CBG-4253 create interfaces for integration testing (#7112)
torcolvin Sep 26, 2024
a12a948
Require CBS 7.6 to support anemone (#7138)
torcolvin Sep 30, 2024
5bf5fc6
CBG-3861 support updating vv on xdcr (#7118)
torcolvin Oct 2, 2024
024d333
CBG-4255 expand interface for CRUD operations (#7139)
torcolvin Oct 2, 2024
240c516
CBG-4247: refactor in memory format for hlv (#7136)
gregns1 Oct 2, 2024
ee7013d
CBG-4210: Attachment metadata migration background job (#7125)
gregns1 Oct 7, 2024
8784fef
Update minimum Couchbase Server version (#7145)
torcolvin Oct 8, 2024
7a39ffc
CBG-4271: re enable attachment tests for v4 protocol (#7144)
gregns1 Oct 10, 2024
1c08cf0
CBG-4261 have simple topologies working (#7152)
torcolvin Oct 11, 2024
5295ce4
CBG-3909: use deltas for pv and mv when persisting to the bucket (#7096)
gregns1 Oct 11, 2024
e29801e
CBG-4289 fix import CV value for HLV code (#7146)
torcolvin Oct 16, 2024
0fcdba8
CBG-4254 implement Couchbase Server peer (#7158)
torcolvin Oct 17, 2024
458508a
CBG-4254 implement Sync Gateway peer (#7160)
torcolvin Oct 17, 2024
0153238
CBG-4292 compute mouMatch on the metadataOnlyUpdate before it is modi…
torcolvin Oct 21, 2024
a143e33
CBG-4300 improve rosmar XDCR handling (#7162)
torcolvin Oct 21, 2024
ef289fe
CBG-4263 preserve _sync xattr on the target (#7171)
torcolvin Oct 22, 2024
999cf14
CBG-4212: Trigger attachment migration job upon db startup (#7151)
gregns1 Nov 5, 2024
502d732
CBG-4281 improve rosmar XDCR algorithm (#7177)
torcolvin Nov 5, 2024
e93896b
CBG-4213: add attachment migration api (#7183)
gregns1 Nov 8, 2024
5ebe44e
CBG-4263 create single actor tests (#7187)
torcolvin Nov 15, 2024
0098c6b
CBG-3736: delta sync for cv (#7141)
gregns1 Nov 20, 2024
3d52f7a
CBG-4369 optionally return CV on rest API (#7203)
torcolvin Nov 21, 2024
ba0ab1f
CBG-4365 rosmar xdcr, use _mou.cas for conflict resolution (#7206)
torcolvin Nov 25, 2024
5704a24
CBG-4383 handle no revpos in attachment block (#7210)
torcolvin Nov 26, 2024
fc26223
CBG-4329 use rudimentary backoff to wait for cbl mock version (#7212)
torcolvin Nov 26, 2024
ed2558c
Post-rebase fixes
adamcfraser Nov 28, 2024
69d21de
CBG-4369 add missing API docs
adamcfraser Nov 29, 2024
36cd025
Fix TestResyncMou post-rebase
adamcfraser Nov 29, 2024
14876c7
CBG-4303: conflicting writes muti actor tests, skipping failures (#7205)
gregns1 Dec 2, 2024
a8628e6
Change image for anemone default integration job (#7220)
bbrks Dec 2, 2024
262c23f
CBG-4302: add multi actor, non-conflicting write tests (#7224)
gregns1 Dec 2, 2024
09f5cb6
Cleanup topologytests (#7225)
torcolvin Dec 2, 2024
1a4559c
CBG-4317 uptake fix for TLS without certs for import feed (#7192)
torcolvin Dec 3, 2024
4fc9df0
CBG-4250 Add pv support to rosmar xdcr (#7230)
adamcfraser Dec 5, 2024
f6fb341
CBG-4366 enable resurrection tests (#7229)
torcolvin Dec 5, 2024
2e2afce
CBG-4250 Test fix for docs processed (#7232)
adamcfraser Dec 6, 2024
113a4ef
CBG-4265 avoid panic in rosmar xdcr tests (#7231)
torcolvin Dec 9, 2024
8668774
CBG-4389: extract cv from known revs and store backup rev by cv (#7237)
gregns1 Dec 11, 2024
ac11957
refactor topologytests (#7238)
torcolvin Dec 12, 2024
407a5e0
CBG-4408 disable CBS topologytests by default (#7240)
torcolvin Dec 12, 2024
dcc98f1
CBG-4331: legacy rev handling for version 4 replication protocol (#7239)
gregns1 Dec 13, 2024
cafd49e
refactor topologytests (#7241)
torcolvin Dec 13, 2024
ddc841e
CBG-4417 construct missing CV entry from HLV if not present (#7242)
torcolvin Dec 13, 2024
9da680a
CBG-4410 restructure multi actor non conflict tests (#7243)
torcolvin Dec 13, 2024
6690e7c
CBG-4420: handle rev tree in history on processRev
gregns1 Dec 16, 2024
a28a6f1
updates based off review + new tests
gregns1 Dec 17, 2024
19 changes: 0 additions & 19 deletions .github/workflows/ci.yml
@@ -134,25 +134,6 @@ jobs:
with:
test-results: test.json

test-no-xattrs:
runs-on: ubuntu-latest
env:
GOPRIVATE: github.com/couchbaselabs
SG_TEST_USE_XATTRS: false
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: 1.23.3
- name: Run Tests
run: go test -tags cb_sg_devmode -shuffle=on -timeout=30m -count=1 -json -v "./..." | tee test.json | jq -s -jr 'sort_by(.Package,.Time) | .[].Output | select (. != null )'
shell: bash
- name: Annotate Failures
if: always()
uses: guyarb/golang-test-annotations@v0.8.0
with:
test-results: test.json

python-format:
runs-on: ubuntu-latest
steps:
1 change: 1 addition & 0 deletions .golangci.yml
@@ -73,6 +73,7 @@ issues:
- path: (_test\.go|utilities_testing\.go)
linters:
- goconst
- prealloc
- path: (_test\.go|utilities_testing\.go)
linters:
- govet
4 changes: 2 additions & 2 deletions base/bucket_gocb_test.go
@@ -414,11 +414,11 @@ func TestXattrWriteCasSimple(t *testing.T) {
assert.Equal(t, Crc32cHashString(valBytes), macroBodyHashString)

// Validate against $document.value_crc32c
_, xattrs, _, err = dataStore.GetWithXattrs(ctx, key, []string{"$document"})
_, xattrs, _, err = dataStore.GetWithXattrs(ctx, key, []string{VirtualDocumentXattr})
require.NoError(t, err)

var retrievedVxattr map[string]interface{}
require.NoError(t, json.Unmarshal(xattrs["$document"], &retrievedVxattr))
require.NoError(t, json.Unmarshal(xattrs[VirtualDocumentXattr], &retrievedVxattr))

vxattrCrc32c, ok := retrievedVxattr["value_crc32c"].(string)
assert.True(t, ok, "Unable to retrieve virtual xattr crc32c as string")
9 changes: 8 additions & 1 deletion base/constants.go
@@ -135,14 +135,21 @@ const (
// SyncPropertyName is used when storing sync data inline in a document.
SyncPropertyName = "_sync"
// SyncXattrName is used when storing sync data in a document's xattrs.
SyncXattrName = "_sync"
SyncXattrName = "_sync"
VvXattrName = "_vv"
GlobalXattrName = "_globalSync"

// MouXattrName is used when storing metadata-only update information in a document's xattrs.
MouXattrName = "_mou"

// Intended to be used in Meta Map and related tests
MetaMapXattrsKey = "xattrs"

// VirtualXattrRevSeqNo is used to fetch the rev seq no from a document's virtual xattr
VirtualXattrRevSeqNo = "$document.revid"
// VirtualDocumentXattr is used to fetch a document's virtual xattr
VirtualDocumentXattr = "$document"

// Prefix for transaction metadata documents
TxnPrefix = "_txn:"

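
The constants added here name the HLV xattr (_vv), the global sync xattr (_globalSync), and the $document virtual xattr. A minimal sketch of reading them together, assuming the GetWithXattrs signature shown in bucket_gocb_test.go above; fetchMobileXattrs is a hypothetical helper, not part of this change:

package example

import (
	"context"
	"encoding/json"

	"github.com/couchbase/sync_gateway/base"
)

// fetchMobileXattrs fetches the sync, HLV (_vv), and global sync (_globalSync)
// xattrs for a document in one call, and parses each raw xattr into a map.
func fetchMobileXattrs(ctx context.Context, dataStore base.DataStore, key string) (map[string]map[string]interface{}, error) {
	names := []string{base.SyncXattrName, base.VvXattrName, base.GlobalXattrName}
	_, xattrs, _, err := dataStore.GetWithXattrs(ctx, key, names)
	if err != nil {
		return nil, err
	}
	parsed := make(map[string]map[string]interface{}, len(xattrs))
	for name, raw := range xattrs {
		var body map[string]interface{}
		if err := json.Unmarshal(raw, &body); err != nil {
			return nil, err
		}
		parsed[name] = body
	}
	return parsed, nil
}
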
104 changes: 90 additions & 14 deletions base/constants_syncdocs.go
@@ -11,6 +11,7 @@ licenses/APL2.txt.
package base

import (
"context"
"errors"
"fmt"
"strconv"
@@ -377,58 +378,133 @@ func CollectionSyncFunctionKeyWithGroupID(groupID string, scopeName, collectionN

// SyncInfo documents are stored in collections to identify the metadataID associated with sync metadata in that collection
type SyncInfo struct {
MetadataID string `json:"metadataID"`
MetadataID string `json:"metadataID,omitempty"`
MetaDataVersion string `json:"metadata_version,omitempty"`
}

// initSyncInfo attempts to initialize syncInfo for a datastore
// 1. If syncInfo doesn't exist, it is created for the specified metadataID
// 2. If syncInfo exists with a matching metadataID, returns requiresResync=false
// 3. If syncInfo exists with a non-matching metadataID, returns requiresResync=true
func InitSyncInfo(ds DataStore, metadataID string) (requiresResync bool, err error) {
// 4. If syncInfo exists with metaDataVersion greater than or equal to 4.0, returns requiresAttachmentMigration=false; otherwise returns requiresAttachmentMigration=true so that attachment metadata is migrated.
func InitSyncInfo(ctx context.Context, ds DataStore, metadataID string) (requiresResync bool, requiresAttachmentMigration bool, err error) {

var syncInfo SyncInfo
_, fetchErr := ds.Get(SGSyncInfo, &syncInfo)
if IsDocNotFoundError(fetchErr) {
if metadataID == "" {
return false, nil
return false, true, nil
}
newSyncInfo := &SyncInfo{MetadataID: metadataID}
_, addErr := ds.Add(SGSyncInfo, 0, newSyncInfo)
if IsCasMismatch(addErr) {
// attempt new fetch
_, fetchErr = ds.Get(SGSyncInfo, &syncInfo)
if fetchErr != nil {
return true, fmt.Errorf("Error retrieving syncInfo (after failed add): %v", fetchErr)
return true, true, fmt.Errorf("Error retrieving syncInfo (after failed add): %v", fetchErr)
}
} else if addErr != nil {
return true, fmt.Errorf("Error adding syncInfo: %v", addErr)
return true, true, fmt.Errorf("Error adding syncInfo: %v", addErr)
}
// successfully added
return false, nil
requiresAttachmentMigration, err = CompareMetadataVersion(ctx, syncInfo.MetaDataVersion)
if err != nil {
return syncInfo.MetadataID != metadataID, true, err
}
return false, requiresAttachmentMigration, nil
} else if fetchErr != nil {
return true, fmt.Errorf("Error retrieving syncInfo: %v", fetchErr)
return true, true, fmt.Errorf("Error retrieving syncInfo: %v", fetchErr)
}
// check for meta version: if the stored meta version is below 4.0, we need to run the migration job
requiresAttachmentMigration, err = CompareMetadataVersion(ctx, syncInfo.MetaDataVersion)
if err != nil {
return syncInfo.MetadataID != metadataID, true, err
}

return syncInfo.MetadataID != metadataID, nil
return syncInfo.MetadataID != metadataID, requiresAttachmentMigration, nil
}

// SetSyncInfo sets syncInfo in a DataStore to the specified metadataID
func SetSyncInfo(ds DataStore, metadataID string) error {
// SetSyncInfoMetadataID sets syncInfo in a DataStore to the specified metadataID, preserving metadata version if present
func SetSyncInfoMetadataID(ds DataStore, metadataID string) error {

// If the metadataID isn't defined, don't persist SyncInfo. Defensive handling for legacy use cases.
if metadataID == "" {
return nil
}
syncInfo := &SyncInfo{
MetadataID: metadataID,
_, err := ds.Update(SGSyncInfo, 0, func(current []byte) (updated []byte, expiry *uint32, delete bool, err error) {
var syncInfo SyncInfo
if current != nil {
parseErr := JSONUnmarshal(current, &syncInfo)
if parseErr != nil {
return nil, nil, false, parseErr
}
}
// if we have a metadataID to set, set it preserving the metadata version if present
syncInfo.MetadataID = metadataID
bytes, err := JSONMarshal(&syncInfo)
return bytes, nil, false, err
})
return err
}

// SetSyncInfoMetaVersion sets sync info in DataStore to specified metadata version, preserving metadataID if present
func SetSyncInfoMetaVersion(ds DataStore, metaVersion string) error {
if metaVersion == "" {
return nil
}
return ds.Set(SGSyncInfo, 0, nil, syncInfo)
_, err := ds.Update(SGSyncInfo, 0, func(current []byte) (updated []byte, expiry *uint32, delete bool, err error) {
var syncInfo SyncInfo
if current != nil {
parseErr := JSONUnmarshal(current, &syncInfo)
if parseErr != nil {
return nil, nil, false, parseErr
}
}
// if we have a meta version to set, set it preserving the metadata ID if present
syncInfo.MetaDataVersion = metaVersion
bytes, err := JSONMarshal(&syncInfo)
return bytes, nil, false, err
})
return err
}

// SerializeIfLonger returns name as a sha1 string if the length of the name is greater or equal to the length specificed. Otherwise, returns the original string.
// SerializeIfLonger returns name as a sha1 string if the length of the name is greater or equal to the length specified. Otherwise, returns the original string.
func SerializeIfLonger(name string, length int) string {
if len(name) < length {
return name
}
return Sha1HashString(name, "")
}

// CompareMetadataVersion builds a comparable build version from the meta version defined in syncInfo, then
// returns true if attachment migration is required, false if not.
func CompareMetadataVersion(ctx context.Context, metaVersion string) (bool, error) {
if metaVersion == "" {
// no meta version passed in, thus attachment migration should take place
return true, nil
}
syncInfoVersion, err := NewComparableBuildVersionFromString(metaVersion)
if err != nil {
return true, err
}
return CheckRequireAttachmentMigration(ctx, syncInfoVersion)
}

// CheckRequireAttachmentMigration will return true if current metaVersion < 4.0.0, else false
func CheckRequireAttachmentMigration(ctx context.Context, version *ComparableBuildVersion) (bool, error) {
if version == nil {
AssertfCtx(ctx, "failed to build comparable build version for syncInfo metaVersion")
return true, fmt.Errorf("corrupt syncInfo metaVersion value")
}
minVerStr := "4.0.0" // minimum meta version that needs to be defined for metadata migration. Any version less than this will require attachment migration
minVersion, err := NewComparableBuildVersionFromString(minVerStr)
if err != nil {
AssertfCtx(ctx, "failed to build comparable build version for minimum version for attachment migration")
return true, err
}

if minVersion.AtLeastMinorDowngrade(version) {
return true, nil
}
return false, nil
}
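
Taken together, InitSyncInfo, SetSyncInfoMetaVersion, and CheckRequireAttachmentMigration define a startup handshake: resync when the stored metadataID doesn't match, and migrate attachment metadata when the recorded metadata_version is missing or predates 4.0.0. A hedged sketch of a caller consuming the two flags; initCollectionSyncState and the migrateAttachments callback are illustrative, not part of this PR:

package example

import (
	"context"

	"github.com/couchbase/sync_gateway/base"
)

// initCollectionSyncState shows how the two flags returned by InitSyncInfo
// might be consumed at database startup.
func initCollectionSyncState(ctx context.Context, ds base.DataStore, metadataID string, migrateAttachments func(context.Context) error) error {
	requiresResync, requiresAttachmentMigration, err := base.InitSyncInfo(ctx, ds, metadataID)
	if err != nil {
		return err
	}
	if requiresResync {
		// syncInfo holds a different metadataID: the sync metadata in this
		// collection belongs to another database config and must be resynced.
	}
	if requiresAttachmentMigration {
		// metadata_version is absent or below 4.0.0: migrate attachment
		// metadata, then record the version so the job is not retriggered.
		if err := migrateAttachments(ctx); err != nil {
			return err
		}
		if err := base.SetSyncInfoMetaVersion(ds, "4.0.0"); err != nil {
			return err
		}
	}
	return nil
}
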
3 changes: 2 additions & 1 deletion base/dcp_common.go
@@ -522,12 +522,13 @@ func dcpKeyFilter(key []byte, metaKeys *MetadataKeys) bool {
}

// Makes a feedEvent that can be passed to a FeedEventCallbackFunc implementation
func makeFeedEvent(key []byte, value []byte, dataType uint8, cas uint64, expiry uint32, vbNo uint16, collectionID uint32, opcode sgbucket.FeedOpcode) sgbucket.FeedEvent {
func makeFeedEvent(key []byte, value []byte, dataType uint8, cas uint64, expiry uint32, vbNo uint16, collectionID uint32, revNo uint64, opcode sgbucket.FeedOpcode) sgbucket.FeedEvent {

// not currently doing rq.Extras handling (as in gocouchbase/upr_feed, makeUprEvent) as SG doesn't use
// expiry/flags information, and snapshot handling is done by cbdatasource and sent as
// SnapshotStart, SnapshotEnd
event := sgbucket.FeedEvent{
RevNo: revNo,
Opcode: opcode,
Key: key,
Value: value,
12 changes: 6 additions & 6 deletions base/dcp_dest.go
@@ -92,7 +92,7 @@ func (d *DCPDest) DataUpdate(partition string, key []byte, seq uint64,
if !dcpKeyFilter(key, d.metaKeys) {
return nil
}
event := makeFeedEventForDest(key, val, cas, partitionToVbNo(d.loggingCtx, partition), collectionIDFromExtras(extras), 0, 0, sgbucket.FeedOpMutation)
event := makeFeedEventForDest(key, val, cas, partitionToVbNo(d.loggingCtx, partition), collectionIDFromExtras(extras), 0, 0, 0, sgbucket.FeedOpMutation)
d.dataUpdate(seq, event)
return nil
}
@@ -116,7 +116,7 @@ func (d *DCPDest) DataUpdateEx(partition string, key []byte, seq uint64, val []b
if !ok {
return errors.New("Unable to cast extras of type DEST_EXTRAS_TYPE_GOCB_DCP to cbgt.GocbExtras")
}
event = makeFeedEventForDest(key, val, cas, partitionToVbNo(d.loggingCtx, partition), dcpExtras.CollectionId, dcpExtras.Expiry, dcpExtras.Datatype, sgbucket.FeedOpMutation)
event = makeFeedEventForDest(key, val, cas, partitionToVbNo(d.loggingCtx, partition), dcpExtras.CollectionId, dcpExtras.Expiry, dcpExtras.Datatype, dcpExtras.RevNo, sgbucket.FeedOpMutation)

}

@@ -131,7 +131,7 @@ func (d *DCPDest) DataDelete(partition string, key []byte, seq uint64,
return nil
}

event := makeFeedEventForDest(key, nil, cas, partitionToVbNo(d.loggingCtx, partition), collectionIDFromExtras(extras), 0, 0, sgbucket.FeedOpDeletion)
event := makeFeedEventForDest(key, nil, cas, partitionToVbNo(d.loggingCtx, partition), collectionIDFromExtras(extras), 0, 0, 0, sgbucket.FeedOpDeletion)
d.dataUpdate(seq, event)
return nil
}
@@ -154,7 +154,7 @@ func (d *DCPDest) DataDeleteEx(partition string, key []byte, seq uint64,
if !ok {
return errors.New("Unable to cast extras of type DEST_EXTRAS_TYPE_GOCB_DCP to cbgt.GocbExtras")
}
event = makeFeedEventForDest(key, dcpExtras.Value, cas, partitionToVbNo(d.loggingCtx, partition), dcpExtras.CollectionId, dcpExtras.Expiry, dcpExtras.Datatype, sgbucket.FeedOpDeletion)
event = makeFeedEventForDest(key, dcpExtras.Value, cas, partitionToVbNo(d.loggingCtx, partition), dcpExtras.CollectionId, dcpExtras.Expiry, dcpExtras.Datatype, dcpExtras.RevNo, sgbucket.FeedOpDeletion)

}
d.dataUpdate(seq, event)
@@ -247,8 +247,8 @@ func collectionIDFromExtras(extras []byte) uint32 {
return binary.LittleEndian.Uint32(extras[4:])
}

func makeFeedEventForDest(key []byte, val []byte, cas uint64, vbNo uint16, collectionID uint32, expiry uint32, dataType uint8, opcode sgbucket.FeedOpcode) sgbucket.FeedEvent {
return makeFeedEvent(key, val, dataType, cas, expiry, vbNo, collectionID, opcode)
func makeFeedEventForDest(key []byte, val []byte, cas uint64, vbNo uint16, collectionID uint32, expiry uint32, dataType uint8, revNo uint64, opcode sgbucket.FeedOpcode) sgbucket.FeedEvent {
return makeFeedEvent(key, val, dataType, cas, expiry, vbNo, collectionID, revNo, opcode)
}

// DCPLoggingDest wraps DCPDest to provide per-callback logging
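
For context, the revNo threaded through the Data* callbacks above originates in cbgt's DCP extras and is carried on sgbucket.FeedEvent. A minimal sketch of the resulting event shape; only the RevNo, Opcode, Key, and Value fields are visible in this diff, so the remaining field names are assumptions based on makeFeedEvent's parameters:

package example

import (
	sgbucket "github.com/couchbase/sg-bucket"
)

// newDeletionFeedEvent sketches a deletion event carrying the new RevNo
// (document revision sequence number) delivered via DCP extras (CBG-3917).
func newDeletionFeedEvent(key []byte, cas uint64, vbNo uint16, collectionID uint32, revNo uint64) sgbucket.FeedEvent {
	return sgbucket.FeedEvent{
		Opcode:       sgbucket.FeedOpDeletion,
		Key:          key,
		Cas:          cas,          // assumed field name
		VbNo:         vbNo,         // assumed field name
		CollectionID: collectionID, // assumed field name
		RevNo:        revNo,        // new in this change set
	}
}
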
2 changes: 1 addition & 1 deletion base/dcp_feed_type.go
@@ -296,7 +296,7 @@ func getCbgtCredentials(dbName string) (cbgtCreds, bool) {
return creds, found
}

// See the comment of cbgtRootCAsProvider for usage details.
// setCbgtRootCertsForBucket creates root certificates for a given bucket. If TLS should be used, this function must be called. If TLS certificate verification is skipped, this function should be called with a nil pool. See the comment of cbgtRootCAsProvider for usage details.
func setCbgtRootCertsForBucket(bucketUUID string, pool *x509.CertPool) {
cbgtGlobalsLock.Lock()
defer cbgtGlobalsLock.Unlock()
2 changes: 1 addition & 1 deletion base/dcp_receiver.go
@@ -26,7 +26,7 @@ const MemcachedDataTypeRaw = 0

// Make a feed event for a gomemcached request. Extracts expiry from extras
func makeFeedEventForMCRequest(rq *gomemcached.MCRequest, opcode sgbucket.FeedOpcode) sgbucket.FeedEvent {
return makeFeedEvent(rq.Key, rq.Body, rq.DataType, rq.Cas, ExtractExpiryFromDCPMutation(rq), rq.VBucket, 0, opcode)
return makeFeedEvent(rq.Key, rq.Body, rq.DataType, rq.Cas, ExtractExpiryFromDCPMutation(rq), rq.VBucket, 0, 0, opcode)
}

// ShardedImportDCPMetadata is an internal struct that is exposed to enable json marshaling, used by sharded import feed. It differs from DCPMetadata because it must match the private struct used by cbgt.metadata.
1 change: 1 addition & 0 deletions base/dcp_sharded.go
@@ -451,6 +451,7 @@ func (c *CbgtContext) Stop() {

func (c *CbgtContext) RemoveFeedCredentials(dbName string) {
removeCbgtCredentials(dbName)
// CBG-4394: removing root certs for the bucket should be done, but it is keyed based on the bucket UUID, and multiple dbs can use the same bucket
}

// Format of dest key for retrieval of import dest from cbgtDestFactories
2 changes: 2 additions & 0 deletions base/log_keys.go
@@ -53,6 +53,7 @@ const (
KeyReplicate
KeySync
KeySyncMsg
KeyVV
KeyWebSocket
KeyWebSocketFrame
KeySGTest
@@ -87,6 +88,7 @@ var (
KeyReplicate: "Replicate",
KeySync: "Sync",
KeySyncMsg: "SyncMsg",
KeyVV: "VV",
KeyWebSocket: "WS",
KeyWebSocketFrame: "WSFrame",
KeySGTest: "TEST",
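
The new KeyVV log key gives version-vector (HLV) work its own logging channel. A hedged usage sketch, assuming the base package's existing DebugfCtx helper and UD redaction wrapper; logVersionVector and its message are illustrative:

package example

import (
	"context"

	"github.com/couchbase/sync_gateway/base"
)

// logVersionVector logs an HLV update under the new KeyVV log key, which can
// be enabled independently of the other sync logging keys.
func logVersionVector(ctx context.Context, docID, cv string) {
	base.DebugfCtx(ctx, base.KeyVV, "updated version vector for doc %s: cv=%s", base.UD(docID), cv)
}
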
28 changes: 23 additions & 5 deletions base/main_test_bucket_pool.go
@@ -137,6 +137,15 @@ func NewTestBucketPoolWithOptions(ctx context.Context, bucketReadierFunc TBPBuck
}
tbp.skipMobileXDCR = !useMobileXDCR

// at least anemone release
if os.Getenv(tbpEnvAllowIncompatibleServerVersion) == "" && !ProductVersion.Less(&ComparableBuildVersion{major: 4}) {
overrideMsg := "Set " + tbpEnvAllowIncompatibleServerVersion + "=true to override this check."
// this check also covers BucketStoreFeatureMultiXattrSubdocOperations, which is Couchbase Server 7.6
if tbp.skipMobileXDCR {
tbp.Fatalf(ctx, "Sync Gateway %v requires mobile XDCR support, but Couchbase Server %v does not support it. Couchbase Server %s is required. %s", ProductVersion, tbp.cluster.version, firstServerVersionToSupportMobileXDCR, overrideMsg)
}
}

tbp.verbose.Set(tbpVerbose())

// Start up an async readier worker to process dirty buckets
@@ -450,6 +459,7 @@ func (tbp *TestBucketPool) setXDCRBucketSetting(ctx context.Context, bucket Buck

tbp.Logf(ctx, "Setting crossClusterVersioningEnabled=true")

// retry for 1 minute to get this bucket setting, MB-63675
store, ok := AsCouchbaseBucketStore(bucket)
if !ok {
tbp.Fatalf(ctx, "unable to get server management endpoints. Underlying bucket type was not GoCBBucket")
@@ -459,12 +469,20 @@ func (tbp *TestBucketPool) setXDCRBucketSetting(ctx context.Context, bucket Buck
posts.Add("enableCrossClusterVersioning", "true")

url := fmt.Sprintf("/pools/default/buckets/%s", store.GetName())
output, statusCode, err := store.MgmtRequest(ctx, http.MethodPost, url, "application/x-www-form-urlencoded", strings.NewReader(posts.Encode()))
// retry for 1 minute to get this bucket setting, MB-63675
_, err := RetryLoop(ctx, "setXDCRBucketSetting", func() (bool, error, interface{}) {
output, statusCode, err := store.MgmtRequest(ctx, http.MethodPost, url, "application/x-www-form-urlencoded", strings.NewReader(posts.Encode()))
if err != nil {
tbp.Fatalf(ctx, "request to mobile XDCR bucket setting failed, status code: %d error: %w output: %s", statusCode, err, string(output))
}
if statusCode != http.StatusOK {
err := fmt.Errorf("request to mobile XDCR bucket setting failed with status code, %d, output: %s", statusCode, string(output))
return true, err, nil
}
return false, nil, nil
}, CreateMaxDoublingSleeperFunc(200, 500, 500))
if err != nil {
tbp.Fatalf(ctx, "request to mobile XDCR bucket setting failed, status code: %d error: %v output: %s", statusCode, err, string(output))
}
if statusCode != http.StatusOK {
tbp.Fatalf(ctx, "request to mobile XDCR bucket setting failed with status code, %d, output: %s", statusCode, string(output))
tbp.Fatalf(ctx, "Couldn't set crossClusterVersioningEnabled: %v", err)
}
}

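
The RetryLoop wrapper added above retries the enableCrossClusterVersioning request on non-200 responses, sleeping up to 500ms between up to 200 attempts (per the "retry for 1 minute" note, MB-63675). A minimal sketch of the same pattern with the HTTP call abstracted behind a callback; setBucketSettingWithRetry and apply are illustrative:

package example

import (
	"context"
	"fmt"
	"net/http"

	"github.com/couchbase/sync_gateway/base"
)

// setBucketSettingWithRetry retries a bucket-setting request until it succeeds,
// treating non-200 responses as retryable and transport errors as fatal,
// mirroring the RetryLoop usage in setXDCRBucketSetting above.
func setBucketSettingWithRetry(ctx context.Context, apply func() (int, error)) error {
	_, err := base.RetryLoop(ctx, "setBucketSetting", func() (bool, error, interface{}) {
		statusCode, reqErr := apply()
		if reqErr != nil {
			return false, reqErr, nil // transport error: give up immediately
		}
		if statusCode != http.StatusOK {
			return true, fmt.Errorf("unexpected status code %d", statusCode), nil
		}
		return false, nil, nil
	}, base.CreateMaxDoublingSleeperFunc(200, 500, 500))
	return err
}
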
6 changes: 6 additions & 0 deletions base/main_test_bucket_pool_config.go
@@ -53,11 +53,17 @@ const (

tbpEnvUseDefaultCollection = "SG_TEST_USE_DEFAULT_COLLECTION"

// tbpEnvAllowIncompatibleServerVersion allows tests to run against a Couchbase Server version that is not presumed compatible with the running Sync Gateway version.
tbpEnvAllowIncompatibleServerVersion = "SG_TEST_SKIP_SERVER_VERSION_CHECK"

// wait this long when requesting a test bucket from the pool before giving up and failing the test.
waitForReadyBucketTimeout = time.Minute

// Creates buckets with a specific number of replicas
tbpEnvBucketNumReplicas = "SG_TEST_BUCKET_NUM_REPLICAS"

// Environment variable to specify the topology tests to run
TbpEnvTopologyTests = "SG_TEST_TOPOLOGY_TESTS"
)

// TestsUseNamedCollections returns true if the tests use named collections.
Expand Down