CBG-4420: handle rev tree in history on processRev #7245

Closed
wants to merge 74 commits
Changes from 1 commit
74 commits
d172158
Rebase of anemone on main, includes:
gregns1 Oct 11, 2023
c99f086
CBG-3209: Add cv index and retrieval for revision cache (#6491) (reba…
gregns1 Oct 26, 2023
2c91620
4.0: Bump SG API version (#6578)
bbrks Nov 15, 2023
c4edc7a
CBG-3503 Update HLV on import (#6572)
adamcfraser Nov 15, 2023
adb697c
CBG-3211: Add PutExistingRev for HLV (#6515)
gregns1 Nov 16, 2023
92454c1
CBG-3355: Add current version to channel cache (#6571)
gregns1 Nov 16, 2023
ec8b0dc
CBG-3607: disable the ability to set shared_bucket_access to false. I…
gregns1 Dec 4, 2023
6270a1c
CBG-3356: Add current version to ChangeEntry (#6575)
gregns1 Dec 7, 2023
e222bbf
Beryllium: Rename `SourceAndVersion` to `Version` / Improve HLV comme…
bbrks Dec 15, 2023
bd4a8d9
CBG-3354 Channel query support for current version (#6625)
adamcfraser Jan 9, 2024
d4209bf
CBG-3212: add api to fetch a document by its CV value (#6579)
gregns1 Jan 18, 2024
c5b4885
beryllium: fix misspell typos (#6648)
bbrks Jan 19, 2024
ecf62be
CBG-3254: CBL pull replication for v4 protocol (#6640)
gregns1 Jan 24, 2024
3f8bb58
CBG-3213 Version support for channel removals (#6650)
adamcfraser Jan 26, 2024
2b08fb4
CBG-3719: convert in memory format of HLV to match XDCR/CBL format (#…
gregns1 Feb 1, 2024
9a1330a
CBG-3788 Support HLV operations in BlipTesterClient (#6689)
adamcfraser Feb 16, 2024
c6466ff
CBG-3255 Replication protocol support for HLV - push replication (#6…
adamcfraser Mar 11, 2024
ec25815
CBG-3808: vrs -> ver to match XDCR format (#6723)
gregns1 Mar 12, 2024
59bb791
CBG-3797 Attachment handling for HLV push replication (#6702)
adamcfraser Mar 13, 2024
f673435
CBG-3764-anemone Correct error type checking (#6810)
adamcfraser May 7, 2024
46f4fec
CBG-3877 Persist HLV to _vv xattr (#6843)
adamcfraser May 25, 2024
c46bf7b
CBG-4177: remove no xattr CI tests (#7074)
gregns1 Aug 14, 2024
d8a404a
CBG-3917: pass revNo from cbgt into feed event (#7076)
gregns1 Aug 16, 2024
9b65f93
CBG-3993: use md5 hash for sourceID in HLV (#7073)
gregns1 Aug 16, 2024
7842a05
CBG-3715: populate pRev on mou (#7099)
gregns1 Sep 10, 2024
a2da319
CBG-4206: read/write attachments to global sync xattr (#7107)
gregns1 Sep 12, 2024
37fc177
CBG-4207: Attachment metadata migration on import (#7117)
gregns1 Sep 20, 2024
05028af
CBG-4209: Add test for blip doc update attachment metadata migration …
gregns1 Sep 20, 2024
37b7419
CBG-4253 create interfaces for integration testing (#7112)
torcolvin Sep 26, 2024
a12a948
Require CBS 7.6 to support anemone (#7138)
torcolvin Sep 30, 2024
5bf5fc6
CBG-3861 support updating vv on xdcr (#7118)
torcolvin Oct 2, 2024
024d333
CBG-4255 expand interface for CRUD operations (#7139)
torcolvin Oct 2, 2024
240c516
CBG-4247: refactor in memory format for hlv (#7136)
gregns1 Oct 2, 2024
ee7013d
CBG-4210: Attachment metadata migration background job (#7125)
gregns1 Oct 7, 2024
8784fef
Update minimum Couchbase Server version (#7145)
torcolvin Oct 8, 2024
7a39ffc
CBG-4271: re enable attachment tests for v4 protocol (#7144)
gregns1 Oct 10, 2024
1c08cf0
CBG-4261 have simple topologies working (#7152)
torcolvin Oct 11, 2024
5295ce4
CBG-3909: use deltas for pv and mv when persisting to the bucket (#7096)
gregns1 Oct 11, 2024
e29801e
CBG-4289 fix import CV value for HLV code (#7146)
torcolvin Oct 16, 2024
0fcdba8
CBG-4254 implement Couchbase Server peer (#7158)
torcolvin Oct 17, 2024
458508a
CBG-4254 implement Sync Gateway peer (#7160)
torcolvin Oct 17, 2024
0153238
CBG-4292 compute mouMatch on the metadataOnlyUpdate before it is modi…
torcolvin Oct 21, 2024
a143e33
CBG-4300 improve rosmar XDCR handling (#7162)
torcolvin Oct 21, 2024
ef289fe
CBG-4263 preserve _sync xattr on the target (#7171)
torcolvin Oct 22, 2024
999cf14
CBG-4212: Trigger attachment migration job upon db startup (#7151)
gregns1 Nov 5, 2024
502d732
CBG-4281 improve rosmar XDCR algorithm (#7177)
torcolvin Nov 5, 2024
e93896b
CBG-4213: add attachment migration api (#7183)
gregns1 Nov 8, 2024
5ebe44e
CBG-4263 create single actor tests (#7187)
torcolvin Nov 15, 2024
0098c6b
CBG-3736: delta sync for cv (#7141)
gregns1 Nov 20, 2024
3d52f7a
CBG-4369 optionally return CV on rest API (#7203)
torcolvin Nov 21, 2024
ba0ab1f
CBG-4365 rosmar xdcr, use _mou.cas for conflict resolution (#7206)
torcolvin Nov 25, 2024
5704a24
CBG-4383 handle no revpos in attachment block (#7210)
torcolvin Nov 26, 2024
fc26223
CBG-4329 use rudimentary backoff to wait for cbl mock version (#7212)
torcolvin Nov 26, 2024
ed2558c
Post-rebase fixes
adamcfraser Nov 28, 2024
69d21de
CBG-4369 add missing API docs
adamcfraser Nov 29, 2024
36cd025
Fix TestResyncMou post-rebase
adamcfraser Nov 29, 2024
14876c7
CBG-4303: conflicting writes muti actor tests, skipping failures (#7205)
gregns1 Dec 2, 2024
a8628e6
Change image for anemone default integration job (#7220)
bbrks Dec 2, 2024
262c23f
CBG-4302: add multi actor, non-conflicting write tests (#7224)
gregns1 Dec 2, 2024
09f5cb6
Cleanup topologytests (#7225)
torcolvin Dec 2, 2024
1a4559c
CBG-4317 uptake fix for TLS without certs for import feed (#7192)
torcolvin Dec 3, 2024
4fc9df0
CBG-4250 Add pv support to rosmar xdcr (#7230)
adamcfraser Dec 5, 2024
f6fb341
CBG-4366 enable resurrection tests (#7229)
torcolvin Dec 5, 2024
2e2afce
CBG-4250 Test fix for docs processed (#7232)
adamcfraser Dec 6, 2024
113a4ef
CBG-4265 avoid panic in rosmar xdcr tests (#7231)
torcolvin Dec 9, 2024
8668774
CBG-4389: extract cv from known revs and store backup rev by cv (#7237)
gregns1 Dec 11, 2024
ac11957
refactor topologytests (#7238)
torcolvin Dec 12, 2024
407a5e0
CBG-4408 disable CBS topologytests by default (#7240)
torcolvin Dec 12, 2024
dcc98f1
CBG-4331: legacy rev handling for version 4 replication protocol (#7239)
gregns1 Dec 13, 2024
cafd49e
refactor topologytests (#7241)
torcolvin Dec 13, 2024
ddc841e
CBG-4417 construct missing CV entry from HLV if not present (#7242)
torcolvin Dec 13, 2024
9da680a
CBG-4410 restructure multi actor non conflict tests (#7243)
torcolvin Dec 13, 2024
6690e7c
CBG-4420: handle rev tree in history on processRev
gregns1 Dec 16, 2024
a28a6f1
updates based off review + new tests
gregns1 Dec 17, 2024
CBG-4207: Attachment metadata migration on import (#7117)
* CBG-4207: have import feed migrate attachments from sync data to global sync data even if document doesn't need importing

* new comment

* updates for attachment compaction

* remove print line

* move code into else clause
gregns1 authored and bbrks committed Dec 5, 2024
commit 37fc177a6d0447c71d0ddf33872e4976b9c129d7
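The commit above moves attachment metadata out of the `_sync` xattr into a dedicated global xattr. A minimal, self-contained sketch of that move, using hypothetical simplified types and only the standard library (the real code uses Sync Gateway's `SyncData`/`GlobalSyncData` types and xattr writes):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// syncData mirrors the subset of Sync Gateway's sync metadata relevant here
// (hypothetical, simplified for illustration).
type syncData struct {
	Attachments map[string]any `json:"attachments,omitempty"`
	Rev         string         `json:"rev"`
}

// globalSyncData mirrors the global xattr's "attachments_meta" field.
type globalSyncData struct {
	Attachments map[string]any `json:"attachments_meta,omitempty"`
}

// migrateAttachments moves attachment metadata from the sync xattr blob into
// a global xattr blob and returns both re-marshalled documents.
func migrateAttachments(rawSync []byte) (newSync, newGlobal []byte, err error) {
	var sd syncData
	if err = json.Unmarshal(rawSync, &sd); err != nil {
		return nil, nil, err
	}
	gd := globalSyncData{Attachments: sd.Attachments}
	sd.Attachments = nil // cleared from sync data, as MigrateAttachmentMetadata does
	if newGlobal, err = json.Marshal(gd); err != nil {
		return nil, nil, err
	}
	if newSync, err = json.Marshal(sd); err != nil {
		return nil, nil, err
	}
	return newSync, newGlobal, nil
}

func main() {
	raw := []byte(`{"rev":"1-abc","attachments":{"hello.txt":{"length":11}}}`)
	s, g, err := migrateAttachments(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(s)) // attachments dropped from sync data
	fmt.Println(string(g)) // attachments now under attachments_meta
}
```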
37 changes: 25 additions & 12 deletions db/attachment_compaction.go
Original file line number Diff line number Diff line change
@@ -193,9 +193,9 @@ type AttachmentsMetaMap struct {
Attachments map[string]AttachmentsMeta `json:"_attachments"`
}

// AttachmentCompactionData struct to unmarshal a document sync data into in order to process attachments during mark
// AttachmentCompactionSyncData struct to unmarshal a document sync data into in order to process attachments during mark
// phase. Contains only what is necessary
type AttachmentCompactionData struct {
type AttachmentCompactionSyncData struct {
Attachments map[string]AttachmentsMeta `json:"attachments"`
Flags uint8 `json:"flags"`
History struct {
@@ -204,29 +204,42 @@ type AttachmentCompactionData struct {
} `json:"history"`
}

// getAttachmentSyncData takes the data type and data from the DCP feed and will return a AttachmentCompactionData
// AttachmentCompactionGlobalSyncData is to unmarshal a documents global xattr in order to process attachments during mark phase.
type AttachmentCompactionGlobalSyncData struct {
Attachments map[string]AttachmentsMeta `json:"attachments_meta"`
}

// getAttachmentSyncData takes the data type and data from the DCP feed and will return a AttachmentCompactionSyncData
// struct containing data needed to process attachments on a document.
func getAttachmentSyncData(dataType uint8, data []byte) (*AttachmentCompactionData, error) {
var attachmentData *AttachmentCompactionData
func getAttachmentSyncData(dataType uint8, data []byte) (*AttachmentCompactionSyncData, error) {
var attachmentSyncData *AttachmentCompactionSyncData
var attachmentGlobalSyncData AttachmentCompactionGlobalSyncData
var documentBody []byte

if dataType&base.MemcachedDataTypeXattr != 0 {
body, xattrs, err := sgbucket.DecodeValueWithXattrs([]string{base.SyncXattrName}, data)
body, xattrs, err := sgbucket.DecodeValueWithXattrs([]string{base.SyncXattrName, base.GlobalXattrName}, data)
if err != nil {
if errors.Is(err, sgbucket.ErrXattrInvalidLen) {
return nil, nil
}
return nil, fmt.Errorf("Could not parse DCP attachment sync data: %w", err)
}
err = base.JSONUnmarshal(xattrs[base.SyncXattrName], &attachmentData)
err = base.JSONUnmarshal(xattrs[base.SyncXattrName], &attachmentSyncData)
if err != nil {
return nil, err
}
if xattrs[base.GlobalXattrName] != nil && attachmentSyncData.Attachments == nil {
err = base.JSONUnmarshal(xattrs[base.GlobalXattrName], &attachmentGlobalSyncData)
if err != nil {
return nil, err
}
attachmentSyncData.Attachments = attachmentGlobalSyncData.Attachments
}
documentBody = body

} else {
type AttachmentDataSync struct {
AttachmentData AttachmentCompactionData `json:"_sync"`
AttachmentData AttachmentCompactionSyncData `json:"_sync"`
}
var attachmentDataSync AttachmentDataSync
err := base.JSONUnmarshal(data, &attachmentDataSync)
@@ -235,21 +248,21 @@ func getAttachmentSyncData(dataType uint8, data []byte) (*AttachmentCompactionDa
}

documentBody = data
attachmentData = &attachmentDataSync.AttachmentData
attachmentSyncData = &attachmentDataSync.AttachmentData
}

// If we've not yet found any attachments have a last effort attempt to grab it from the body for pre-2.5 documents
if len(attachmentData.Attachments) == 0 {
if len(attachmentSyncData.Attachments) == 0 {
attachmentMetaMap, err := checkForInlineAttachments(documentBody)
if err != nil {
return nil, err
}
if attachmentMetaMap != nil {
attachmentData.Attachments = attachmentMetaMap.Attachments
attachmentSyncData.Attachments = attachmentMetaMap.Attachments
}
}

return attachmentData, nil
return attachmentSyncData, nil
}

// checkForInlineAttachments will scan a body for "_attachments" for pre-2.5 attachments and will return any attachments
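The decode path above establishes a lookup order for attachment metadata: the `_sync` xattr wins, the global xattr is consulted only when sync data carries none, and an inline `_attachments` body is the last resort for pre-2.5 documents. A sketch of that precedence with a hypothetical helper (stdlib only):

```go
package main

import "fmt"

// resolveAttachments mirrors the precedence in getAttachmentSyncData:
// sync xattr first, then the global xattr, then the inline body map.
func resolveAttachments(syncAtts, globalAtts, inlineAtts map[string]any) map[string]any {
	if len(syncAtts) > 0 {
		return syncAtts
	}
	if len(globalAtts) > 0 {
		return globalAtts
	}
	return inlineAtts // may be nil for docs with no attachments at all
}

func main() {
	global := map[string]any{"hello.txt": map[string]any{"length": 11}}
	// Global xattr is used only because the sync xattr has no attachments.
	fmt.Println(len(resolveAttachments(nil, global, nil)))
}
```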
26 changes: 26 additions & 0 deletions db/crud.go
@@ -941,6 +941,32 @@ func (db *DatabaseCollectionWithUser) updateHLV(d *Document, docUpdateEvent DocU
return d, nil
}

// MigrateAttachmentMetadata will move any attachment metadata defined in sync data to global sync xattr
func (c *DatabaseCollectionWithUser) MigrateAttachmentMetadata(ctx context.Context, docID string, cas uint64, syncData *SyncData) error {
globalData := GlobalSyncData{
GlobalAttachments: syncData.Attachments,
}
globalXattr, err := base.JSONMarshal(globalData)
if err != nil {
return base.RedactErrorf("Failed to Marshal global sync data when attempting to migrate sync data attachments to global xattr with id: %s. Error: %v", base.UD(docID), err)
}
syncData.Attachments = nil
rawSyncXattr, err := base.JSONMarshal(*syncData)
if err != nil {
return base.RedactErrorf("Failed to Marshal sync data when attempting to migrate sync data attachments to global xattr with id: %s. Error: %v", base.UD(docID), err)
}

// build macro expansion for sync data. This will avoid the update to xattrs causing an extra import event (i.e. sync cas will be == to doc cas)
opts := &sgbucket.MutateInOptions{}
spec := macroExpandSpec(base.SyncXattrName)
opts.MacroExpansion = spec
opts.PreserveExpiry = true // if doc has expiry, we should preserve this

updatedXattr := map[string][]byte{base.SyncXattrName: rawSyncXattr, base.GlobalXattrName: globalXattr}
_, err = c.dataStore.UpdateXattrs(ctx, docID, 0, cas, updatedXattr, opts)
return err
}

// Updates or creates a document.
// The new body's BodyRev property must match the current revision's, if any.
func (db *DatabaseCollectionWithUser) Put(ctx context.Context, docid string, body Body) (newRevID string, doc *Document, err error) {
2 changes: 1 addition & 1 deletion db/import.go
@@ -97,7 +97,7 @@ func (db *DatabaseCollectionWithUser) ImportDoc(ctx context.Context, docid strin
existingBucketDoc.Xattrs[base.MouXattrName], err = base.JSONMarshal(existingDoc.metadataOnlyUpdate)
}
} else {
existingBucketDoc.Body, existingBucketDoc.Xattrs[base.SyncXattrName], existingBucketDoc.Xattrs[base.VvXattrName], existingBucketDoc.Xattrs[base.MouXattrName], _, err = existingDoc.MarshalWithXattrs()
existingBucketDoc.Body, existingBucketDoc.Xattrs[base.SyncXattrName], existingBucketDoc.Xattrs[base.VvXattrName], existingBucketDoc.Xattrs[base.MouXattrName], existingBucketDoc.Xattrs[base.GlobalXattrName], err = existingDoc.MarshalWithXattrs()
}
}

9 changes: 8 additions & 1 deletion db/import_listener.go
@@ -190,13 +190,13 @@ func (il *importListener) ImportFeedEvent(ctx context.Context, collection *Datab
}
}

docID := string(event.Key)
// If syncData is nil, or if this was not an SG write, attempt to import
if syncData == nil || !isSGWrite {
isDelete := event.Opcode == sgbucket.FeedOpDeletion
if isDelete {
rawBody = nil
}
docID := string(event.Key)

// last attempt to exit processing if the importListener has been closed before attempting to write to the bucket
select {
@@ -222,6 +222,13 @@ func (il *importListener) ImportFeedEvent(ctx context.Context, collection *Datab
base.DebugfCtx(ctx, base.KeyImport, "Did not import doc %q - external update will not be accessible via Sync Gateway. Reason: %v", base.UD(docID), err)
}
}
} else if syncData != nil && syncData.Attachments != nil {
base.DebugfCtx(ctx, base.KeyImport, "Attachment metadata found in sync data for doc with id %s, migrating attachment metadata", base.UD(docID))
// we have attachments to migrate
err := collection.MigrateAttachmentMetadata(ctx, docID, event.Cas, syncData)
if err != nil {
base.WarnfCtx(ctx, "error migrating attachment metadata from sync data to global sync for doc %s. Error: %v", base.UD(docID), err)
}
}
}
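The branch added to `ImportFeedEvent` above yields three outcomes: documents needing import take the full import path (where migration happens during marshalling), documents that are already current but still carry attachment metadata in sync data get migration only, and everything else is untouched. A sketch of that decision table (hypothetical names):

```go
package main

import "fmt"

type importAction int

const (
	actionImport      importAction = iota // full import path; migration happens during marshal
	actionMigrateOnly                     // doc is current, but sync data still holds attachment metadata
	actionNone
)

// classifyFeedEvent mirrors the if/else-if structure added to ImportFeedEvent.
func classifyFeedEvent(hasSyncData, isSGWrite, hasSyncAttachments bool) importAction {
	if !hasSyncData || !isSGWrite {
		return actionImport
	}
	if hasSyncAttachments {
		return actionMigrateOnly
	}
	return actionNone
}

func main() {
	fmt.Println(classifyFeedEvent(true, true, true) == actionMigrateOnly)
	fmt.Println(classifyFeedEvent(true, false, false) == actionImport)
	fmt.Println(classifyFeedEvent(true, true, false) == actionNone)
}
```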

25 changes: 25 additions & 0 deletions db/util_testing.go
@@ -761,3 +761,28 @@ func RetrieveDocRevSeqNo(t *testing.T, docxattr []byte) uint64 {
require.NoError(t, err)
return revNo
}

// MoveAttachmentXattrFromGlobalToSync is a test only function that will move any defined attachment metadata in global xattr to sync data xattr
func MoveAttachmentXattrFromGlobalToSync(t *testing.T, ctx context.Context, docID string, cas uint64, value, syncXattr []byte, attachments AttachmentsMeta, macroExpand bool, dataStore base.DataStore) {
var docSync SyncData
err := base.JSONUnmarshal(syncXattr, &docSync)
require.NoError(t, err)
docSync.Attachments = attachments

opts := &sgbucket.MutateInOptions{}
// this should be true for cases we want to move the attachment metadata without causing a new import feed event
if macroExpand {
spec := macroExpandSpec(base.SyncXattrName)
opts.MacroExpansion = spec
} else {
opts = nil
docSync.Cas = ""
}

newSync, err := base.JSONMarshal(docSync)
require.NoError(t, err)

// change this to update xattr
_, err = dataStore.WriteWithXattrs(ctx, docID, 0, cas, value, map[string][]byte{base.SyncXattrName: newSync}, []string{base.GlobalXattrName}, opts)
require.NoError(t, err)
}
232 changes: 232 additions & 0 deletions rest/importtest/import_test.go
@@ -2447,3 +2447,235 @@ func TestPrevRevNoPopulationImportFeed(t *testing.T) {
assert.Equal(t, revNo-1, mou.PreviousRevSeqNo)

}

// TestMigrationOfAttachmentsOnImport:
// - Create a doc and move the attachment metadata from global xattr to sync data xattr in a way that when the doc
// arrives over import feed it will be determined that it doesn't require import
// - Wait for the doc to arrive over import feed and assert even though the doc is not imported it will still get
// attachment metadata migrated from sync data to global xattr
// - Create a doc and move the attachment metadata from global xattr to sync data xattr in a way that when the doc
// arrives over import feed it will be determined that it does require import
// - Wait for the doc to arrive over the import feed and assert that once doc was imported the attachment metadata
// was migrated from sync data xattr to global xattr
func TestMigrationOfAttachmentsOnImport(t *testing.T) {
base.SkipImportTestsIfNotEnabled(t)

rtConfig := rest.RestTesterConfig{
DatabaseConfig: &rest.DatabaseConfig{DbConfig: rest.DbConfig{
AutoImport: true,
}},
}
rt := rest.NewRestTester(t, &rtConfig)
defer rt.Close()
dataStore := rt.GetSingleDataStore()
ctx := base.TestCtx(t)

// add new doc to test a doc arriving import feed that doesn't need importing still has attachment migration take place
key := "doc1"
body := `{"test": true, "_attachments": {"hello.txt": {"data":"aGVsbG8gd29ybGQ="}}}`
rt.PutDoc(key, body)

// grab defined attachment metadata to move to sync data
value, xattrs, cas, err := dataStore.GetWithXattrs(ctx, key, []string{base.SyncXattrName, base.GlobalXattrName})
require.NoError(t, err)
syncXattr, ok := xattrs[base.SyncXattrName]
require.True(t, ok)
globalXattr, ok := xattrs[base.GlobalXattrName]
require.True(t, ok)

var attachs db.GlobalSyncData
err = base.JSONUnmarshal(globalXattr, &attachs)
require.NoError(t, err)

db.MoveAttachmentXattrFromGlobalToSync(t, ctx, key, cas, value, syncXattr, attachs.GlobalAttachments, true, dataStore)

// retry loop to wait for import event to arrive over dcp, as doc won't be 'imported' we can't wait for import stat
var retryXattrs map[string][]byte
err = rt.WaitForCondition(func() bool {
retryXattrs, _, err = dataStore.GetXattrs(ctx, key, []string{base.SyncXattrName, base.GlobalXattrName})
require.NoError(t, err)
_, ok := retryXattrs[base.GlobalXattrName]
return ok
})
require.NoError(t, err)

syncXattr, ok = retryXattrs[base.SyncXattrName]
require.True(t, ok)
globalXattr, ok = retryXattrs[base.GlobalXattrName]
require.True(t, ok)

// empty global sync,
attachs = db.GlobalSyncData{}
err = base.JSONUnmarshal(globalXattr, &attachs)
require.NoError(t, err)
var syncData db.SyncData
err = base.JSONUnmarshal(syncXattr, &syncData)
require.NoError(t, err)

// assert that the attachment metadata has been moved
assert.NotNil(t, attachs.GlobalAttachments)
assert.Nil(t, syncData.Attachments)
att := attachs.GlobalAttachments["hello.txt"].(map[string]interface{})
assert.Equal(t, float64(11), att["length"])

// assert that no import took place
base.RequireWaitForStat(t, func() int64 {
return rt.GetDatabase().DbStats.SharedBucketImportStats.ImportCount.Value()
}, 0)

// add new doc to test import of doc over feed moves attachments
key = "doc2"
body = `{"test": true, "_attachments": {"hello.txt": {"data":"aGVsbG8gd29ybGQ="}}}`
rt.PutDoc(key, body)

_, xattrs, cas, err = dataStore.GetWithXattrs(ctx, key, []string{base.SyncXattrName, base.GlobalXattrName})
require.NoError(t, err)

syncXattr, ok = xattrs[base.SyncXattrName]
require.True(t, ok)
globalXattr, ok = xattrs[base.GlobalXattrName]
require.True(t, ok)
// grab defined attachment metadata to move to sync data
attachs = db.GlobalSyncData{}
err = base.JSONUnmarshal(globalXattr, &attachs)
require.NoError(t, err)

// change doc body to trigger import on feed
value = []byte(`{"test": "doc"}`)
db.MoveAttachmentXattrFromGlobalToSync(t, ctx, key, cas, value, syncXattr, attachs.GlobalAttachments, false, dataStore)

// Wait for import
base.RequireWaitForStat(t, func() int64 {
return rt.GetDatabase().DbStats.SharedBucketImportStats.ImportCount.Value()
}, 1)

// grab the sync and global xattr from doc2
xattrs, _, err = dataStore.GetXattrs(ctx, key, []string{base.SyncXattrName, base.GlobalXattrName})
require.NoError(t, err)
syncXattr, ok = xattrs[base.SyncXattrName]
require.True(t, ok)
globalXattr, ok = xattrs[base.GlobalXattrName]
require.True(t, ok)

err = base.JSONUnmarshal(globalXattr, &attachs)
require.NoError(t, err)
syncData = db.SyncData{}
err = base.JSONUnmarshal(syncXattr, &syncData)
require.NoError(t, err)

// assert that the attachment metadata has been moved
assert.NotNil(t, attachs.GlobalAttachments)
assert.Nil(t, syncData.Attachments)
att = attachs.GlobalAttachments["hello.txt"].(map[string]interface{})
assert.Equal(t, float64(11), att["length"])
}

// TestMigrationOfAttachmentsOnDemandImport:
// - Create a doc and move the attachment metadata from global xattr to sync data xattr
// - Trigger on demand import for get
// - Assert that the attachment metadata is migrated from sync data xattr to global sync xattr
// - Create a new doc and move the attachment metadata from global xattr to sync data xattr
// - Trigger an on demand import for write
// - Assert that the attachment metadata is migrated from sync data xattr to global sync xattr
func TestMigrationOfAttachmentsOnDemandImport(t *testing.T) {
base.SkipImportTestsIfNotEnabled(t)

rtConfig := rest.RestTesterConfig{
DatabaseConfig: &rest.DatabaseConfig{DbConfig: rest.DbConfig{
AutoImport: false, // avoid anything arriving over import feed for this test
}},
}
rt := rest.NewRestTester(t, &rtConfig)
defer rt.Close()
dataStore := rt.GetSingleDataStore()
ctx := base.TestCtx(t)

key := "doc1"
body := `{"test": true, "_attachments": {"hello.txt": {"data":"aGVsbG8gd29ybGQ="}}}`
rt.PutDoc(key, body)

_, xattrs, cas, err := dataStore.GetWithXattrs(ctx, key, []string{base.SyncXattrName, base.GlobalXattrName})
require.NoError(t, err)
syncXattr, ok := xattrs[base.SyncXattrName]
require.True(t, ok)
globalXattr, ok := xattrs[base.GlobalXattrName]
require.True(t, ok)

// grab defined attachment metadata to move to sync data
var attachs db.GlobalSyncData
err = base.JSONUnmarshal(globalXattr, &attachs)
require.NoError(t, err)

value := []byte(`{"update": "doc"}`)
db.MoveAttachmentXattrFromGlobalToSync(t, ctx, key, cas, value, syncXattr, attachs.GlobalAttachments, false, dataStore)

// on demand import for get
_, _ = rt.GetDoc(key)

xattrs, _, err = dataStore.GetXattrs(ctx, key, []string{base.SyncXattrName, base.GlobalXattrName})
require.NoError(t, err)

syncXattr, ok = xattrs[base.SyncXattrName]
require.True(t, ok)
globalXattr, ok = xattrs[base.GlobalXattrName]
require.True(t, ok)

// empty global sync,
attachs = db.GlobalSyncData{}

err = base.JSONUnmarshal(globalXattr, &attachs)
require.NoError(t, err)
var syncData db.SyncData
err = base.JSONUnmarshal(syncXattr, &syncData)
require.NoError(t, err)

// assert that the attachment metadata has been moved
assert.NotNil(t, attachs.GlobalAttachments)
assert.Nil(t, syncData.Attachments)
att := attachs.GlobalAttachments["hello.txt"].(map[string]interface{})
assert.Equal(t, float64(11), att["length"])

key = "doc2"
body = `{"test": true, "_attachments": {"hello.txt": {"data":"aGVsbG8gd29ybGQ="}}}`
rt.PutDoc(key, body)

_, xattrs, cas, err = dataStore.GetWithXattrs(ctx, key, []string{base.SyncXattrName, base.GlobalXattrName})
require.NoError(t, err)
syncXattr, ok = xattrs[base.SyncXattrName]
require.True(t, ok)
globalXattr, ok = xattrs[base.GlobalXattrName]
require.True(t, ok)

// grab defined attachment metadata to move to sync data
attachs = db.GlobalSyncData{}
err = base.JSONUnmarshal(globalXattr, &attachs)
require.NoError(t, err)
value = []byte(`{"update": "doc"}`)
db.MoveAttachmentXattrFromGlobalToSync(t, ctx, key, cas, value, syncXattr, attachs.GlobalAttachments, false, dataStore)

// trigger on demand import for write
resp := rt.SendAdminRequest(http.MethodPut, "/{{.keyspace}}/doc2", `{}`)
rest.RequireStatus(t, resp, http.StatusConflict)

// assert that the attachments metadata is migrated
xattrs, _, err = dataStore.GetXattrs(ctx, key, []string{base.SyncXattrName, base.GlobalXattrName})
require.NoError(t, err)
syncXattr, ok = xattrs[base.SyncXattrName]
require.True(t, ok)
globalXattr, ok = xattrs[base.GlobalXattrName]
require.True(t, ok)

// empty global sync,
attachs = db.GlobalSyncData{}
err = base.JSONUnmarshal(globalXattr, &attachs)
require.NoError(t, err)
syncData = db.SyncData{}
err = base.JSONUnmarshal(syncXattr, &syncData)
require.NoError(t, err)

// assert that the attachment metadata has been moved
assert.NotNil(t, attachs.GlobalAttachments)
assert.Nil(t, syncData.Attachments)
att = attachs.GlobalAttachments["hello.txt"].(map[string]interface{})
assert.Equal(t, float64(11), att["length"])
}