CouchDB replication not working - nano module unable to create replication job in _replicator database #290

CouchDB replication is not working: the nano module is unable to create a replication job in the `_replicator` database. Previously, with nano v4.1.1, this worked; the current version returns a positive response, but it seems no replication job is created. Is this an intentional change in a new nano release?

In v4.1.1 the `replicateDb` function in nano returned something like `return relax({db: "_replicator", body: opts, method: "POST"}, callback);`, but in the current version it reads `return relax({db: '_replicate', body: opts, method: 'POST'}, callback);`. I am sure this change is what causes the replication job creation failure.
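To make the endpoint difference concrete, a minimal sketch using nano's low-level `request` helper (database names `asd` and `efg` are placeholders taken from later in this thread): a POST to `_replicate` runs a transient replication that leaves no job document behind, while a POST to `_replicator` stores a document from which CouchDB schedules a persistent job.

```js
var nano = require('nano')('http://localhost:5984')

// Transient replication (what the quoted replicateDb currently does):
// CouchDB performs the copy but persists no job document.
nano.request({
  db: '_replicate',
  method: 'POST',
  body: { source: 'asd', target: 'efg' }
}, function (error, response) {
  console.log('transient:', error, response)
})

// Persistent replication: the document is stored in the _replicator
// database, survives restarts, and appears in _active_tasks.
nano.request({
  db: '_replicator',
  method: 'POST',
  body: { source: 'asd', target: 'efg', continuous: true }
}, function (error, response) {
  console.log('persistent:', error, response)
})
```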
Looking at the code of v4.1.1 (and also v4.0.0), nano has even then used `return relax({db: "_replicate", body: opts, method: "POST"}, callback);`. Does the replication show up under `_active_tasks`?
Actually, if I try to use the nano module with this `_replicate` endpoint, the replication jobs are not created, and nothing shows up in `_active_tasks` either. But if I change the endpoint to `_replicator`, it works as expected: the replication job entry gets created under the `_replicator` database, and I can see the same replication job in `_active_tasks` as well. Could you please help me figure out what's going wrong?
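For reference, a minimal sketch of the workaround described above, assuming local databases named `asd` and `efg` (the document id `asd_to_efg` is illustrative): instead of POSTing to `_replicate`, insert a document into the `_replicator` database so CouchDB persists the job.

```js
var nano = require('nano')('http://localhost:5984')
var replicator = nano.use('_replicator')

// Writing a document into _replicator makes CouchDB schedule a persistent
// replication job, which then also shows up in _active_tasks.
// Note: writing to _replicator typically requires admin credentials.
replicator.insert({
  source: 'http://localhost:5984/asd',
  target: 'http://localhost:5984/efg',
  create_target: true,
  continuous: true
}, 'asd_to_efg', function (error, response) {
  // response contains the id/rev of the stored replication document
  console.log(error, response)
})
```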
I have checked it again with the current nano v6.1.5 and the following code:

```js
var nano = require('nano')('http://localhost:5984')
nano.db.replicate('asd', 'efg', { create_target: true }, function(error, response) {
  console.log(error, response)
})
```

The response was:

```js
{ ok: true,
  session_id: '89a68ff7c703e2cbda76d67108a8326e',
  source_last_seq: 4,
  replication_id_version: 3,
  history:
   [ { session_id: '89a68ff7c703e2cbda76d67108a8326e',
       start_time: 'Tue, 01 Sep 2015 08:15:45 GMT',
       end_time: 'Tue, 01 Sep 2015 08:15:46 GMT',
       start_last_seq: 0,
       end_last_seq: 4,
       recorded_seq: 4,
       missing_checked: 2,
       missing_found: 2,
       docs_read: 2,
       docs_written: 2,
       doc_write_failures: 0 } ] }
```

indicating a successful replication.
I have also issued a continuous replication:

```js
nano.db.replicate('asd', 'efg', { continuous: true }, function(error, response) {
  console.log(error, response)
})
```

with the response:

```js
{ ok: true,
  _local_id: '2971ed4ac2a08832adb79f26f051c8fa+continuous' }
```

I then checked `curl http://localhost:5984/_active_tasks | json_pp` and it shows that the continuous replication is running:

```json
[
   {
      "checkpoint_interval" : 5000,
      "checkpointed_source_seq" : 4,
      "doc_id" : null,
      "revisions_checked" : 0,
      "started_on" : 1441095695,
      "continuous" : true,
      "target" : "efg",
      "doc_write_failures" : 0,
      "source_seq" : 4,
      "progress" : 100,
      "missing_revisions_found" : 0,
      "type" : "replication",
      "pid" : "<0.1630.0>",
      "updated_on" : 1441095740,
      "docs_read" : 0,
      "replication_id" : "2971ed4ac2a08832adb79f26f051c8fa+continuous",
      "docs_written" : 0,
      "source" : "asd"
   }
]
```

So everything works as expected :)
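The same `_active_tasks` check can also be issued from Node through nano's generic `request` method rather than curl; a minimal sketch:

```js
var nano = require('nano')('http://localhost:5984')

// GET /_active_tasks through nano's low-level request helper and keep
// only the replication tasks, mirroring the curl | json_pp call above.
nano.request({ path: '_active_tasks' }, function (error, tasks) {
  if (error) return console.error(error)
  var replications = tasks.filter(function (task) {
    return task.type === 'replication'
  })
  console.log(replications)
})
```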
Hi Jo, the issue is not with the instance; here you are trying to create a replication job on your local CouchDB. The major problem we are facing is that the replication job is not created, so no entry appears under `_active_tasks`.
Thanks
I'm not sure that I fully understand the problem. Do you suggest dropping the `_replicate` endpoint in favour of `_replicator`?
As mentioned, we are facing an issue where the replication job is not created after the replication call. And I think it should be created in order to get the entries in `_active_tasks`. Could you please try it on Cloudant and see if you face the same issue?
But what does that have to do with nano?
I'm implementing proper replication to the `_replicator` database. Follow along at: https://github.com/carlosduclos/nano/tree/replicator
Working on this in #349.
This issue has been solved by implementing replication using the `_replicator` database. Before that, nano was only using `_replicate` and not `_replicator`.
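For anyone arriving here later, a minimal sketch of the persistent-replication API this fix led to in recent nano versions (method names as documented in the current nano README; exact availability and signatures depend on your installed version):

```js
const nano = require('nano')('http://localhost:5984')

async function replicate () {
  // Creates a document in _replicator instead of POSTing to _replicate
  const enabled = await nano.db.replication.enable('asd', 'efg', { create_target: true })

  // The returned id points at the _replicator document; it can be used
  // to inspect (or later cancel) the job.
  const status = await nano.db.replication.query(enabled.id)
  console.log(status)
}

replicate().catch(console.error)
```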
@jo hello there, I am trying to replicate a document in the `_users` db from one instance to another, but it is not returning any response. Replicating within a single instance, however, works for some random db.