LevelDB read performance degradation #273
If it takes 30s to finish scanning, your db isn't totally empty. If you estimate the size in bytes of the data actually stored, what do you come up with?
The keys begin with numbers, so I guess this should cover the whole domain of my keys:

```js
> db.db.approximateSize('', 'z', function () { console.log(arguments); })
{ '0': null, '1': 2445270120 }
```
That's 2.4 GB, but keep in mind
How should I inspect the data then?
Ah, I misread your example. Hm, yeah, the db is big then, but my guess is that compaction will eventually get rid of the unused files. You described first that performance is actually your problem, not file size. Can you share some benchmark results?
I've created an example to demonstrate this behavior. The numbers are amplified (in reality it takes a few days for the db to grow), but the same behavior is observable with this example. It writes 1M ~2KB records, reads them back, deletes them, sleeps for a few seconds, reads the db again (the db is empty at this point), then goes back to step one.

```js
'use strict';
const db = require('levelup')('./db');
const uuid = require('uuid');
const val = 'a'.repeat(2000);

// write n records of ~2KB each, keyed by timestamp + uuid
function write(n) {
  const s = Date.now();
  function put(m) {
    if (!m) {
      console.log('Appending %d keys took %dms', n, Date.now() - s);
      return Promise.resolve();
    }
    return new Promise((resolve, reject) => {
      db.put(Date.now() + '|' + uuid.v4(), val, err => {
        if (err) return reject(err);
        resolve();
      });
    }).then(() => {
      return put(m - 1);
    });
  }
  return put(n);
}

// scan all keys, delete them in one batch, then report the approximate db size
function proc() {
  const s = Date.now();
  let count = 0;
  const keys = [];
  return new Promise((resolve, reject) => {
    db.createReadStream()
      .on('data', v => {
        keys.push(v.key);
        count++;
      })
      .on('end', () => {
        db.batch(keys.map(key => ({
          type: 'del',
          key: key,
        })), err => {
          if (err) return reject(err);
          db.db.approximateSize('', 'z', (err, size) => {
            if (err) return reject(err);
            console.log('Processing %d keys took %dms (db size after proc: %dMB)', count, Date.now() - s, size / 1024 / 1024);
            resolve();
          });
        });
      });
  });
}

function sleep(s) {
  return () => new Promise(resolve => setTimeout(resolve, s * 1000));
}

// write, scan+delete, wait, scan the now-empty db, repeat
function loop() {
  return write(1000000).then(proc).then(sleep(10)).then(proc).then(loop);
}

loop().catch(err => console.error(err.stack));
```

Output:
As you can see, despite the fact that the db is empty, it takes up too much space (that in itself wouldn't be an issue for me) and read performance gets really bad.
google/leveldb#164 and google/leveldb#83 might be related.
Ping @juliangruber, any ideas?
Ping @dominictarr and @maxogden again, sorry for the disturbance... By the way, the code above works with any number (e.g. 1K records instead of 1M); the db still grows constantly.
@madbence, did you test it with the latest version (1.19)?
Interesting. I guess this is happening because of compactions. If you really need to delete everything, what about literally deleting the entire database and starting a new one? Creating a database and then deleting everything from it seems like an architectural issue, to be honest. Sorry I missed this before.
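(For illustration only: a minimal sketch of the "drop the whole database" approach, assuming the leveldown backend beneath the pre-2.x levelup API used elsewhere in this thread, and that the old db handle has already been closed; `recreate` is a hypothetical helper name.)

```js
const levelup = require('levelup');
const leveldown = require('leveldown');

// Remove the LevelDB files on disk and open a fresh, empty store.
// leveldown.destroy() must not be called while the database is still open.
function recreate(location, callback) {
  leveldown.destroy(location, err => {
    if (err) return callback(err);
    callback(null, levelup(location));
  });
}
```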
I've tried to run the "benchmark" again about 2 weeks ago, and it seems that it was fixed. I'm going to check this again with an older leveldb version and then I'm going to close this issue. @dominictarr, I'm using leveldb for log rotation (upload logs to an external service, then delete them from the db after a certain time has elapsed or a certain number of rows has accumulated), but the log is rotated per service, not globally, so the whole db is never deleted at once. I admit leveldb might be overkill for this (I could do the same with a simple file for every service).
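(As an aside, the per-service rotation described above could be done with a ranged read plus one batched delete; this is only a sketch under the assumption that keys carry a service prefix such as `<service>!<timestamp>|<uuid>`, which the thread doesn't actually specify.)

```js
// Delete every entry belonging to one service by scanning its key range
// and issuing a single batched delete. With values: false the read stream
// emits only keys.
function rotateService(db, service, callback) {
  const ops = [];
  db.createReadStream({ gte: service + '!', lte: service + '!\xff', values: false })
    .on('data', key => ops.push({ type: 'del', key: key }))
    .on('error', callback)
    .on('end', () => db.batch(ops, callback));
}
```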
@madbence, you might consider manually triggering compaction when you delete a significant number of entries and see if performance improves at all. This was added in the last release:
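(For illustration, a rough sketch of how that could be wired into the benchmark above; it assumes the method in question is `compactRange(start, end, callback)` on the underlying leveldown instance, reachable as `db.db` just like the `approximateSize` call earlier in the thread.)

```js
// After a large batched delete, explicitly compact the whole key range so
// tombstones and obsolete table files are cleaned up right away instead of
// waiting for background compaction to get to them.
function compactAll(callback) {
  db.db.compactRange('', '\xff', err => {
    if (err) return callback(err);
    // report the approximate size once compaction has finished
    db.db.approximateSize('', '\xff', callback);
  });
}
```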
@bookchin thanks, I didn't know about this method. FYI, the issue seems to be fixed; we've had no problems with db size in the past months.
I'm not sure if my issue is related to leveldown or not, but right now I don't have any ideas. I'm experiencing heavy performance degradation after a huge amount of writes/deletes. The db seems to be huge:

```
$ du -sh db/
3.3G    db/
$ ls -l db | wc -l
1633
```
But the db is totally empty. Also it takes 30s to finish scanning.
I've tried to run leveldown.repair, but the numbers are roughly the same:

```
$ du -sh db/
2.6G    db/
$ ls -l cache | wc -l
1331
```
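(For reference, a minimal sketch of how `leveldown.repair` is invoked; it is a static method on the leveldown constructor and, as far as I know, has to run while the database is closed.)

```js
const leveldown = require('leveldown');

// Rewrites the store's log/manifest files in place; do not keep the db open
// while this runs.
leveldown.repair('./db', err => {
  if (err) return console.error('repair failed:', err);
  console.log('repair finished');
});
```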
I've tried to inspect the db:
Is this behavior something I should expect with leveldb?