Mongo
MongoDB >=3.2

RateLimiterMongo creates a unique collection for each rate limiter keyPrefix. It supports both the native mongodb and mongoose packages.
const { RateLimiterMongo } = require('rate-limiter-flexible');
const mongoose = require('mongoose');

const mongoOpts = {
  reconnectTries: Number.MAX_VALUE, // Never stop trying to reconnect
  reconnectInterval: 100, // Reconnect every 100ms
};
let mongoConn, mongooseInstance;
const dbName = 'somedb';

// For mongoose version <= 5
mongoConn = mongoose.createConnection(`mongodb://127.0.0.1:27017/${dbName}`, mongoOpts);

// For mongoose version > 5 (run inside an async function, since it uses await)
try {
  mongooseInstance = await mongoose.connect(`mongodb://127.0.0.1:27017/${dbName}`);
  mongoConn = mongooseInstance.connection;
} catch (error) {
  handleError(error); // handleError is an app-specific error handler
}
const opts = {
  storeClient: mongoConn, // set to the connection in both branches above
  points: 10, // Number of points
  duration: 1, // Per second(s)
};
const rateLimiterMongo = new RateLimiterMongo(opts);

rateLimiterMongo.consume(remoteAddress, 2) // consume 2 points
  .then((rateLimiterRes) => {
    // 2 points consumed
  })
  .catch((rateLimiterRes) => {
    // Not enough points to consume
  });
/* --- Or with native mongodb package --- */

const { MongoClient } = require('mongodb');
const mongoOpts = {
  useNewUrlParser: true,
  reconnectTries: Number.MAX_VALUE, // Never stop trying to reconnect
  reconnectInterval: 100, // Reconnect every 100ms
};
const mongoConn = MongoClient.connect(
  'mongodb://localhost:27017',
  mongoOpts
); // Returns a promise, which can be passed as storeClient directly
const opts = {
  storeClient: mongoConn,
  dbName: 'somedb',
  points: 10, // Number of points
  duration: 1, // Per second(s)
};
const rateLimiterMongo = new RateLimiterMongo(opts);

rateLimiterMongo.consume(remoteAddress, 2) // consume 2 points
  .then((rateLimiterRes) => {
    // 2 points consumed
  })
  .catch((rateLimiterRes) => {
    // Not enough points to consume
  });
Connecting to Mongo takes some milliseconds, so every rate limiter method is rejected with an Error until the connection is established. Your server should start only after the connection is established. Alternatively, an insuranceLimiter can be set up to avoid errors, but changes won't be written from the insuranceLimiter back to RateLimiterMongo once the connection is established.

If you use the mongoose package, be careful with Operation Buffering. If Operation Buffering is enabled and your limiter consumes points before the mongoose connection is established, it may prevent creation of the index on key and break the limiter.
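A minimal sketch of both mitigations, assuming the mongoConn from the examples above; the points and duration values are illustrative:

const { RateLimiterMongo, RateLimiterMemory } = require('rate-limiter-flexible');
const mongoose = require('mongoose');

// Disable Operation Buffering, so limiter calls fail fast
// instead of running before the connection and key index exist
mongoose.set('bufferCommands', false);

const rateLimiterMongo = new RateLimiterMongo({
  storeClient: mongoConn,
  points: 10,
  duration: 1,
  // Serve limits from memory while the Mongo connection is unavailable;
  // note these in-memory changes are not written back to Mongo
  insuranceLimiter: new RateLimiterMemory({ points: 10, duration: 1 }),
});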
The endpoint is a pure Node.js endpoint launched in node:10.5.0-jessie and mongo:3.6.5-jessie Docker containers with 4 workers.
The endpoint is limited by RateLimiterMongo with this config:
new RateLimiterMongo({
  mongo: mongo,
  points: 20, // Number of points
  duration: 1, // Per second(s)
});
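For context, a sketch of the kind of endpoint benchmarked here, assuming the limiter above is assigned to rateLimiterMongo; allowed requests answer 2xx and rate-limited ones 4xx, matching the HTTP codes below:

const http = require('http');

http.createServer((req, res) => {
  rateLimiterMongo.consume(req.socket.remoteAddress)
    .then(() => {
      res.writeHead(200); // counted as 2xx in the results below
      res.end();
    })
    .catch(() => {
      res.writeHead(429); // counted as 4xx in the results below
      res.end();
    });
}).listen(3000);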
Load is generated by bombardier -c 1000 -l -d 30s -r 2000 -t 5s http://127.0.0.1:3000
Test with 1000 concurrent connections at a maximum of 2000 requests per second during 30 seconds.
Statistics        Avg       Stdev       Max
  Reqs/sec      1997.87    429.40    3869.98
  Latency        4.75ms     3.32ms    68.21ms
Latency Distribution
50% 4.15ms
75% 5.43ms
90% 6.95ms
95% 8.79ms
99% 18.96ms
HTTP codes:
1xx - 0, 2xx - 15000, 3xx - 0, 4xx - 45014, 5xx - 0
The MongoDB rate limiter creates an index on the key document attribute by default. This index may be used for sharding.
Example options:
const opts = {
  mongo: mongoConn,
  dbName: 'app',
  tableName: 'users-rate-limit',
  keyPrefix: '', // no need to prefix to simplify ranges
  points: 5,
  duration: 1,
};
dbName is set to app and tableName is set to users-rate-limit (for MongoDB, tableName is the collection name).
Prepare the sharded collection and index by running the following on a router:
sh.addShardTag("shard0000", "OLD")
sh.addShardTag("shard0001", "NEW")
sh.enableSharding("app")
sh.shardCollection( "app.users-rate-limit", { key: 1 }, { unique: true })
We've assigned the tags OLD and NEW to the example shards. Depending on what is used to identify a user (ID, IP address, etc.), you should choose the correct range options, and the ranges must be described with the same value types.
sh.addTagRange("app.users-rate-limit", { key: MinKey }, { key: 5000 }, "OLD")
sh.addTagRange("app.users-rate-limit", { key: 5000 }, { key: MaxKey }, "NEW")
This example probably doesn't provide a good shard key frequency; maybe users with IDs less than 5000 don't use our app anymore. Anyway, you get the idea.
The consume function should now receive userId as a number, so the write is routed to the correct shard by the configured range. userId is saved as-is, since we set keyPrefix to an empty string.
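A quick usage sketch with the opts above; userId is an assumed numeric identifier, e.g. 3456, which falls into the OLD range:

const rateLimiterMongo = new RateLimiterMongo(opts);

// userId is assumed to be a number, e.g. 3456,
// so this document is routed to the shard tagged OLD
rateLimiterMongo.consume(userId)
  .then((rateLimiterRes) => {
    // Point consumed on the shard matching the configured range
  })
  .catch((rateLimiterRes) => {
    // Not enough points to consume
  });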
Example options:
const opts = {
  mongo: mongoConn,
  dbName: 'app',
  tableName: 'users-rate-limit',
  keyPrefix: '', // no need to prefix to simplify ranges
  points: 5,
  duration: 1,
  indexKeyPrefix: {country: 1},
};
With the indexKeyPrefix option, MongoDB creates the compound index {country: 1, key: 1} (key is appended by default).
Prepare the sharded collection and add ranges on a router:
sh.addShardTag("shard0000", "EU")
sh.addShardTag("shard0001", "AU")
sh.enableSharding("app")
sh.shardCollection( "app.users-rate-limit", { country: 1, key: 1 }, { unique: true })
sh.addTagRange("app.users-rate-limit", { country: 'Netherlands', key: MinKey }, { country: 'Netherlands', key: MaxKey }, "EU")
sh.addTagRange("app.users-rate-limit", { country: 'Australia', key: MinKey }, { country: 'Australia', key: MaxKey }, "AU")
Set the additional country attribute whenever consume or any other limiter function is called:
try {
  const key = ipAddress;
  const country = getCountryByIP(ipAddress); // app-specific GeoIP lookup
  const options = { attrs: { country } };
  const rlRes = await rateLimiterMongo.consume(key, 1, options);
} catch (err) {
  res.status(429).json(err);
}
As country is now part of the unique index, it must always be set.
MongoDB saves snapshots to disk with fsync and journals writes by default. This results in extra disk I/O.
If you already use MongoDB as a data store and have high traffic, like 1000 req/sec or more, you may find it useful to launch a second MongoDB instance with these options:
--syncdelay 0 : disable making snapshots to disk
--nojournal : disable journal
--wiredTigerCacheSizeGB 0.25 : set minimum memory
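For example, a dedicated instance could be launched like this (the port is an arbitrary choice; note that recent MongoDB versions no longer support disabling journaling):

mongod --port 27018 --syncdelay 0 --nojournal --wiredTigerCacheSizeGB 0.25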
The built-in TTL index automatically deletes expired documents. A document for one key is 68 bytes in size, so MongoDB stores information for about 4 million keys in 256MB (256 * 1024 * 1024 / 68 ≈ 3.95 million).
Here is a small test of MongoDB with different options. It processes 10k, 100k and 250k writes for 10k random keys:

Writes    Default options    --syncdelay 0 --nojournal
10k       926ms              900ms
100k      4475ms             4323ms
250k      13254ms            12407ms
It is only about 5% faster with fsync and journaling disabled, but avoiding the extra disk I/O may still be worth it.