In memory Block Strategy
TL;DR: It makes your limiters at least 7 times faster.
It can be activated with the `inMemoryBlockOnConsumed` option, optionally combined with the `inMemoryBlockDuration` option.
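A minimal configuration sketch (the option values and the `redisClient` setup here are illustrative, not prescriptive):

```javascript
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const rateLimiter = new RateLimiterRedis({
  redis: redisClient,
  points: 5,                  // 5 points allowed per duration window
  duration: 1,                // window of 1 second
  inMemoryBlockOnConsumed: 5, // block in process memory once 5 points are consumed
  inMemoryBlockDuration: 10,  // illustrative: keep the in-memory block for 10 seconds
});
```

Without `inMemoryBlockDuration`, the key stays blocked in memory for the remainder of the current duration window.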
Once a key consumes `inMemoryBlockOnConsumed` points, the limiter starts using the current process memory to block that key. Note that this works for the `consume` method only; all other methods still request the store.
The In-memory Block Strategy can be used against DDoS attacks: we don't want latency to grow to 3, 5 or more seconds. Every store-backed limiter (Redis, Mongo, etc.) supports the in-memory block strategy to avoid sending too many requests to the store.
Note that the block-in-memory feature slightly postpones the moment a key is unblocked in memory. The delay depends on the latency between your application and your store, and on the overall Event Loop load. This inaccuracy is insignificant, since it usually takes only a couple of milliseconds between the store's result and the block-in-memory event.
The In-memory Block Strategy algorithm is developed with rate limiter specifics in mind:
- it doesn't use `setTimeout` to expire blocked keys, so it doesn't overload the Event Loop. Instead, blocked keys are expired in two cases:
  - when a blocked `key` is requested, a collection of expired blocked keys is launched, so only already blocked actions are slowed down
  - when a new blocked `key` is added and there are more than 999 blocked keys in total
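The two expiry rules above can be sketched roughly as follows. This is a simplified illustration, not the library's actual internals; the class and method names are hypothetical:

```javascript
// Simplified sketch of the in-memory blocked-keys expiry rules.
class InMemoryBlockedKeys {
  constructor() {
    this._keys = new Map(); // key -> unblock timestamp (ms)
  }

  // Add a blocked key; sweep expired keys only when the map grows large,
  // so unblocked traffic never pays the collection cost.
  add(key, msDuration) {
    this._keys.set(key, Date.now() + msDuration);
    if (this._keys.size > 999) {
      this.collectExpired();
    }
  }

  // Check a key; only requests for already-blocked keys can trigger a sweep.
  msBeforeExpire(key) {
    const unblockAt = this._keys.get(key);
    if (unblockAt === undefined) {
      return 0; // not blocked
    }
    const ms = unblockAt - Date.now();
    if (ms <= 0) {
      this.collectExpired(); // this key expired: sweep the rest too
      return 0;
    }
    return ms;
  }

  // No setTimeout anywhere: expired entries are removed lazily here.
  collectExpired() {
    const now = Date.now();
    for (const [key, unblockAt] of this._keys) {
      if (unblockAt <= now) {
        this._keys.delete(key);
      }
    }
  }
}
```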
Below is a simple Express 4.x endpoint, launched in `node:10.5.0-jessie` and `redis:4.0.10-alpine` Docker containers by PM2 with 4 workers.

Note: the benchmark is done in a local environment, so production would be much faster.
```javascript
router.get('/', (req, res, next) => {
  rateLimiter.consume(Math.floor(Math.random() * 5).toString())
    .then(() => {
      res.status(200).json({}).end();
    })
    .catch(() => {
      res.status(429).send('Too Many Requests').end();
    });
});
```
Each request consumes a point for one of 5 random keys. This isn't a realistic scenario, but its purpose is to show latency and to calculate how the strategy helps avoid too many requests to Redis.
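As a quick sanity check on the success counts in the benchmark results: with 5 keys, 5 points per key per second, and a 30-second run, the expected number of 2xx responses is:

```javascript
// Each of the 5 random keys can successfully consume 5 points per
// 1-second window, and the test runs for 30 seconds.
const keys = 5;
const pointsPerSecond = 5;
const seconds = 30;
console.log(keys * pointsPerSecond * seconds); // 750, matching "2xx - 750"
```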
The same benchmarking settings are used for both tests:

```
bombardier -c 1000 -l -d 30s -r 2000 -t 1s http://127.0.0.1:3000
```

- 1000 concurrent requests
- test duration is 30 seconds
- not more than 2000 req/sec
5 points per second to consume, without In-memory Block Strategy
```javascript
const rateLimiter = new RateLimiterRedis({
  redis: redisClient,
  points: 5,
  duration: 1,
});
```
Result:

```
Statistics        Avg      Stdev        Max
  Reqs/sec    1999.05     562.96   11243.01
  Latency      7.29ms     8.71ms   146.95ms
  Latency Distribution
     50%     5.25ms
     75%     7.20ms
     90%    11.61ms
     95%    18.73ms
     99%    52.78ms
  HTTP codes:
    1xx - 0, 2xx - 750, 3xx - 0, 4xx - 59261, 5xx - 0
```
5 points per second to consume, with In-memory Block Strategy

When 5 points are consumed, a key is blocked in the current process memory to avoid further requests to Redis within the current duration window.
```javascript
const rateLimiter = new RateLimiterRedis({
  redis: redisClient,
  points: 5,
  duration: 1,
  inMemoryBlockOnConsumed: 5,
});
```
Result:

```
Statistics        Avg      Stdev        Max
  Reqs/sec    2011.39     390.04    3960.42
  Latency      1.11ms     0.88ms    23.91ms
  Latency Distribution
     50%     0.97ms
     75%     1.22ms
     90%     1.48ms
     95%     1.91ms
     99%     5.90ms
  HTTP codes:
    1xx - 0, 2xx - 750, 3xx - 0, 4xx - 59268, 5xx - 0
```
- Seven times faster!
- Fewer requests to the store (about 59k fewer in this case over 30 seconds)