Replies: 25 comments 75 replies
-
Hi Oran, you can cache the DB connection, as described in the Atlas docs: https://docs.atlas.mongodb.com/best-practices-connecting-to-aws-lambda/
-
No, you don't need to configure anything on Atlas; it's all done in code. The connection can be cached between requests. I would follow their strategy exactly rather than your approach. FYI, I've deployed via Now and can confirm this approach works. You can use the `serverless` package to mimic the AWS setup and run locally as if on Lambda, with the same handler function, etc. From the Atlas docs: "Define the MongoDB client connection to the MongoDB server outside the AWS Lambda handler function. This makes the database connection available between invocations of the AWS Lambda function for the duration of the lifecycle of the function."
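The Atlas guidance quoted above can be sketched as follows. This is a minimal, runnable illustration of the caching pattern only, not real driver code: `fakeConnect` is a stand-in for `MongoClient#connect` so the sketch runs without a database, and `handler` plays the role of the Lambda handler.

```js
// Sketch of the Atlas-recommended pattern: the connection promise
// lives at module scope, OUTSIDE the handler, so warm invocations of
// the same Lambda instance reuse it instead of reconnecting.
// `fakeConnect` is a hypothetical stand-in for MongoClient#connect.
let connectCount = 0;
function fakeConnect() {
  connectCount += 1; // counts how many real connections would be opened
  return Promise.resolve({ db: (name) => ({ name }) });
}

// Module scope: evaluated once per container, not once per request.
const clientPromise = fakeConnect();

// The handler awaits the cached promise instead of reconnecting.
async function handler(event) {
  const client = await clientPromise;
  return client.db("mydb"); // "mydb" is a placeholder database name
}
```

However many times `handler` runs on a warm container, the connection is only opened once, because the promise is created when the module is first evaluated.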
-
Bumping! Has anyone else experienced these connection issues?
-
Here is the code I use with success. Originally I used Mongoose, but I ripped that all out. I will write and upload some scripts you can use for benchmarking, which may help others.
-
I'm experiencing the same issue in production after following the setup described here: https://vercel.com/guides/deploying-a-mongodb-powered-api-with-node-and-vercel The only difference is that I am using Mongoose instead of the MongoDB client.
As soon as I start browsing the production site, MongoDB Atlas shows 100–200 active connections. When actively browsing the site, connections ramp up to around 400, but I can't get it higher than that. When logging …
-
I have a GraphQL server implementation, and when building the GraphQL server context I connect the following way; I do not see a problem of too many connections. During a load test of 5K client requests per minute, the max connections I had open was 21:

```js
import mongoose from "mongoose";
import logger from "./logger"; // whatever logger you use

const options = {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  authSource: "admin",
  useFindAndModify: false, // boolean, not the string "false"
};

// Cached connection state, shared across invocations
const connection = {};
const DB_URI = process.env.DB_URI;

const connectDb = async () => {
  if (connection.isConnected) {
    // Reuse the cached connection when available
    return;
  }
  try {
    const dbConnection = await mongoose.connect(DB_URI, options);
    connection.isConnected = dbConnection.connections[0].readyState;
  } catch (err) {
    logger.error(`error connecting to db ${err.message || err}`);
  }
};

export { connectDb };
```
-
So I did some benchmarking, and I was wrong about my setup: connection pooling is not in effect. I suspect the setting below is needed, but I don't think the Lambda scope is available; I'll post an issue. For the Node.js driver, declaring the connection variable in global scope will also do the trick. However, there is a special setting without which connection pooling is not possible: callbackWaitsForEmptyEventLoop, which belongs to Lambda's context object. Setting this property to false makes AWS Lambda freeze the process and any state data as soon as the callback is called, even if there are events left in the event loop.
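For context, here is a hedged sketch of how that setting is typically applied inside a handler. The property name `context.callbackWaitsForEmptyEventLoop` is real Lambda API; the cached-connection object is a stand-in so the sketch runs without AWS or a database.

```js
// Sketch: set callbackWaitsForEmptyEventLoop so Lambda returns as soon
// as the handler finishes, even though a cached MongoDB connection
// keeps open sockets sitting in the event loop.
let cachedConn = null; // survives between warm invocations

async function handler(event, context) {
  // Without this, Lambda waits for the event loop to drain, which a
  // pooled connection would prevent.
  context.callbackWaitsForEmptyEventLoop = false;

  if (!cachedConn) {
    cachedConn = { openedAt: Date.now() }; // stand-in for a real connect()
  }
  return cachedConn;
}
```

On a warm container the second invocation gets the same cached object back rather than opening a new connection.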
-
https://github.com/nucleo-org/edge-next seems to use MongoDB, so it may be worth checking how they implemented it.
-
Bump. This happens in development mode but not in production mode.
-
Just me..

> On Sat, 13 Jun 2020 at 17:48, Dave wrote:
> @sanderfish how many people are accessing the website?
-
I don't think it has been quoted yet, but there's a doc from MongoDB: https://developer.mongodb.com/how-to/nextjs-building-modern-applications I hit the same problem, and I think I've mostly solved it by following this doc. Will provide more info later.
-
I've used nearly the same solution as the MongoDB docs, but I've added a little touch:

```ts
import '@/types';
import mongoose from 'mongoose';
import { NextApiHandler, NextApiRequest, NextApiResponse } from 'next';
import { Maybe } from '@/types';
import models from '../models';

declare module 'http' {
  interface IncomingMessage {
    models: Maybe<typeof models>;
  }
}

const readyStates = {
  disconnected: 0,
  connected: 1,
  connecting: 2,
  disconnecting: 3,
};

let pendingPromise: Maybe<Promise<typeof mongoose>> = null;

// https://hoangvvo.com/blog/migrate-from-express-js-to-next-js-api-routes/
const withDb = (fn: NextApiHandler) => async (
  req: NextApiRequest,
  res: NextApiResponse,
) => {
  const next = () => {
    req.models = models;
    return fn(req, res);
  };

  const { readyState } = mongoose.connection;
  // TODO: May need to handle concurrent requests
  // with a little bit more detail (disconnecting, disconnected, etc).
  if (readyState === readyStates.connected) {
    return next();
  } else if (pendingPromise) {
    // Wait for the already pending promise if there is one.
    await pendingPromise;
    return next();
  }

  pendingPromise = mongoose.connect(process.env.DB, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    useCreateIndex: true,
  });
  try {
    await pendingPromise;
  } finally {
    pendingPromise = null;
  }
  return next();
};

export default withDb;
```
-
During development, you may need to prevent connections leaking due to hot reload by using …
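The comment above is cut off, but a common approach (used by Vercel's own MongoDB examples) is to stash the connection promise on `global`, which survives hot-reload module re-evaluation in development. A runnable sketch, with a stubbed `fakeConnect` in place of a real `mongoose.connect` / `MongoClient.connect` call:

```js
// Sketch: cache the connection promise on `global` in development so
// hot reloads reuse it instead of opening a new connection each time
// this module is re-evaluated. `fakeConnect` stands in for a real
// driver connect call.
let connects = 0;
function fakeConnect() {
  connects += 1;
  return Promise.resolve({ ok: true });
}

function getConnection() {
  if (process.env.NODE_ENV !== "production") {
    // `global` is preserved across hot-reload module reloads,
    // while module-level variables are reset on each reload.
    if (!global._cachedConnPromise) {
      global._cachedConnPromise = fakeConnect();
    }
    return global._cachedConnPromise;
  }
  // In production the module is evaluated once per process,
  // so module scope alone would be enough.
  return fakeConnect();
}
```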
-
tl;dr: A good (best?) solution that I've found to integrate a database has been to just create a separate node process for the API (aka a microservice). I've found the provided Next API to be woefully inflexible when you start using …

**Problems integrating with a database**

**Problem 1: Duplicate connections.** Using the … If your project is only using …

**Problem 2: You can't build a project when you use a custom express server for Next/API.** While using a custom server solves the duplicate connections issue (since you're establishing a connection before loading the app), it makes it impossible to build the project for production, because you can't make API requests to an API that isn't running during build time:

```ts
const app = next({ dev });
const handle = app.getRequestHandler();

(async () => {
  try {
    await connectToDB();
    await app.prepare();
    const server = express();
    middlewares(server);
    server.use("/api", routes);
    server.all("*", (req: Request, res: Response) => handle(req, res));
    server.listen(PORT, (err?: Error) => {
      if (err) throw err;
      console.log(`Serving app on: \x1b[1m${HOST}\x1b[0m`);
    });
  } catch (err) {
    console.log(err.toString());
    process.exit(1);
  }
})();
```

In addition, if you use …

**Problem 3: Middlewares.** Perhaps you want to run some sort of middleware before each DB request. While it's possible to do in a Next API route, it's rather clunky (such as having to wrap each route with several callback handlers). This results in a lot of WET code.

**Problem 4: All your eggs in one basket.** If the process crashes, all services become unavailable. In addition, since all the resources share the same process, multiple complex DB queries can hog resources and result in slower initial page requests.

**Recommendation**

Save yourself the unnecessary headache and create a separate node process for the API, and establish a connection before spinning up the servers. This eliminates having to cache your connection and do connection checks for every API request. It also allows your API to be more flexible in terms of hosting options and resource management, and it spreads out points of failure (if the API crashes, it doesn't take down your entire app, and vice versa). This is also easy enough to be used as a standalone microservice within a monorepo (with some semi-complex setup).

**Example**

Click to expand monorepo configuration …

**Example stats**

Development changes in relation to database connections (triggering both Next and API reloads). Production requests in relation to database connections (creating dynamic iSSG pages).

**Conclusion**

While this may not be an all-in-one solution (only utilizing what's available in Next), it solves all the problems above while keeping the monorepo structure, and most importantly, the number of connections should stay the same even as dynamic pages and visitors increase over time. Unless I'm missing something, the only drawback would be the time it takes to resolve requests to an API.
-
Is there anything new about this issue? Even with caching the connection object, I'm hitting around 50–80 connections for only one user browsing my website...
-
mattcarlott's is a great solution; thanks for that example. I really did consider going down this route, but I'd already invested time wrestling with it and was determined to see it through. I also wanted to keep the API logs within Next's dashboard. For anyone struggling with lingering connections who wants to keep the API within Next, they can be kept to an absolute minimum with these key changes: …

A lot of trial, error and swearing has gone into this, but I am now very happy. I hope it helps somebody else. next-connect - 0.9.1 (for middleware)
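The commenter's list of key changes didn't survive in this thread, so purely as a general illustration: these are the kinds of connection options (real Mongoose/driver option names; the values are only examples, not the commenter's settings) that tend to keep serverless connection counts down.

```js
// Illustrative connection options for keeping serverless connection
// counts low. Option names are real; values are examples only.
const options = {
  maxPoolSize: 5,                 // cap sockets per serverless instance
  serverSelectionTimeoutMS: 5000, // fail fast rather than piling up retries
  bufferCommands: false,          // mongoose: error immediately when disconnected
};
```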
-
I don't know if this is going to help anyone, as this is a very old question, but this post helped me solve my original issue: just using the filesystem to cache an accessToken is doing the trick for me:

```ts
let accessToken: string | null | undefined;

try {
  accessToken = fs.readFileSync(
    path.join("/tmp", ACCESS_TOKEN_CACHE_PATH),
    "utf8"
  );
} catch (error) {
  console.log("AccessToken not initialized");
}

export const generateAuthHeader = async (refresh: boolean) => {
  const then = Date.now();
  if (!accessToken || accessToken === "" || refresh) {
    console.log("logging in again... refresh? ", refresh);
    await app.logIn(credentials);
    accessToken = app.currentUser?.accessToken;
    try {
      fs.writeFileSync(
        path.join("/tmp", ACCESS_TOKEN_CACHE_PATH),
        accessToken || "",
        "utf8"
      );
      console.log("Wrote to members cache");
    } catch (error) {
      console.log("ERROR WRITING ACCESS TOKEN TO FILE");
      console.log(error);
    }
    console.log(`-> ${Date.now() - then}`);
  } else {
    console.log("already logged in");
  }
  // Get a valid access token for the current user
  // Set the Authorization header, preserving any other headers
  return {
    Authorization: `Bearer ${accessToken}`,
  };
};
```

For the record, I am using Realm with GraphQL queries.
-
Wish me luck here please: THIRD TIME EDITED!! The goal of this comment/solution is:

I am using Next.js to build my API routes, so on production my routes are handled by Lambda functions. This means I do not have … See the helper below: it opens a new connection to a database every time I call it. Now, I don't want to end up with two open connections at the same time for the same database.

I created a helper that does the following: …

NOW PLEASE, correct me if I am wrong, and educate me if you have a better way to do this.
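The helper described above wasn't included in the thread, but one hedged sketch of the idea (avoiding two open connections to the same database) is to key cached connection promises by database name. `fakeOpen` is a stand-in for the real connect call so the sketch runs without a database:

```js
// Sketch: one cached connection promise per database name, so two
// calls for the same database share a single connection.
const cache = new Map();
let opens = 0;

function fakeOpen(dbName) {
  opens += 1; // counts how many real connections would be opened
  return Promise.resolve({ dbName });
}

function getDb(dbName) {
  if (!cache.has(dbName)) {
    // Store the promise immediately, so concurrent callers that arrive
    // before the connection resolves still share the same one.
    cache.set(dbName, fakeOpen(dbName));
  }
  return cache.get(dbName);
}
```

Caching the promise (rather than the resolved connection) is what prevents the race where two concurrent requests each open their own connection.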
-
We are following this example: https://github.com/vercel/next.js/tree/canary/examples/with-mongodb-mongoose but hit the connection limit with getStaticProps when we have a few thousand pages from getStaticPaths. Has anyone from the @vercel-infra @dav-is @matheuss team actually checked whether this is the right way to go?
-
Has anyone been able to come up with a proper solution? I am using Next's native API handlers to make API calls, and my app is deployed on Vercel itself. So I use a DB middleware before each API call, like below:

```js
// pages/api/sample.js
import nextConnect from 'next-connect';
import db from 'middleware/db';

export default nextConnect().use(db).get(somehandler);
```

```js
// db.js
import mongoose from 'mongoose';

let cached = global.mongoose;
if (!cached)
  cached = global.mongoose = { conn: null, promise: null };

export default async function (request, response, next) {
  if (cached.conn && next)
    return next();
  if (cached.conn)
    return cached.conn;
  if (!cached.promise) {
    const options = {
      useNewUrlParser: true,
      useUnifiedTopology: true,
      useFindAndModify: false,
      bufferCommands: false
    };
    cached.promise = mongoose.connect(MONGODB_URI, options).then(mongoose => mongoose);
  }
  cached.conn = await cached.promise;
  return next ? next() : cached.conn;
}
```

I am reusing the above middleware function also for ISR (getStaticPaths and getStaticProps) to persist the cached connection, but even after that I am constantly receiving emails from MongoDB Atlas that my connections reached an insane 500. Below is a sample function called by getStaticProps and getStaticPaths:

```js
// helpers.js
import dbConnect from './db.js';

export async function getPosts() {
  await dbConnect();
  const posts = await Post.find({});
  return posts;
}
```
-
A small bump on this: has anyone tried connecting to Mongo not in an API route, but in a middleware?
-
This is a known issue with Next.js API routing. See the path forward they provide here (albeit for a different DB): https://www.prisma.io/docs/guides/database/troubleshooting-orm/help-articles/nextjs-prisma-client-dev-practices
-
Has anyone found a suitable solution? This issue is causing sporadic problems for my users and stalling my project. I never appear to come close to 500 connections (as seen by MongoDB), but I get at least 10–20 UnhandledPromiseRejection errors a day (with pretty low traffic).
-
It's 2024 and I think there still isn't a proper solution. In the Postgres world, …
-
I saw a lot of posts and articles about this alert on MongoDB Atlas ("Connections % of configured limit has gone above 80"), but couldn't figure out how to solve it in my Next.js application.

I create my DB connection outside the handler function, using a middleware, withDatabase.js, that wraps the API endpoint handler.

Now, if I close the connection in every API handler when it finishes, like this:

```js
const { connection } = req;
if (connection) {
  connection.close();
}
```

then I get an error on the second request to the same API handler:

MongoError: Topology is closed, please connect

And if I don't close the connection, I get this alert (after a short time of use) in my email:

Connections % of configured limit has gone above 80

What are the best practices for working with MongoDB Atlas in a Next.js application?
Thanks!
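Not an official answer, but the pattern most replies in this thread converge on: cache one connection promise per process (on `global`) and never call `connection.close()` in the handler. A runnable sketch with `fakeConnect` standing in for `MongoClient#connect`, so no database is needed:

```js
// Sketch: cache the connection promise globally; handlers await it and
// never close it, so warm instances reuse a single connection.
// `fakeConnect` is a stand-in for a real driver connect call.
let opened = 0;
function fakeConnect() {
  opened += 1;
  return Promise.resolve({ topology: "open" });
}

function connectToDatabase() {
  if (!global._mongoPromise) {
    global._mongoPromise = fakeConnect();
  }
  return global._mongoPromise;
}

// An API handler uses the cached connection; note: no connection.close().
async function handler(req, res) {
  const conn = await connectToDatabase();
  return conn; // a real handler would query and respond here
}
```

Closing per-request is what produces the "Topology is closed" error, because a later warm invocation awaits the same (now closed) cached client; leaving the single cached connection open avoids both that error and the connection-count alert.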