"getport" sometimes throwing an error #827
I don't know how; port acquisition and keeping is a fickle process, because you can't just say "give me a free port and keep it on hold until used", and a port also cannot be marked as "reserved for later". A port is tested to be free with a small http server, which is then closed to free the port so it can be used by mongod, but in that small window another process may just grab that port.
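For illustration, a minimal sketch of why such a probe is inherently racy (the helper name is hypothetical, not the library's actual code):

```ts
import net from 'node:net';

/**
 * Probe whether a port is currently free by briefly binding to it.
 * NOTE: this is only a point-in-time check - the port is released again
 * before the caller uses it, so another process can grab it in between.
 */
function isPortFree(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const probe = net.createServer();
    probe.once('error', () => resolve(false)); // e.g. EADDRINUSE
    probe.once('listening', () => probe.close(() => resolve(true)));
    probe.listen(port, '127.0.0.1');
  });
}

// Typical (racy) usage: the port is free *now*, but nothing reserves it
// for the mongod process that will be spawned a moment later.
// const free = await isPortFree(27017);
```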
I just got here because I have the same issue. Since there's currently no solution, I'm thinking maybe https://www.npmjs.com/package/promise-retry will do the trick to create a server in case the port fails. I haven't tried this yet but that might work at least as a temporary fix. /edit:
I've done a few runs now and it seems I no longer have port issues. No guarantees here, but out of 5 runs, they all passed... Before, every 2nd run or so failed.
I seem to be getting the same port issue sometimes. Oddly enough, also on macOS 14.1, but with mongodb-memory-server v8.13.0. Will try a newer version.
We've had the same issue. We ended up with this workaround:

```ts
import getPort from 'get-port';
import retry from 'async-retry';

const port = await getPort();
const mongo = await retry(
  () =>
    MongoMemoryReplSet.create({
      ...this.testEnvironmentOptions,
      instanceOpts: [{ port }],
      replSet: { count: 2 },
    }),
  {
    retries: 3,
  },
);
```
Well, it seems I have interesting results. Updating to ...
@JacksonUtsch @trevorah, what exact errors are you seeing? Are you seeing a ...? I also don't know what is so different from the ...
In the code referenced, I think we need a lock of sorts for ...
Do you mean some kind of mutex, or do you mean a cache file with a lock? Also, what is your use-case: a replset or just many servers?
A mutex, yes. Running many e2e tests from separate files.
What is your testing environment (for example jest)? The problem with many concurrent tests, each having their own server, is that those modules are isolated and cannot speak to each other without some kind of cache file (so a normal mutex would not work), but I didn't want to increase complexity compared to the old ...
Yes, using jest for testing here. Hmm, I thought there was only a single instance of ...
Yes, it is a singleton, but the problem is that they are isolated instances: each jest worker can be seen as its own nodejs process, so a singleton is not shared across those processes (at least to my knowledge).
Ahh, good to know. Then a temp file might be the way to go.
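This is not something from the library, just a minimal sketch of the temp-file/mutex idea being discussed here, assuming every jest worker can see the same lock-file path (the path, timings, and helper name are made up for illustration):

```ts
import { promises as fs } from 'node:fs';
import { setTimeout as delay } from 'node:timers/promises';

const LOCK_PATH = '/tmp/mms-port.lock'; // assumed shared path visible to all workers

// Acquire the lock by creating the file exclusively ('wx' fails if it exists),
// which works across separate node processes such as jest workers.
async function withPortLock<T>(fn: () => Promise<T>): Promise<T> {
  for (;;) {
    try {
      const handle = await fs.open(LOCK_PATH, 'wx');
      await handle.close();
      break;
    } catch {
      await delay(50); // lock held by another worker, wait and retry
    }
  }
  try {
    return await fn(); // e.g. pick a port and start the server while holding the lock
  } finally {
    await fs.unlink(LOCK_PATH).catch(() => {}); // release the lock
  }
}
```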
this changes the behavior to use "net.listen(0)" to create a random port re #827
There might be one thing that is definitely different from ... For the next version I have done some slight refactors regarding the port implementation, and I have also cobbled something together to use ... If that does not work, I will try to make a more complex solution similar to locking a binary download place (via our LockFile implementation), to only allow one server to generate a port, use it, and then unlock it - which might make tests longer, so it will likely be a non-default option. It will also not prevent other, unrelated applications from using those ports in the meantime, and the servers will have to share the same lockfile path (download dir?).
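For reference, a minimal sketch (not the library's exact code) of the `net.listen(0)` idea from the commit above: the OS hands out a port it considers free, which avoids guessing, but the port is still released before mongod binds it, so a (smaller) race window remains.

```ts
import net from 'node:net';

// Ask the OS for any free port by listening on port 0, then release it.
function getFreePort(): Promise<number> {
  return new Promise((resolve, reject) => {
    const srv = net.createServer();
    srv.once('error', reject);
    srv.listen(0, '127.0.0.1', () => {
      const { port } = srv.address() as net.AddressInfo;
      srv.close(() => resolve(port)); // port is free *right now*, not reserved
    });
  });
}
```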
@hasezoey This seems to work for me. I've not seen the error show up when using the tag and flag. 😃
Because port problems are somewhat sporadic, I have not been able to actually test much about it, but today I just ran into a case where I consistently got failed tests because of "port already in use" (even after multiple tries); adding ... I am thinking about making it the default in some future version (but not 9.1.0 yet) to get some more testing, and pushing the old behavior behind a flag. Just keep in mind that this will only lessen the port conflicts, not entirely avoid them.
Great. Agreed it should be default behavior. Good to have a reproducible test with the issue here.
I don't quite know what you mean; there still is no reproducible case I know of, I just ran into it - if I rebooted, it would likely be gone again (or after many retries). Port testing is a very fickle process. The best would be if mongod would implement something to generate its own port.
It only sounded like this was reproducible, is all.
It sounded like it, but it was just sporadic, as said in the earlier comment. I don't have a reproducible case for this, I don't even know if there is one - the best I have to test "port already in use" flakiness is just re-running the MMS tests over and over.
At Tv2 Danmark one of our repositories uses version 9.0.0 for tests. We have found the same bug as described here - but only on unix environments. Two of the developers are using Macs, which show the error, as do the GitHub actions; however, the two other developers (me included) using Windows have not been able to reproduce the error. It appears to happen quite often on GitHub actions and for the Mac users, so it would appear to be reproducible in the right environment. Going to test now if v9.1.0-beta.5 fixes it for us. At the very least I hope it helps to know that it is very likely a unix-related issue.
At the 4th run of our GitHub check action that runs our tests using this, the usual tests failed again with the same error.
Unsure if perhaps we are missing an option for the create method, for the introduced changes to have an effect.

```ts
export class MongoTestDatabase {
  private mongoServer: MongoMemoryServer
  private client: MongoClient
  ...

  public async setupDatabase(): Promise<void> {
    this.mongoServer = await MongoMemoryServer.create()
    this.client = await MongoClient.connect(this.mongoServer.getUri())
  }

  public async teardownDatabase(): Promise<void> {
    if (this.client) {
      await this.client.close()
    }
    if (this.mongoServer) {
      await this.mongoServer.stop()
    }
  }
  ...
}
```
@andr9528 I don't see a problem with the code provided.
Are those runs using the experimental option?
@hasezoey The 4 runs I did, where the last one threw the error again, were with version 9.1.0-beta.5. Is there anything special that needs to be configured to use the experimental option?

```jsonc
{ // package.json
  ...
  "devDependencies": {
    ...
    "mongodb-memory-server": "9.1.0-beta.5",
  },
  "dependencies": {
    ...
    "mongodb": "^6.1.0",
  }
}
```
Yes, the `expNet0Listen` config option.
I am sorry that I overlooked that section. As mentioned in the section linked to, it ...
Immediately after adding the missing config section to the ... The cutout of our package.json:

```jsonc
{ // package.json
  ...
  "config": {
    "mongodbMemoryServer": {
      "expNet0Listen": true
    }
  },
  "devDependencies": {
    ...
    "mongodb-memory-server": "9.1.0-beta.5",
  },
  "dependencies": {
    ...
    "mongodb": "^6.1.0",
  }
}
```

As I mentioned earlier, it might be related to something unix-based. We have yet to see it on any of our Windows PCs, while seeing it commonly in unix-based environments. It has been seen on both macOS and Linux, the latter in the form of GitHub Actions.
It should theoretically happen on all systems, because at the time the port is generated, another process could come in and use it before it is actually used by the mongod server (in the downtime in-between). To confirm: is that the only package.json, or are there more? Could you maybe try via an environment variable too? (I did not add a log to indicate whether the experiment is active or not, but maybe I should.)
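If the environment-variable route is tried, it could look like the sketch below; the exact variable name here is an assumption derived from the package's usual `MONGOMS_` prefix and the `expNet0Listen` key, so verify it against the configuration documentation before relying on it.

```ts
// e.g. in a jest/vitest global setup file, before any server is created.
// ASSUMPTION: the variable name is guessed from the MONGOMS_ prefix
// convention and the "expNet0Listen" package.json key - check the docs.
process.env.MONGOMS_EXP_NET0_LISTEN = 'true';
```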
version ...
For now, maybe try using the ...
Thanks for the suggestion, but that's what we were already using. Since it always starts with the same port, on the first try it would try to use that port and fail (see https://github.com/nodkz/mongodb-memory-server/blob/master/packages/mongodb-memory-server-core/src/util/getport/index.ts#L73C7-L73C46)
What exactly do you mean by "fail": as in "failed with an error", or "failed and tried another port"? PS: I forgot that Date.now was used to get an initial port; as a workaround you could likely use ...
* fix(getport): Randomize first port using crypto

The current implementation of `getPort()` relies on using `Date.now()` to get the first port to try to launch mongo on, even when the `EXP_NET0LISTEN` flag is set. This causes a couple of issues:

- If the `Date` module is mocked, it can result in `getFreePort()` returning the same first port every time.
- If multiple mongos are being started at once in parallel, it's possible for the same first port to be picked, leading to port conflicts.

In order to better randomize the initial port selection so that port conflicts can be avoided, instead of using `Date.now()` for an initial seed, use `crypto.randomInt()`, which should provide more randomness and hopefully avoid some of these race conditions. re #827

* fix(getport): change crypto.randomInt "max" to be inclusive

Co-authored-by: hasezoey <hasezoey@gmail.com>
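A minimal sketch of the randomized first-port idea described in that commit (the range bounds below are illustrative, not the library's exact values):

```ts
import { randomInt } from 'node:crypto';

// Pick a random starting port instead of deriving it from Date.now(),
// so parallel workers (or a mocked Date) don't all start from the same port.
function getRandomStartPort(): number {
  // node's crypto.randomInt treats the upper bound as exclusive
  return randomInt(1024, 65536);
}
```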
Setting ...
@JacksonUtsch if it is in the CI, this likely means a lot of MMS instances have been created, and not on the same node process instance - is this the case in your tests?
🎉 This issue has been resolved in version 10.0.0-beta.1 🎉 The release is available on:
Your semantic-release bot 📦🚀
With version ...
🎉 This issue has been resolved in version 10.0.0 🎉 The release is available on:
Your semantic-release bot 📦🚀
I just got the same issue after upgrading to v10.0.0, only once for now (out of 10+ builds I guess). https://github.com/mikro-orm/mikro-orm/actions/runs/10226440366/job/28296683190
@B4nan sorry, I forgot that this was part of this issue (I was only thinking about ...
Versions
package: mongodb-memory-server
What is the Problem?
When I try to run tests using "vitest@0.34.6" in concurrency mode, I get an error that the port is already in use. Not always, but pretty often.
I want to note that there is no error on v8.16.0.
Code Example
Debug Output
Do you know why it happens?
Looks like there is a race condition after replacing "get-port" with the library's own implementation.