Portable resolver test suite #49448
Comments
Yes please, that would be awesome 👍 |
I'm a big fan of data-driven conformance tests for this kind of thing. If somebody has a Rust/Java/C++ implementation, they can use it without having to worry about JS bindings. A test suite that takes a JS resolution function could be implemented on top of it, but I could imagine that some projects would prefer to pull the conformance repo and integrate it into their own test suite and runner instead. |
If you provided a JS function (that could be integrated into an existing JS test suite) and also a hook for plugging in a CLI tool, that would enable both patterns. |
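A rough sketch of how both patterns could sit behind a single entry point (the package name follows the straw API below; `runConformanceTests`, its options, and `my-resolver-cli` are made up for illustration):

```js
// Hypothetical wrapper: a direct JS resolve function and an external CLI resolver
// both reduce to the same (specifier, referrerURL) => Promise<string> shape.
// `runConformanceTests`, the package name, and `my-resolver-cli` are illustrative.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';
import { runConformanceTests } from '@nodejs/resolver-conformance-tests';

const exec = promisify(execFile);

// Pattern 1: hand the suite a JS resolution function directly.
// (Plain URL resolution only handles relative specifiers; it is just a stand-in.)
await runConformanceTests({
  resolve: async (specifier, referrerURL) => new URL(specifier, referrerURL).href,
});

// Pattern 2: shell out to a CLI resolver and adapt its output to the same shape.
await runConformanceTests({
  resolve: async (specifier, referrerURL) => {
    const { stdout } = await exec('my-resolver-cli', [specifier, referrerURL]);
    return stdout.trim();
  },
});
```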
Possible straw API:

```js
import RESOLUTIONS from '@nodejs/resolver-conformance-tests';

describe('node resolver-conformance', () => {
  for (const resolutionTest of RESOLUTIONS) {
    // Don't want to support non-import resolution.
    if (!resolutionTest.environments.includes('import')) continue;
    it(resolutionTest.id, async () => {
      for (const file of resolutionTest.files) {
        if (file.type === 'symlink') { /* ... */ }
        testFS.writeFileSync(file.relativePath, file.content);
      }
      assertEquals(
        resolutionTest.resolvedURL,
        await resolve(resolutionTest.specifier, resolutionTest.referrerURL)
      );
    });
  }
});
```

Some of that is boilerplate but a lot of it is really handy for people who want it to work nicely in their test suite (e.g. rerunning failing tests only). E.g. we could have dedicated support for writing the files that belong to the test case to a given FS implementation. |
The content of `RESOLUTIONS` is a data structure hinted at by the code consuming it. It's a collection of conformance test cases, each with the file layout it requires, the resolution it tests, and the expected URL after resolution finishes. The test writes out those files to realize the layout for the active case because it's not a given that the resolver under test interacts with the file system directly. |
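For concreteness, one possible shape for the entries in `RESOLUTIONS`, inferred from the straw API above (field names follow that example; the symlink `target` field is an assumption, not anything agreed on):

```js
// Possible shape of a RESOLUTIONS entry, inferred from the straw API above.
// Nothing here is a committed format; `target` for symlinks is an assumption.

/**
 * @typedef {object} ConformanceFile
 * @property {'file' | 'symlink'} type   how the entry is materialized
 * @property {string} relativePath       path relative to the test root
 * @property {string} [content]          file body, for type 'file'
 * @property {string} [target]           link target, for type 'symlink' (assumed)
 */

/**
 * @typedef {object} ConformanceTest
 * @property {string} id                 stable identifier (useful for rerunning one case)
 * @property {string} specifier          the specifier handed to the resolver
 * @property {string} referrerURL        URL of the referencing module
 * @property {string} resolvedURL        expected result of resolution
 * @property {string[]} environments     e.g. ['import']
 * @property {ConformanceFile[]} files   file layout the case depends on
 */
```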
That looks about right. Just want to point out that this was just a rough pseudo-code example of the kind of data required for a conformance data set. I assume gaps would pop up as one would actually try to implement one.
I wanted to make sure that virtualized file systems could use the test suite. Another approach for this would be that the test case references a fixture directory. That way resolvers that use anything but the actual disk could at least copy the directory into whatever data structure they use.
One important note: The test cases would hopefully exist as plain JSON. That way a Java implementation for some IDE could use them without having to execute JavaScript just to get the data. |
Can you explain what you mean by duplicates? I think it would be up to the maintainers of the conformance tests to ensure that the files aren't duplicated within a test. Using potentially shared fixture directories would make that easier than putting the file contents inside the data structure, though.
I think there's a misunderstanding about what `files` is for. One example for "why have files at all" would be:

```json
{
  "id": "self-reference-from-pkg-root",
  "specifier": "enclosing-pkg-name",
  "resolvedURL": "./pkg-root/lib/pkg.mjs",
  "referrerURL": "./pkg-root/ref.mjs",
  "environments": ["import"],
  "files": [
    {
      "type": "file",
      "relativePath": "./pkg-root/package.json",
      "content": "{\"name\":\"enclosing-pkg-name\",\"exports\":\"./lib/pkg.mjs\"}"
    }
  ]
}
```

Alternatively, with fixture directories it may be:

```json
{
  "id": "self-reference-from-pkg-root",
  "specifier": "enclosing-pkg-name",
  "resolvedURL": "./pkg-root/lib/pkg.mjs",
  "referrerURL": "./pkg-root/ref.mjs",
  "environments": ["import"],
  "fixturesDir": "self-reference"
}
```

Where `self-reference` would be a directory of fixture files realizing the same layout. |
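A sketch of how a consumer might materialize either form into whatever FS-like structure it uses. Here `targetFS` is assumed to expose a small sync subset of the `node:fs` API (e.g. an in-memory FS), and the fixtures root path and the symlink `target` field are assumptions:

```js
// Sketch: materialize one conformance test case into an arbitrary FS-like target.
// `targetFS` is assumed to expose a sync subset of node:fs (writeFileSync,
// mkdirSync, symlinkSync). The fixtures root path and the symlink `target`
// field are assumptions, not part of any agreed format.
import fs from 'node:fs';
import path from 'node:path';

function materialize(testCase, targetFS, fixturesRoot = './fixtures') {
  if (testCase.fixturesDir) {
    // Fixture-directory form: copy the on-disk directory into the target FS.
    // (readdirSync's `recursive` option needs a reasonably recent Node.)
    const root = path.join(fixturesRoot, testCase.fixturesDir);
    for (const entry of fs.readdirSync(root, { recursive: true, withFileTypes: true })) {
      if (!entry.isFile()) continue;
      const abs = path.join(entry.parentPath ?? entry.path, entry.name);
      const rel = path.relative(root, abs);
      targetFS.mkdirSync(path.dirname(rel), { recursive: true });
      targetFS.writeFileSync(rel, fs.readFileSync(abs));
    }
    return;
  }
  // Inline-`files` form: write each entry as described by the data.
  for (const file of testCase.files ?? []) {
    targetFS.mkdirSync(path.dirname(file.relativePath), { recursive: true });
    if (file.type === 'symlink') {
      targetFS.symlinkSync(file.target, file.relativePath);
    } else {
      targetFS.writeFileSync(file.relativePath, file.content);
    }
  }
}
```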
The test suite should be based on the resolver specification. So I would worry less about testing syntax / REPL etc. and more about effectively getting "coverage" on the specification (https://nodejs.org/dist/latest-v13.x/docs/api/esm.html#esm_resolver_algorithm). Then, yes, all of the cases you have listed need to be covered, as well as things like symlinks, exports, self resolution, exports edge cases, errors and conditions, file extension lookups on the CJS legacy paths, and path encoding (e.g. emojis in file names, and avoiding URL injections). All of these cases are covered by the resolutions the existing tests already perform. |
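As an illustration of the path-encoding cases, a hypothetical entry in the inline-`files` format above could pin down the expected percent-encoding of a non-ASCII file name:

```json
{
  "id": "emoji-in-filename",
  "specifier": "./🚀.mjs",
  "resolvedURL": "./%F0%9F%9A%80.mjs",
  "referrerURL": "./main.mjs",
  "environments": ["import"],
  "files": [
    {
      "type": "file",
      "relativePath": "./🚀.mjs",
      "content": "export default 1;"
    }
  ]
}
```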
I would strongly prefer "literally anything but yaml". TOML, even. |
Why does escaping make it untenable? The JSON doesn't need to be hand-edited. |
Only to a human reading the raw JSON file - is that a valuable audience? |
It’d probably be better for the JSON to have a file path pointing to an .mjs file, so we can link and evaluate the JS more easily. |
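That could look something like this for a `files` entry, with the inline `content` replaced by a path (the `contentPath` field name and the path are made up here):

```json
{
  "type": "file",
  "relativePath": "./pkg-root/lib/pkg.mjs",
  "contentPath": "./fixtures/self-reference/pkg-root/lib/pkg.mjs"
}
```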
This was brought up in nodejs/modules#451 (comment) for resolver stability: having tests available for tools and userland resolvers would be a great help in ensuring that they are compliant with the Node.js ES module resolver.
If we could move the resolution tests into something resembling an independent test suite that could easily be run against any async resolver function, that could assist the ecosystem in this transition process.
So -
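For reference, a minimal sketch of the shape such an "async resolver function" might have (illustrative only, not a committed signature):

```js
// Illustrative only: the shape of an async resolver function a portable suite
// could be run against, whether it wraps Node, a bundler, or an IDE's resolver.

/**
 * @callback AsyncResolver
 * @param {string} specifier    the request, e.g. 'dep' or './file.mjs'
 * @param {string} referrerURL  URL of the importing module
 * @returns {Promise<string>}   fully resolved URL (or a rejected promise on error)
 */
```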