
Needs tests. #5

Closed
jamestalmage opened this issue Dec 8, 2015 · 6 comments

@jamestalmage
Before this can even be considered for adoption, it needs a much more thorough and easy-to-understand test suite.

If I run into an issue with a library, the first place I look to make sure I am using it correctly is the tests. A single test that relies on confusing process forking and pulls in half a dozen fixtures at once just doesn't cut it.

@novemberborn had the great idea of not doing actual source transforms, but simple text ones instead. That is the idea behind fake-module-system. I think it makes the tests in capture-require really easy to understand.

@ariporad ariporad added the test label Dec 8, 2015
@ariporad
Collaborator

ariporad commented Dec 8, 2015

Agreed. The current test was designed to be quick and dirty. My focus now is improving the test suite.

Forking is done so that the test process's require doesn't get messed up, although that might not be necessary. fake-module-system doesn't actually mock the existing module system; it creates a new one. That makes no sense to me, but I feel like I'm missing something. What I'd like to do is try to use mock-fs, since the require system just uses fs. (I'm pretty sure that will work.)
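
Something like this, for example (a sketch assuming mock-fs's mock()/mock.restore() API; whether require actually reads through the mocked fs may depend on the Node version):

const mock = require('mock-fs');

// Replace the real file system with an in-memory one for the duration of the test.
mock({
  '/fixtures/foo.js': 'module.exports = "foo bar";'
});

// ... register hooks here and require the fixture ...

// Put the real file system back so later tests and the test runner are unaffected.
mock.restore();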

I think the best way to go about this would be to have one file in test/ for each feature, which mocks the file system, registers some loaders, and checks only that feature. Then just have mocha run each file in its own process. Another way would be to scrub the require cache and the require.extensions object before each test (sketched below). That seems a little more iffy, though. I'll look into it and start getting it fixed. I also want to get code coverage working.
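
A rough sketch of that scrubbing approach, assuming Mocha-style beforeEach hooks (the snapshot/restore details are just illustrative):

// Snapshot the pristine state once, before any test registers hooks.
const originalExtensions = Object.assign({}, require.extensions);

beforeEach(() => {
  // Drop whatever extensions a previous test registered, then restore the originals.
  Object.keys(require.extensions).forEach(ext => delete require.extensions[ext]);
  Object.assign(require.extensions, originalExtensions);

  // Clear the module cache so fixtures get re-compiled by the currently installed hooks.
  Object.keys(require.cache).forEach(key => delete require.cache[key]);
});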

@jamestalmage
Author

mock-fs would probably work, but you are then forced to use only transforms that create usable code. While you definitely should have a number of full-stack tests in your test suite, they are just much harder to write, and equally hard to read and understand.

fake-module-system just makes it much easier to write quick, easy-to-run tests. You install extensions as you normally would, but to mockSystem.extensions instead of require.extensions. This does mean you need a way to swap out require.extensions during testing. (I did it with an undocumented third parameter, but there may be a better way using proxyquire or similar.) You then call mockSystem.load(fileName). What you get back isn't an actual compiled object, just one that has the result of all those text transforms.

Let's say I want to write a test confirming the order transforms are applied.

Using Node's module system

I create a fixture

module.exports = "foo bar"

Then I create two transforms:

let transform1 = code => code.replace(/foo/g, 'bar')
let transform2 = code => code.replace(/bar/g, 'baz')

I then use pirates to apply 1 and 2 to require.extensions, and require my fixture file:

  • If I get back "baz baz", that means 1 was applied first, then 2.
  • If I get back "bar baz", that means 2 was applied first, then 2.
  • If I get back "foo baz", that means only 2 was applied.
  • If I get back "bar bar", that means only 1 was applied.

That's all fine and good, I have verified my library works. But will I remember all that when I come back in 6 months? 2 years? Will new contributors get it?

Another big drawback is that you need to make sure you wipe out the require cache and all applied transforms between each test. I am sure this is why you decided to employ forking. It makes it easier to guarantee a pristine state.

With fake-module-system

Same goal. But watch how much easier everything gets. My file contents can just be three characters: foo. That isn't valid JavaScript, but we don't care, because it will never be evaluated as such.

var system = new MockSystem();
system.contents['./foo.js'] = 'foo';

Now I add two transforms:

pirates.setModuleSystem(system); // or however you decide to inject it

pirates.addTransform(code => code + ' 1');
pirates.addTransform(code => code + ' 2');

const module = system.load('./foo.js');

console.log(module.code);

  • If I get back "foo 1 2", that means 1 was applied first, then 2.
  • If I get back "foo 2 1", that means 2 was applied first, then 1.
  • If I get back "foo 2", that means only 2 was applied.
  • If I get back "foo 1", that means only 1 was applied.

That just makes sense, and I can see it all in just a few lines of code. I am not hopping around looking up fixtures.

@ariporad
Collaborator

OK, @jamestalmage. I've been doing some work on this. You can check out the better-tests branch.

I decided to go with mocking out the file system, as using a mock module system seemed wrong to me. Along those lines, I wrote some helpers that make writing tests really simple and easy. You can find more info on them here. Basically, this is a test:

import test from 'ava';
import { makeTest } from './_utils';

test('something', makeTest( // This returns a function, which accepts `t`, then invokes `doTest`.
  {
    // These are mappings of filenames to content. The filenames are relative to test/fixture, and will get '.js' added
    // to the end. Since it is only possible to mock files that exist, it is recommended that you only use the
    // names of files in test/fixture, which are currently `foo`, `bar`, `baz`, and `qux`.
    foo: 'module.exports = @@a require(\'./bar\');',
    bar: 'module.exports = 5 @@b',
  },
  [
    // This is an array of hooks, or arrays of arguments for pirates.addHook; each entry becomes one addHook call.
    // For example, the following:
    code => code.replace('@@a', '\'aa\' + '),
    [code => code.replace('@@b', '* 2'), { matcher: path => path.indexOf('bar') !== -1 }],
    // Will be converted to:
    // pirates.addHook(code => code.replace('@@a', '\'aa\' + '));
    // pirates.addHook(code => code.replace('@@b', '* 2'), { matcher: path => path.indexOf('bar') !== -1 });
  ],
  {
    // These are mappings of filenames to their expected *export*. Each file will be required and the export 
    // compared with `t.same` (deep equality). The keys should be the same as above.
    foo: 'aa10',
    bar: 10,
  }
));

I'll try to add some more tests this week, although I'm not sure if I'll have a chance. I'll definitely get some time the week after, though. And you're welcome to contribute some tests, if you'd like.

@jamestalmage
Author

As for the code-coverage issue:

See how I handled it here: jamestalmage/nyc@d633bbf.

Yours would not even need to be that complex. At the top of your tests:

var fs = require('fs');

var pirates;

if (process.env.PIRATES_COVERAGE) {
  pirates = require('../lib/index.covered.js');
  require('signal-exit')(function () {
    // Persist the coverage data collected in global.__cov__ when the process exits.
    fs.writeFileSync(pathToCoverageOutput, JSON.stringify(global.__cov__));
  });
} else {
  pirates = require('../lib/index.js');
}

If you want to use AVA, then you need to either:

  1. limit yourself to a single test file, so there is only ever one process.
  2. use the process id (process.pid, I think) as part of the coverage output filename (sketched below).

Either method will require using the istanbul report CLI command (see the scripts section of package.json in istanbuljs/nyc#89). You will also need to make sure existing coverage files get deleted before your next test run, or you will have problems.
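
For option 2, the coverage write from the snippet above could key the output file on the process id. A sketch (the output location is just illustrative, and global.__cov__ is whatever the covered build exposes):

var fs = require('fs');
var path = require('path');

// One coverage file per AVA worker process, so parallel runs don't overwrite each other.
var pathToCoverageOutput = path.join(__dirname, 'coverage-' + process.pid + '.json');

require('signal-exit')(function () {
  fs.writeFileSync(pathToCoverageOutput, JSON.stringify(global.__cov__));
});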

@ariporad ariporad self-assigned this Dec 18, 2015
@ariporad
Collaborator

OK, I added a few more tests. I think it should be covered pretty well by now, but if there's something missing, let me know and I'll add some more.

@jdalton
Contributor

jdalton commented Oct 10, 2017

Can this issue be closed? Looks like there are tests now.

@danez danez closed this as completed Nov 10, 2017