Update to zip.js for tests #698
Conversation
Force-pushed from e3e3b9a to d7210d9.
✔️ No visual differences introduced by this PR. View Playwright Report (note: open the "playwright-report" artifact)
Good question! My memory is fuzzy, but I recall that I was testing a couple of different options from https://github.com/gildas-lormeau/zip.js/tree/master/dist#built-scripts-of-zipjs, which is why I was importing stuff from within
@toasted-nutbread I guess it's okay to merge this regardless of the above concerns? As long as there is no risk of this breaking loading of actually compressed dictionaries in the extension somehow.
Force-pushed from d7210d9 to 56518bc.
So I think I have figured out why this is/was done, and I've updated the comments accordingly. It seems to be in order to import only the subset of functionality that the extension application needs, namely decompression and processing via web workers. Since that does not work in Node, it tries to fall back, but it has nothing to fall back to because the Node functionality isn't present, so it silently returns the compressed data.

For posterity, here's some code I was using for testing. This doesn't exhibit the issue, since it imports all of the Node functionality when writing the zip, but the two functions can be used in successive runs with a file on disk.

```js
/**
 * @param {string} fileName
 * @param {unknown} json
 * @param {number} level
 * @returns {Promise<ArrayBuffer>}
 */
async function createZipData(fileName, json, level) {
    // Writing uses the full package entry point, which includes the Node codecs.
    const {BlobWriter, TextReader, ZipWriter} = await import('@zip.js/zip.js');
    const zipFileWriter = new BlobWriter();
    const zipWriter = new ZipWriter(zipFileWriter, {level});
    await zipWriter.add(fileName, new TextReader(JSON.stringify(json, null, 0)));
    const blob = await zipWriter.close();
    return await blob.arrayBuffer();
}

/**
 * @param {string} fileName
 * @param {ArrayBuffer} data
 * @returns {Promise<unknown>}
 */
async function readZipData(fileName, data) {
    // Reading uses the '/lib/zip.js' subpath, mirroring the built extension.
    const {Uint8ArrayReader, ZipReader, TextWriter} = await import('@zip.js/zip.js/lib/zip.js');
    const zipFileReader = new Uint8ArrayReader(new Uint8Array(data));
    const zipReader = new ZipReader(zipFileReader);
    const entries = await zipReader.getEntries();
    const entry = entries.find((item) => item.filename === fileName);
    if (typeof entry === 'undefined') { throw new Error(`Entry not found: ${fileName}`); }
    const result = await entry.getData(new TextWriter());
    await zipReader.close();
    return result;
}

/** */
async function main() {
    const fileName = 'index.json';
    const json = {test: 'value'};
    const data = await createZipData(fileName, json, 1);
    // json2 is the text read back from the archive entry (a string).
    const json2 = await readZipData(fileName, data);
    console.log(json, json2);
}

await main();
```
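For what it's worth, here's a minimal sketch (not part of this PR; the helper name is hypothetical) of the kind of assertion a test could make to catch the silent fallback described above: if decompression quietly didn't happen, the "text" read back is still raw compressed bytes and won't parse back into the original JSON.

```js
/**
 * Hypothetical sanity check: verify that data read back from an archive
 * round-trips as the original JSON, i.e. it was actually decompressed.
 * @param {unknown} original
 * @param {unknown} extracted
 */
function assertDecompressed(original, extracted) {
    if (typeof extracted !== 'string') { throw new Error('Expected extracted entry data to be a string'); }
    let parsed;
    try {
        parsed = JSON.parse(extracted);
    } catch (e) {
        // When the silent fallback occurs, the entry "text" is still compressed
        // bytes decoded as garbage and will not parse as JSON.
        throw new Error('Entry data is not valid JSON; it was likely returned still compressed');
    }
    if (JSON.stringify(parsed) !== JSON.stringify(original)) {
        throw new Error('Round-tripped JSON does not match the original');
    }
}
```

With the helpers above, `assertDecompressed(json, json2)` would throw when `level` is not 0 and the reader goes through `'@zip.js/zip.js/lib/zip.js'`.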
This change updates the test code to use zip.js instead of jszip.

I ran into one curiosity while doing this: allowing `createDictionaryArchiveData` to generate compressed data fails for some reason, and I call this out here: https://github.com/themoeway/yomitan/blob/e3e3b9a912a895b6cde0702e87280e8a3475d8f4/dev/dictionary-archive-util.js#L32-L35

This seems to stem from the test scripts importing directly from `'@zip.js/zip.js'`, whereas the built application uses `'@zip.js/zip.js/lib/zip.js'`. If I change either of them to match the other, the tests work as expected. That usage stems from #307. @djahandarie Is there a reason why `'@zip.js/zip.js/lib/zip.js'` is used here vs just `'@zip.js/zip.js'`? Comparing the generated files after running build-libs, it does add a lot of additional code, but I haven't completely looked into what's causing the difference. https://github.com/themoeway/yomitan/blob/4aaa9f15d97668203741c1731f15e710ae8b8294/dev/lib/zip.js#L18
As an example of what happens when `level` is not 0, where it appears as though the data isn't being decompressed:

(collapsed log omitted)
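As a rough illustration of the level-0 workaround called out in dev/dictionary-archive-util.js, here's a minimal sketch (the helper name is hypothetical and not part of this PR) of creating a stored, uncompressed archive, so that reading entries back never depends on the decompression code path:

```js
import {BlobWriter, TextReader, ZipWriter} from '@zip.js/zip.js';

/**
 * Hypothetical helper: create an archive with level 0 (stored, no compression),
 * so reading its entries back does not require the decompression codecs.
 * @param {string} fileName
 * @param {unknown} json
 * @returns {Promise<ArrayBuffer>}
 */
export async function createStoredZipData(fileName, json) {
    const zipWriter = new ZipWriter(new BlobWriter(), {level: 0});
    await zipWriter.add(fileName, new TextReader(JSON.stringify(json)));
    const blob = await zipWriter.close();
    return await blob.arrayBuffer();
}
```

The expectation, per the comment linked above, is that archives created this way read back correctly even through the `'@zip.js/zip.js/lib/zip.js'` import path.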