Individual lockfile per workspace #1223

Open
1 of 2 tasks
migueloller opened this issue Apr 20, 2020 · 69 comments
Labels
enhancement New feature or request

Comments

@migueloller

migueloller commented Apr 20, 2020

  • I'd be willing to implement this feature
  • This feature can already be implemented through a plugin

Describe the user story

I believe the best user story is told by the comments in this issue: yarnpkg/yarn#5428

Describe the solution you'd like

Add support for generating a lockfile for each workspace. It could be enabled via a configuration option like lockfilePerWorkspace.

Describe the drawbacks of your solution

As this would be opt-in and follow well-known semantics, I'm not sure there are any drawbacks to implementing the feature. One could argue about whether it should live in the core or be implemented as a plugin; I personally don't have a preference for one over the other.

Describe alternatives you've considered

This could be implemented in the existing workspace-tools plugin. I'm unsure if the hooks provided by Yarn would allow for this, though.

Additional context

yarnpkg/yarn#5428

@migueloller migueloller added the enhancement New feature or request label Apr 20, 2020
@arcanis
Member

arcanis commented Apr 21, 2020

I believe the best user story is told by the comments in this issue: yarnpkg/yarn#5428

It's best to summarize it - there are a lot of discussions there 🙂

I've seen your comment about cache layers, but I wonder if what you're looking for isn't just a way to compute a "cache key" for a given workspace?

@migueloller
Author

migueloller commented Apr 21, 2020

I'll gladly do that 😄

Current state of affairs

Yarn workspaces have a lot of benefits for monorepos, one of them being the ability to hoist third-party dependencies to reduce installation times and disk space consumed. This works by picking a dependency version that satisfies the most dependency requirements specified by the package manifest files. If no single version matches all requirements, that's OK: the dependency is kept in the package's node_modules folder instead of the top level, and the Node.js resolution algorithm takes care of the rest. With PnP, I'm assuming the resolution algorithm is patched in a similar way to make it work with multiple versions of dependencies.

I'm assuming that because all of these dependencies are managed by a single yarn install the Yarn team opted to have a single lock file at the top-level where the workspace root is defined.

What's the problem?

In various monorepos, it is desirable to treat a workspace as an independent deployable entity. Most deployment solutions out there will look for manifest and lock files to set up required dependencies. In addition, some tools, like Docker, can leverage the fact that versions are immutable to implement caching and reduce build and deployment times.

Here's the problem: because there is a single lock file at the top-level, one can't just take a package (i.e., workspace) and deploy it as one would when not using Yarn workspaces. If there was a lock file at the package level then this would not be an issue.


I've seen your comment about cache layers, but I wonder if what you're looking for isn't just a way to compute a "cache key" for a given workspace?

It's not just computing a "cache key" for caching, but also having a lock file to pin versions. For example, if you're deploying a workspace as a Google Cloud Function you would want the lock file to be there so that installation of dependencies is pinned to what the lock file specifies. One could copy the entire lock file to pin versions, but then the caching mechanism breaks. So the underlying thing we're working with here is that deployment platforms use lock files as a cache key for the third-party dependencies.

@arcanis
Member

arcanis commented Apr 21, 2020

Let's see if I understand this properly (consider that I don't have a lot of experience with Docker - I've played with docker-compose before, but there are many subtleties I'm still missing):

  • You create a Docker image by essentially passing it a folder path (your project path)
  • Docker will compare the content of the folder with what's on the latest image, and if it didn't change it won't create a new layer
  • So when deploying a monorepo, the layer cache for each workspace gets busted at each modification in each workspace
  • To solve that, you'd like to deploy a single folder from the monorepo, independent from its neighbours. This way, Docker wouldn't see its content change unless the files inside it actually change.
  • But since there is no lockfile in this folder, you cannot run an install there.

Did I understand correctly? If so, a few questions:

  • Workspaces typically cross-reference each other. For example, frontend and backend will both have a dependency on common. How does this fit in your vision? Even if you only deploy frontend, you'll still need common to be available as well.
  • Do you run yarn install before building the image, or from within the image? I'd have thought that the image was compiled with the install artifacts (by which point you don't really need to have the lockfile at all?), but maybe that's an incorrect assumption.
  • Specifically, what prevents you from copying the global lockfile into your workspace, then run a yarn install to prune the unused entries? You'd end up with a deterministic lockfile that would only change when the workspace dependencies actually change.
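For illustration, a rough sketch of that third suggestion as shell commands (paths are hypothetical, and this is only a sketch of the idea, not a verified recipe):

# copy the root lockfile into the workspace, then let an install prune it
cp yarn.lock packages/frontend/yarn.lock
cd packages/frontend
yarn install   # intended to drop the entries frontend doesn't need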

@migueloller
Author

migueloller commented Apr 21, 2020

Did I understand correctly?

I think so, but let me add a bit more context to how the layer caching mechanism works in Docker.

When building a docker image (i.e., docker build) a path to a folder is provided to Docker. This folder is called the build context. While the image builds, Docker can only access files from the build context.

In the Dockerfile, the specification to build the Docker image, there are various commands available. The most common one is COPY; it will copy files from the build context to the image's filesystem, excluding patterns from .dockerignore. This is where caching comes in. Every time a Docker command is run, an image layer is created. These layers are identified by a hash of the filesystem's content. What this means is that people will usually have a Dockerfile somewhat like this:

# layer 1
FROM node
# layer 2
COPY package.json yarn.lock ./
# layer 3
RUN yarn install
# layer 4
COPY . .

Again, this might be an oversimplification but it gets the point across. Because we first copy package.json and the lockfile and then run yarn install, if package.json and yarn.lock didn't change the next time we build this image, only layer 4 has to be rebuilt. If nothing changed, then nothing gets rebuilt, of course. One could make the build context the entire monorepo, but the lock file would still change a lot even when the dependencies of the package we're trying to build have not changed.
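For illustration, the difference boils down to what gets passed as the build context (paths hypothetical):

# whole monorepo as context: any change to the shared yarn.lock busts the install layers
docker build -t my-app .
# only the workspace folder as context: the package.json/yarn.lock layers survive
# as long as that package's manifest and (per-workspace) lockfile are unchanged
docker build -t my-app ./packages/my-app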

Workspaces typically cross-reference each other. For example, frontend and backend will both have a dependency on common. How does this fit in your vision? Even if you only deploy frontend, you'll still need common to be available as well.

This is a good question. After much thought, our solution is going to be a private NPM registry. This will not only work for building Docker images but also for using tools like GCP Cloud Functions or AWS Lambda. If Docker were the only tool we were using, we could use the entire monorepo as the build context but still just COPY the dependencies, and Docker layer caching would still work. This time, instead of the cache key being a single lock file, it would be the lock file of the package and all its transitive dependencies that live in the monorepo. That's still not the entire repo's lock file. But since Docker isn't the only deployment platform we use that expects a yarn.lock to be there, this solution doesn't work for us.

Do you run yarn install before building the image, or from within the image? I'd have thought that the image was compiled with the install artifacts (by which point you don't really need to have the lockfile at all?), but maybe that's an incorrect assumption.

It's a best practice to do it within the image. This guarantees native dependencies are built in the appropriate OS and has the benefit of caching to reduce build times in CI. In our current workaround we actually have to build everything outside, move it into Docker, and run npm rebuild. It's very very hacky though and we're now at a point where the lack of caching is slowing us down a lot.

Specifically, what prevents you from copying the global lockfile into your workspace, then run a yarn install to prune the unused entries? You'd end up with a deterministic lockfile that would only change when the workspace dependencies actually change.

This might be a good workaround for now, perhaps in a postinstall script. Would this keep the hoisting benefits of Yarn workspaces?

@migueloller
Author

Specifically, what prevents you from copying the global lockfile into your workspace, then run a yarn install to prune the unused entries? You'd end up with a deterministic lockfile that would only change when the workspace dependencies actually change.

I tried this out and unfortunately it's not as straightforward, since running yarn install anywhere within the repository uses the workspaces feature.

@migueloller
Author

Linking a comment to a related issue here: yarnpkg/yarn#4521 (comment)

@Larry1123
Contributor

If having an independent deployable entity is the main reason for this, I currently have a plugin that is able to do this. I need to work with my employer to get it released, however.
It's quite simple in how it works and could likely be made better. It copies the lockfile and the workspace into a new location, edits devDependencies out of the workspace, and runs a normal install. It keeps everything pinned where it was. It reuses the Yarn cache and keeps the yarnrc, plugins, and Yarn version installed in the output.

@tabroughton

If having an independent deployable entity is the main reason for this, I currently have a plugin that is able to do this. I need to work with my employer to get it released, however.
It's quite simple in how it works and could likely be made better. It copies the lockfile and the workspace into a new location, edits devDependencies out of the workspace, and runs a normal install. It keeps everything pinned where it was. It reuses the Yarn cache and keeps the yarnrc, plugins, and Yarn version installed in the output.

@Larry1123 I think your plugin could be very useful to quite a few folks. Will your employer allow you to share it?

@Larry1123
Contributor

I got the OK to release it; I will have to do it when I have the time.

@samarpanda

@Larry1123 Wondering how you are handling Yarn workspaces. Does your plugin create a yarn.lock for each package in the workspace?

@Larry1123
Contributor

In a way, yes: it takes the project's lockfile and, in a new folder, reruns the install of the workspace as if it were the only one in the project, after also removing devDependencies. That way the resulting lockfile matches the project, but only for what is needed for that workspace. It also currently hardlinks the cache and copies what it can keep from the project's .yarn files.
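For illustration, a rough manual approximation of what the plugin automates, as shell (paths hypothetical; the real plugin also rewrites the manifest and handles the cache more carefully):

mkdir -p dist/my-workspace
cp yarn.lock .yarnrc.yml dist/my-workspace/
cp packages/my-workspace/package.json dist/my-workspace/
cd dist/my-workspace
# remove devDependencies from package.json here, then run a normal install so
# the resulting lockfile only covers what this workspace needs
yarn install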

@borekb
Contributor

borekb commented Jun 19, 2020

The backend + frontend + common scenario is a good one, we have something similar and it took me a while to realize that we sort of want two sets of workspaces. Let's say the repo looked like this:

.
├── common/
│   └── package.json
│
├── frontend/
│   └── package.json
│
├── backend/
│   └── package.json
│
├── package.json
└── yarn.lock

We're building two Docker images from it:

  1. frontend-app, where the Docker build context contains:
    • common/
    • frontend/
    • yarn.lock
  2. backend-app, where the Docker build context contains:
    • common/
    • backend/
    • yarn.lock

This can be done, and is nicely described in yarnpkg/yarn#5428 (comment) (we furthermore utilize tarball context as a performance optimization), but the issue with a single lockfile stays: a change in frontend dependencies also affects the backend build.

(We also have other tooling that is affected by this; for example, we compute the versions of frontend-app and backend-app from Git revisions of the relevant paths, and a change to yarn.lock currently affects both apps.)

I don't know what the best solution would be, but one idea I had was that workspaces should actually be a two-dimensional construct in package.json, like this:

{
  "workspaces": {
    "frontend-app": ["frontend", "common"],
    "backend-app": ["backend", "common"]
  }
}

For the purposes of module resolution and installation, Yarn would still see this as three "flat" workspaces (frontend, backend, and common), and the resulting node_modules structure (I don't know how PnP does this) would be identical to today. But Yarn would understand how these sets of workspaces are intended to be used together, and it would maintain two additional files, yarn.frontend-app.lock and yarn.backend-app.lock (I'm not sure whether the central yarn.lock would still be necessary, but that's a minor detail for this argument's sake).

When we'd be building a Docker image for frontend-app (or calculating a version number), we'd involve these files:

  • common/
  • frontend/
  • yarn.frontend-app.lock

It would be awesome if this could work but I'm not sure if it's feasible...


As a side note, I previously thought that I wanted to have yarn.lock files in our workspaces, i.e., backend/yarn.lock and frontend/yarn.lock, but I now mostly agree with this comment:

I think the idea of Yarn 1.x monorepo is a little bit different. It isn't about independent projects under one roof, it is more about a singular big project having some of its components exposed (called workspaces).

In our case, the frontend and backend workspaces are not standalone – they require common to work. Yarn workspaces are a great mechanism to link them together, de-duplicate dependencies, etc.; we "just" need to have multiple sets of workspaces at Docker build time.

@migueloller
Author

I've changed where I stand on this issue and shared my thoughts here: yarnpkg/yarn#5428 (comment).

@borekb
Contributor

borekb commented Jul 12, 2020

@arcanis I'm reading your Yarn 2.1 blog post and there's a section on Focused Workspaces there. I don't have experience with this from either 2.x or 1.x Yarn but is it possibly solving the backend + frontend + common scenario & Docker builds?

Like, could I create a build context that contains the main yarn.lock file and then just packages/frontend + packages/common (omitting packages/backend), then focus the workspace on frontend and run the Docker build from there?

Or is it still not enough and something like named sets of workspaces would be necessary?

@arcanis
Member

arcanis commented Jul 12, 2020

I think it would, yes. The idea would be to run yarn workspaces focus inside frontend, which will install frontend+common, then to mount your whole repo inside the Docker image.

I encourage you to try it out and see whether there are blockers we can solve by improving this workflow. I'm not sold on this named workspace set idea, because I would prefer Yarn to deduce which workspaces are needed based on the main ones you want. It's too easy to make a mistake otherwise.
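For illustration, a minimal sketch of that flow (workspace and image names hypothetical):

cd frontend
yarn workspaces focus            # installs frontend plus the workspaces it depends on (e.g. common)
cd ..
docker build -t frontend-app .   # copy/mount the whole repo, already installed, into the image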

@borekb
Contributor

borekb commented Jul 12, 2020

I'm not sold on this named workspace set idea, because I would prefer Yarn to deduce which workspaces are needed based on the main ones you want. It's too easy to make a mistake otherwise.

Agree; if the focus mode works, then it's probably better.

Do you have a suggestion on how to construct the common/frontend/backend dependencies to make them as tricky as possible for Yarn? Like, request some-dep@1.x from common, @2.x from frontend and @3.x from backend? The harder the scenario, the better 😄.

@migueloller
Author

I don't know if this makes a difference in your reasoning @arcanis, but I thought it would be worth mentioning in case there's something about Yarn's design that would lend itself to this... this issue could also be solved by having a lockfile per worktree instead of per workspace. For example, each deployable workspace can itself be a worktree and specify which workspaces from the project it depends on.

Here's an example repo: https://github.com/migueloller/yarn-workspaces

It would be fine to have a lockfile for app1 and app2.

That being said, based on what I had commented before (#1223 (comment)), one could just have multiple yarn projects in the same repo and have them all share the same Yarn cache. While it wouldn't be as nice as running yarn at the top of the repo if it were a single project, it will help with disk size if Yarn PnP is being used.

I'm taking the definition of project > worktree > workspace from here.

@migueloller
Author

Another thought is that yarn workspaces focus app1 could be called with an option so that it modified the top-level lockfile; perhaps this could be used to generate the "trimmed-down" lockfile for the Docker image.

I also wanted to add another use case in addition to Docker images. If one has a large monorepo where CI jobs are started depending on whether a certain "package" changed, having a shared lockfile makes that a bit hard for the same reasons it's hard on Docker's cache. If we want to check if a workspace changed, including its dependencies, we would also want to check the lockfile. For example, some security update could've been added that changed the patch version being used but not the version range in package.json. If the top-level lockfile is used, then the CI job would run for every change on any package. Having a single lockfile per workspace would alleviate this issue by simply using that lockfile instead of the top-level one.
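For illustration, with a per-workspace lockfile living inside the workspace folder, such a CI gate could be as simple as (paths hypothetical):

if git diff --quiet HEAD~1 -- packages/app1; then
  echo "app1 (including its own lockfile) unchanged - skipping its CI job"
else
  echo "app1 changed - running its CI job"
fi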

@borekb
Contributor

borekb commented Jul 21, 2020

If one has a large monorepo where CI jobs are started depending on whether a certain "package" changed, having a shared lockfile makes that a bit hard for the same reasons it's hard on Docker's cache.

That is a good point, and we have a similar use case. It's not just CI: we also, for example, calculate the app versions ("apps" being e.g. frontend and backend) from their respective paths and "youngest" Git commits; a single shared yarn.lock makes this problematic.

@gntract

gntract commented Aug 7, 2020

yarn workspaces focus is a great command/plugin 👍

I'm currently using it within our Dockerfile - one question about determinism (which may expose my misunderstanding of Yarn's install process):

Is it possible to run focus such that it should fail if the yarn.lock would be modified (i.e. --frozen-lockfile or --immutable but allow .pnp.js to be modified)?

@arcanis
Member

arcanis commented Aug 7, 2020

No (because the lockfile would effectively be pruned of extraneous entries, should it be persisted, so it wouldn't pass the immutable check) - I'd recommend running the full yarn install --immutable as a CI validation step.
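For illustration, the suggested validation is a single root-level step, run once per CI pipeline and separate from any focused install:

yarn install --immutable   # fails if the checked-in yarn.lock would need to change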

@Larry1123
Contributor

@tabroughton @samarpanda I have gotten the plugin I was working on public https://gitlab.com/Larry1123/yarn-contrib/-/tree/master/packages/plugin-production-install.
I hope it works for your needs.

@andreialecu
Contributor

andreialecu commented Oct 20, 2020

I have a slightly different use case in mind for this feature. Originally wrote it on Discord, but copying it here for posterity:

One of the downsides of monorepos seems to be that once you add new developers, you have to give them access to the whole code base, while with separate repos you could partition things so that they only have access to smaller bits and pieces.

Now, this could probably be solved with git submodules, putting each workspace in its own git repo. Only certain trusted/senior devs could then have access to the root monorepo, and work with it as one.

The only problem holding this back seems to be the lack of a dedicated yarn.lock per workspace.

With a yarn.lock per workspace it seems that the following workflow would be possible:

  1. Add new dev to team, give them access to only a limited set of workspaces (separate git repos)
  2. They can run yarn install, and it would install any workspace dependencies from a private package repository (verdaccio, or github packages, private npm, etc)
  3. They can just start developing on their own little part of the project, and commit changes to it in isolation. The top level monorepo root yarn.lock is not impacted.
  4. CI can still be set up to test everything before merging

Seems like there would also be a need to isolate workspace dependencies to separate .yarn/cache in workspace subdirs if this approach was supported.

I'm not concerned about pushing, more concerned about pulling. I don't want any junior dev to simply pull all the company intellectual property as one simple command.

How do you guys partition projects with newer, junior (not yet established/trusted) devs, now that everyone works from home?

@Larry1123
Contributor

This is something that has been a pain point for my work also. I have been wanting to work out a solution to this, but have not had the time to truly work it out. Once I understand Yarn better, I had intended to try to work out a plan of action. I feel a holistic approach would have various integrations into things like identity providers, git, GitHub/GitLab/Bitbucket, Yarn, and tooling, for zero-trust coordination of internal dependencies and resolutions throughout the super repo. The integration into the git host would be there to handle cross-project concerns, but I'm not sure what level it would need.
I feel that a tool like this is sorely needed, however hard to get right and time-consuming to produce. I also feel that a larger scope could be covered by creating something meant for cross-organization cooperation, as it would then have open-source uses as well.
It would likely take RFC-style drafting and planning to build, as current tooling just doesn't support such workflows well.
With how things go now, my work tends to lean toward not trusting new/junior devs with wide access; if they work on a project, it has to be in its own scoped repos and projects.

@andreialecu
Contributor

andreialecu commented Oct 20, 2020

I have created a pretty simple Yarn 2 plugin that will create a separate yarn.lock-workspace for each workspace in a monorepo:

https://github.com/andreialecu/yarn-plugin-workspace-lockfile

I haven't yet fully tested it, but it seems to create working lockfiles.

I would still recommend @Larry1123's plugin above for production deployment scenarios: #1223 (comment), but perhaps someone will find this useful as well.

@jakebailey

I'll mirror my comment from yarnpkg/yarn#5428 (comment) here:

My need for this behavior (versioning per workspace, but still have lockfiles in each package) is that I have a nested monorepo, where a subtree is exported to another repo entirely, so must remain independent. Right now I'm stuck with lerna/npm and some custom logic to attempt to even out versions. It would be nice if yarn could manage all of them at once, but leave the correct subset of the "entire workspace pinning" in each. (Though, I'm really not sure how this nested workspace will play out if I were to switch to berry, when berry needs to be committed to the repo, so needs to be committed twice?)

@andreialecu That plugin looks interesting; it's almost what I'm looking for, though appears to be directed towards deployment (and not just general development). But it does give me hope that what I'm looking for might be prototype-able in a plugin.

@andreialecu
Contributor

@jakebailey do note that there are two plugins:

for deployment: https://gitlab.com/Larry1123/yarn-contrib/-/tree/master/packages/plugin-production-install
for development: https://github.com/andreialecu/yarn-plugin-workspace-lockfile

Feel free to take either of them and fork them. If you end up testing mine and improving it, feel free to contribute changes back as well.

@eric-burel

eric-burel commented Jul 7, 2022

Hi folks, I gave generate-lockfile a shot but it couldn't read the root yarn.lock: varsis/generate-lockfile#4 (comment)

Will try yarn-plugin-entrypoints-lockfiles.

Edit: this seems to work much better, see my example monorepo: https://github.com/VulcanJS/vulcan-npm/pull/132/files.
I've written a README for this package as well: JanVoracek/yarn-plugin-entrypoint-lockfiles#2

Last issue: I hit "error Your lockfile needs to be updated, but yarn was run with --frozen-lockfile" in CI. The yarn.lock seems not totally up to date; the diff between the yarn.lock updated after running yarn and the yarn.vulcan-remix.lock outputted automatically by the plugin looks like this:

diff yarn.lock yarn.vulcan-remix.lock 
1663,1665c1663,1665
< "@types/react-dom@npm:<18.0.0, @types/react-dom@npm:^17.0.14":
<   version: 17.0.17
<   resolution: "@types/react-dom@npm:17.0.17"
---
> "@types/react-dom@npm:^17.0.16":
>   version: 17.0.16
>   resolution: "@types/react-dom@npm:17.0.16"
1668c1668
<   checksum: 23caf98aa03e968811560f92a2c8f451694253ebe16b670929b24eaf0e7fa62ba549abe9db0ac028a9d8a9086acd6ab9c6c773f163fa21224845edbc00ba6232
---
>   checksum: 2f41a45ef955c8f68a7bcd22343715f15e1560a5e5ba941568b3c970d9151f78fe0975ecf4df7f691339af546555e0f23fa423a0a5bcd7ea4dd4f9c245509936
1672,1674c1672,1674
< "@types/react@npm:^17, @types/react@npm:^17.0.43":
<   version: 17.0.47
<   resolution: "@types/react@npm:17.0.47"
---
> "@types/react@npm:^17.0.16":
>   version: 17.0.44
>   resolution: "@types/react@npm:17.0.44"
1679c1679
<   checksum: 2e7fe0eb630cb77da03b6da308c58728c01b38e878118e9ff5cd8045181c8d4f32dc936e328f46a62cadb56e1fe4c5a911b5113584f93a99e1f35df7f059246b
---
>   checksum: ebee02778ca08f954c316dc907802264e0121c87b8fa2e7e0156ab0ef2a1b0a09d968c016a3600ec4c9a17dc09b4274f292d9b15a1a5369bb7e4072def82808f
5949,5952c5949,5952
< "graphql@npm:^16.3.0, graphql@npm:^16.4.0":
<   version: 16.5.0
<   resolution: "graphql@npm:16.5.0"
<   checksum: a82a926d085818934d04fdf303a269af170e79de943678bd2726370a96194f9454ade9d6d76c2de69afbd7b9f0b4f8061619baecbbddbe82125860e675ac219e
---
> "graphql@npm:^15.6.2":
>   version: 15.8.0
>   resolution: "graphql@npm:15.8.0"
>   checksum: 423325271db8858428641b9aca01699283d1fe5b40ef6d4ac622569ecca927019fce8196208b91dd1d8eb8114f00263fe661d241d0eb40c10e5bfd650f86ec5e
11725c11725
< "vulcan-remix@workspace:.":
---
> "vulcan-remix@workspace:starters/remix":
11727c11727
<   resolution: "vulcan-remix@workspace:."
---
>   resolution: "vulcan-remix@workspace:starters/remix"

To fix this I have to drop --frozen-lockfile from the yarn install in my CI, but this is a bad practice.

Also @borekb: YARN_LOCKFILE_FILENAME is not documented anywhere; is that custom to your setup? For now, I just rename the file to yarn.lock in my CI after copying it to the right place.

@eric-burel

eric-burel commented Jul 11, 2022

Hi, just to rephrase what I think is needed now to close this issue:

  1. we need a way to run yarn on a custom file, like YARN_LOCKFILE_FILENAME=yarn.remix.lock yarn
  2. we need a way to run yarn that generates the lockfile, but not node_modules (or whatever solution used for modules).

The idea is that you could "trick" yarn into generating a lockfile, but without actually installing packages. Since this lockfile is NOT named yarn.lock, it won't break package hoisting for workspaces when you do a normal yarn in the monorepo root.

The process could be as follow:

  • Install yarn-plugin-entrypoint-lockfiles
  • Run yarn
  • Copy the generated yarn.my-entrypoint.lock files into relevant workspaces
  • In each relevant workspace, run YARN_LOCKFILE_FILENAME=yarn.my-entrypoint.lock yarn => this step will update the generated lockfile to fix some potential issues.

It could even be simplified like this:

  • Create my-workspace/yarn.my-workspace.lock
  • In this workspace, run YARN_LOCKFILE_FILENAME=yarn.my-workspace.lock yarn --only-generate-lockfile

Maybe those options kinda exist today? But I couldn't find anything like that in the docs.
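For illustration, the simplified flow above would look like this as shell (both the environment variable and the flag are hypothetical - neither exists in Yarn today):

cp yarn.my-workspace.lock my-workspace/
cd my-workspace
YARN_LOCKFILE_FILENAME=yarn.my-workspace.lock yarn --only-generate-lockfile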

@lukemovement

lukemovement commented Oct 12, 2022

Why not use a generic name for the lock files in individual workspaces, the same way package-lock.json is used as a fallback when yarn.lock is not present? This could then be generated after an install is run from the root workspace.

/workspace
    yarn.lock
    package.json
    /packages
        /my-app
            package.json
            yarn.workspace.lock

Executing yarn install from within /my-app when the parent workspace is not present would then result in yarn.workspace.lock being used as a fallback.

@zaro

zaro commented Feb 9, 2023

Here is my approach to this. Since we don't exactly have a monorepo, but rather a repo with other repos as submodules, this makes for quite a problematic deployment strategy without individual lockfiles. So my solution is simply a fork of yarn-plugin-entrypoint-lockfiles, but the lockfiles are generated next to each package.json with a fixed name, yarn.deploy.lock. This makes Dockerfiles really simple. Also, workspace: resolutions are replaced with the actual version.

Link to the plugin https://github.com/zaro/yarn-plugin-deploy-lockfiles .

@asgeirn

asgeirn commented Feb 11, 2023

This is the workaround we use for a metarepository:

Meta root:

.yarnrc.yml
package.json
yarn-workspace.lock

Individual projects:

package.json
yarn.lock

Contents of root .yarnrc.yml:

lockfileFilename: yarn-workspace.lock

Running yarn install in the workspace root uses the yarn-workspace.lock file, as well as the workspaces definition and workspace:* resolutions in the workspace package.json file.

The individual projects are separate Git repositories and are built separately from each other using the local yarn.lock file.

There is one caveat - updating the per-project yarn.lock file. We have delegated this task to a GitHub action triggered as part of the pull request flow, running yarn install --mode=update-lockfile and committing an updated yarn.lock file.
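For illustration, the core of that action boils down to these commands (the trigger and push wiring are specific to each setup):

yarn install --mode=update-lockfile   # refreshes yarn.lock without installing anything
git add yarn.lock
git commit -m "chore: update yarn.lock"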

@JasonMan34

JasonMan34 commented May 2, 2023

@arcanis Is there any plan to implement this in the future? The inability to have a lockfile (and an individual .yarn folder) per workspace is such a huge downside that it overshadows any upside using workspaces can provide us.

@manoj-r

manoj-r commented May 4, 2023

I just started using Yarn workspaces for some of my projects and have used nohoist / nmHoistLimits: workspaces. Does Yarn generate the lock file only in the root repository even with noHoist? This is the behavior I see in my project. Can someone please confirm that the lock file will be generated only in the root project, regardless of the hoist configuration?

@lukemovement

lukemovement commented May 4, 2023 via email

@bertho-zero

bertho-zero commented Oct 24, 2023

lockfileFilename no longer exists as of v4, and that breaks the solutions some people have built here.

@borekb
Contributor

borekb commented Nov 4, 2023

What @bertho-zero said above is true ☝️ - unfortunately we can't upgrade to Yarn 4 because of the removal of lockfileFilename in #5604.

I understand why the team wanted to remove it, but there's no other solution to the use cases described in this issue that I'm aware of.

@arcanis
Member

arcanis commented Nov 4, 2023

I'm interested in trying to figure out a proper integrated solution for 4.1. Would it solve your use cases if yarn workspaces focus updated the lockfile to remove the dependencies from unfocused packages?

@borekb
Contributor

borekb commented Nov 4, 2023

I'm not quite sure. Separate lockfiles, as implemented by https://github.com/JanVoracek/yarn-plugin-entrypoint-lockfiles, are admittedly a bit wasteful (the contents of the lockfiles are partly duplicated and commands like yarn add are slower), but they also have several advantages: they are simple to reason about; our Docker build is straightforward (we pass a file like yarn.docusaurus.lock to the build context); we can view the history of separate lockfiles in Git; we can trigger GitHub Actions workflows based on changes to those separate lockfiles (for example, rebuild our docs image if dependencies changed slightly); and our scripts can calculate the "latest Git commit touching anything related to our Docusaurus docs", which should include yarn.docusaurus.lock but couldn't have included yarn.lock because it is shared; etc.

It's hard for me to imagine how to support all of this (and those are real use cases BTW) with a single shared lockfile. I have to admit that the idea of separate lockfiles was quite controversial in our team initially and it's still relatively weird to see several yarn.<something>.lock files in our repo but on the upside, it handles every use case we threw at it well.

@borekb
Contributor

borekb commented Nov 4, 2023

BTW it's great that you're thinking about how to implement this for 4.1!

@valleywood

valleywood commented Nov 13, 2023

We are also having an issue with this, and it prevents us from stepping up to Yarn 4, as we have a workspace solution that, without lockfileFilename, will lead to conflicting lockfiles. @arcanis it sounds great if you could figure out an alternative way of solving this now that the lockfileFilename option has been removed! 🙏

The use case is that we have a Yarn workspace containing a number of npm package repos. When running yarn install on one of these packages (with a yarn.lock lockfile in the workspace root), the yarn.lock file generated in the workspace context will differ from what the lockfile would have looked like if yarn install were run in an environment where the package code is located outside the workspace.

If we commit the lockfile generated by running yarn install inside the workspace, it becomes a problem when running yarn install with a frozen lockfile as part of our CI/CD process in GitHub Actions: that run throws an error indicating that we are trying to modify the lockfile, because the lockfile is then generated outside the workspace and won't have the same structure.

Being able to rename the lockfile in the workspace directory solved this, so that the yarn.lock file in each subrepo was unaffected by running in a workspace environment, thereby causing no issues when running yarn install with a frozen lockfile in the CI/CD processes of each subrepo.

Example
Works (yarn lockfile generated in subrepo is the same when running yarn install inside/outside the workspace)

Workspace lockfile                  Sub-repo lockfile           Sub-repo CI/CD lockfile
yarn-workspace.lock                 yarn.lock                   yarn.lock

Doesn't work (yarn lockfile generated in subrepo differs when running yarn install inside/outside the workspace)

Workspace lockfile                  Sub-repo lockfile           Sub-repo CI/CD lockfile
yarn.lock                           yarn.lock                   yarn.lock

@trusktr

trusktr commented Apr 28, 2024

My workspaces are also git submodules in my super repo (for example). Any of the workspaces can be cloned separately on their own. I'd like those workspaces to have their own lock files for when they are cloned separately. The workspace lockfiles would be ignored when installing using the top-level workspace, of course (the top level would ensure they are in sync, and could even throw an error if they are not in sync, and maybe even provide an option to force-overwrite the workspace lock file to fix any issue)

@jakebailey

@trusktr FWIW what you're describing is what I implemented in https://github.com/jakebailey/yarn-plugin-workspace-lockfile via a plugin (modified from someone else's attempt); I ended up not needing it personally (the team I was on didn't end up adopting yarn), but it did seem to work well at the time, those many years ago. It's possible it still works, or could work with modifications for Yarn 4.

@FezVrasta

Would a pnpm deploy equivalent work to cover this use case?

@KevinEdry

Has anyone found a solution for this?
We are managing a monorepo with a centralized lockfile, and because of that, when we want to build a Docker image for one of our services, we have to copy the entire monorepo in order to run the install command with the --immutable flag.

@akwodkiewicz
Contributor

@KevinEdry, yes and no.

Your issue could also be solved if Yarn introduced the --immutable flag for the workspaces focus command. However, as you can see in this closed issue, this is not happening.

The suggestion (and I believe the original philosophy behind these features) from arcanis (here) is to run yarn install --immutable at an earlier stage to verify packages before you do any next steps.

@KevinEdry

@akwodkiewicz Thanks for the quick reply!
I was wondering whether implementing your suggestion would require us to perform a lengthy install process twice in CI. I'm happy to run a command with the --immutable flag before I build the Docker images to check the lockfile integrity, but I don't want to install the dependencies if I then need to run another install with the focus command.
This means we would need to run a yarn install three times during CI:

  1. Before all the Dockers start to build with the workspace focus.
  2. When the Dockerfile builds/bundles the actual app/service in dev mode with all of the dev dependencies with a regular yarn install.
  3. When copying just the production files and dependencies with the --production flag.

Is there a way to minimize this install process so that --immutable only checks the lockfile and doesn't actually install anything?
Thanks!

@akwodkiewicz
Contributor

AFAIK no, there is no way to trim this process. I totally understand the issue (I encountered a similar scenario) and yeah, until this individual-lockfile option is implemented by someone, we're stuck with:

  1. In CI pipeline: running yarn install --immutable
  2. In Dockerfile:
    1. running yarn focus in one stage to be able to build something
    2. running yarn focus --production in a different stage, to have all runtime deps installed

The only thing I can suggest is fine-tuning Dockerfiles with the multi-stage builds and relying on registry caching to avoid as much installation as possible.
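For illustration, the commands behind those steps (workspace name hypothetical):

# CI pipeline, before any image build - validates the lockfile:
yarn install --immutable
# Dockerfile build stage - everything the workspace needs to build:
yarn workspaces focus @acme/backend
# Dockerfile runtime stage - runtime dependencies only:
yarn workspaces focus @acme/backend --production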
