
Switch containers to unprivileged service user #357

Merged
merged 8 commits into almarklein:main on Aug 19, 2023

Conversation

danielhass
Contributor

Dear @almarklein, I just started using your awesome timetagger app for my personal time tracking needs. Great project, I really like it!

I noticed that the images published to ghcr, and also the other container variants, run as the root user with a uid and gid of 0.

In order to make container escapes harder and follow security best practices, I introduced an unprivileged service user named timetagger with a uid and gid of 1000 to the repo Dockerfile.

This might be considered a breaking change, as existing self-hosted installations will probably need to alter their persistent data in order to use the unprivileged variant.

Before going forward with this change, possibly also for the other container variants, I wanted to get your feedback first on whether this contribution is something you are interested in, before spending more time on it.

Changes included so far:

  • switched to an unprivileged service user named timetagger with a uid and gid of 1000
  • moved the application source from /root to /opt/timetagger
  • pip install now runs as the service user too, not as root anymore
  • removed the itemdb version constraint, as versions older than 1.1.1 would not be installed by default
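The user and path changes above could look roughly like this in the Dockerfile (a sketch, not the actual diff; the base image, uid, and paths are illustrative assumptions):

```Dockerfile
FROM python:3.11-slim

# Create an unprivileged service user (uid/gid 1000 as described above).
RUN groupadd --gid 1000 timetagger \
    && useradd --uid 1000 --gid 1000 --create-home timetagger

# Application lives under /opt/timetagger instead of /root.
WORKDIR /opt/timetagger
COPY --chown=timetagger:timetagger . .

# Drop privileges before installing, so pip also runs as the service user.
USER timetagger
RUN pip install --no-cache-dir --user -r requirements.txt

CMD ["python", "-m", "timetagger"]
```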

Looking forward to your feedback!

@almarklein
Owner

Hi! I think these changes are great; however, I am a bit worried about suddenly breaking users' apps. I think a good approach could be to introduce a new package alongside the current one. Then we can nudge people to use the new image, and eventually deprecate the old one. But maybe there is another approach?


@danielhass
Contributor Author

@almarklein thanks for the fast feedback. I totally get your concerns regarding breaking users' existing installations.

During the development of the unprivileged sample I already thought in the same direction as your proposal above. One easy step would be to release the unprivileged container images alongside the existing ones as an opt-in option for security-conscious users.

Afterwards, I currently see two options to make the default installation more secure by default:

  • either the unprivileged containers could be introduced with a new major version, requiring users to change their persistent file permissions beforehand
  • another option would be to introduce a custom entrypoint script that ships with the privileged version for a few releases. It would check the permissions of the persisted application data, migrate it automatically to the service-user ownership during application start-up, and then start timetagger as the unprivileged user. After a few releases, this entrypoint script would be replaced by a simple startup check that complains about wrong file permissions during start-up and maybe outputs a useful docs link on how to fix the permissions manually. At this point the default image could be moved to the unprivileged version.

The second, "graceful" automatic migration approach should (at least in theory) provide a zero-effort way to onboard existing installations to the unprivileged variant over time.
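A minimal sketch of such a migrating entrypoint could look like this (the uid, data path, and the use of a privilege-dropping helper like su-exec or gosu are all assumptions, not part of the actual PR):

```shell
#!/bin/sh
# entrypoint.sh (sketch): runs as root, fixes data ownership once,
# then starts timetagger as the unprivileged service user.

TT_UID="${TT_UID:-1000}"                      # assumed service-user uid
TT_DATA="${TT_DATA:-/opt/timetagger/_data}"   # assumed data directory

owner_uid() {
    # Print the numeric uid that owns the given path (GNU stat).
    stat -c '%u' "$1"
}

migrate_ownership() {
    # Chown the data dir to the service user, but only if needed.
    dir="$1"
    if [ "$(owner_uid "$dir")" != "$TT_UID" ]; then
        echo "Migrating ownership of $dir to uid $TT_UID"
        chown -R "$TT_UID:$TT_UID" "$dir"
    fi
}

# Only act as a real entrypoint when asked to, so the helper
# functions above can also be exercised in isolation.
if [ "${1:-}" = "--entrypoint" ]; then
    migrate_ownership "$TT_DATA"
    exec su-exec timetagger python -m timetagger  # su-exec is an assumption
fi
```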

@almarklein
Owner

One easy step would be to release the unprivileged container images alongside the existing ones as an opt-in option for security-conscious users.

Yeah, let's start there 👍

@danielhass
Contributor Author

danielhass commented May 7, 2023

@almarklein I started with my work on the repo variant of the project's Dockerfiles.

In 1a0d9e6 I split the repo Dockerfile into the "old" and a new "nonroot" variant. I also removed the pip install that was carried out directly in the Dockerfile, as later on a pip install is run against the project's requirements.txt, and many of the dependencies were duplicated there. A quick smoke test on my side looked good. However, can you maybe help me out here and tell me why these "extra dependencies" were added in the first place? Are they still needed?

b05fe86 includes the adaptation of the CI to build the default and the nonroot variant. You can view the results on my fork here.

The -nonroot suffix on the container images is more of a first draft. Maybe you have other/better ideas on how to distinguish the two variants?


After an OK from your side on these changes I can also include some docs that cover the two variants.

@almarklein
Owner

However, can you maybe help me out here and tell me why these "extra dependencies" were added in the first place? Are they still needed?

uvloop and httptools are optional dependencies to make the server faster. The rest is an artifact of how I manage the Dockerfiles for timetagger.app: I have a base Docker image with all dependencies, and on a deploy I only install the latest timetagger into it. That way I don't get surprises from dependencies getting a new version and breaking stuff. For the Dockerfile here we can drop the duplication, but I'd keep uvicorn, uvloop, and httptools for speed.
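With the duplication dropped, the explicit install in the Dockerfile might reduce to a single line for the speed-ups (a sketch based on the comment above, not the actual file):

```Dockerfile
# Optional speed-ups for the uvicorn server; all other
# dependencies come from requirements.txt.
RUN pip install --no-cache-dir uvicorn uvloop httptools
```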

b05fe86 includes the adaptation of the CI to build the default and the nonroot variant.

Could you split the build up at a higher level, so that instead of one docker build, we have docker-root and docker-nonroot? Now there is one build that creates the image, uploads it, then creates the second image using the same name, and uploads that. That feels a bit fragile. Building each in a separate job duplicates some lines, but it's easier to manage, I think.
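Split at the job level, the workflow could look roughly like this (a sketch; the job names, Dockerfile paths, and image tags are illustrative assumptions):

```yaml
jobs:
  docker-root:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build root image
        run: >
          docker build -f deploy/Dockerfile
          -t ghcr.io/almarklein/timetagger:latest .

  docker-nonroot:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build nonroot image
        run: >
          docker build -f deploy/nonroot.Dockerfile
          -t ghcr.io/almarklein/timetagger:latest-nonroot .
```

Since the two jobs are independent, a failure in one variant no longer blocks or corrupts the other build.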

The -nonroot suffix on the container images is more of a first draft. Maybe you have other/better ideas on how to distinguish the two variants?

That sounds fine 👍

@almarklein
Owner

Sorry for the late reply, btw :)

@danielhass
Contributor Author

@almarklein no worries about the late answer. It's open source, and I understand that maintainers also have other things to do ;)

I will try to incorporate your suggestions over the next couple of days!

@danielhass
Contributor Author

@almarklein - I have reached a state that is ready for another round of review.

Changes included:

  • split the nonroot and root container image variant build into two separate CI jobs
  • refactored the Dockerfiles based on the provided feedback
  • incorporated the arm64 build that was introduced on main

Let me know what you think! If the technical side is fine I can start on working on some docs for these changes.

@almarklein
Owner

@danielhass Looks perfect!

There was a merge conflict with the main branch. It was easy enough to resolve, but you will have to pull that last commit.

@almarklein
Owner

If the technical side is fine I can start on working on some docs for these changes.

Just a gentle ping that the technical side is fine now 👍

@danielhass
Contributor Author

@almarklein sorry for my late reply! Thanks for resolving the merge conflicts.

Regarding the docs: I didn't find any mention of the containerized deployment in the actual docs. However, it is mentioned in the project README. What do you think about the following changes:

  • including the two available image variants in the project README
  • adding a second docker-compose.yml that uses the non-root variant
  • I was also thinking about providing the few steps needed to migrate an existing compose-based installation from the root to the non-root variant. Where do you think this would fit? Also in the README?
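The second compose file could be as small as this (a sketch; the image tag, port mapping, and volume path are assumptions, not the actual file):

```yaml
# docker-compose.nonroot.yml (sketch)
services:
  timetagger:
    image: ghcr.io/almarklein/timetagger:latest-nonroot
    ports:
      - "8080:80"
    volumes:
      # Must be owned by uid/gid 1000 on the host for the
      # unprivileged service user to be able to write to it.
      - ./_timetagger:/opt/timetagger/_timetagger
```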

@almarklein
Owner

@almarklein sorry for my late reply!

No problem at all. I just noticed that this is still open.

I didn't find any mention of the containerized deployment in the actual docs.

Good point. Actually, since this part is good as it is, I'll merge it as-is. You can then add a new PR for better documentation on using Docker.

@almarklein almarklein merged commit 69aedda into almarklein:main Aug 19, 2023
9 checks passed
@almarklein
Owner

What do you think about the following changes:

  • including the two available image variants in the project README
  • adding a second docker-compose.yml that uses the non-root variant
  • I was also thinking about providing the few steps needed to migrate an existing compose-based installation from the root to the non-root variant. Where do you think this would fit? Also in the README?

Yes to all, I suppose. I reckon that the text will become much more than what we currently have in the readme, so perhaps we can add it as a new page to the docs and refer to it from the readme?
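For the migration page, the steps for an existing compose installation would boil down to something like this (a sketch; the data path and uid are assumptions, and migrate_data is a hypothetical helper, not part of the project):

```shell
# Stop the stack, hand the persisted data over to the service user,
# switch the image tag, and start again.

migrate_data() {
    # Chown the persisted data to the (assumed) service-user uid.
    data_dir="$1"
    target_uid="${2:-1000}"
    chown -R "$target_uid" "$data_dir"
    echo "ok: $data_dir now owned by uid $target_uid"
}

# Typical usage (commented out here, since it needs Docker and root):
#   docker compose down
#   migrate_data ./_timetagger 1000
#   # edit docker-compose.yml: image tag -> :latest-nonroot
#   docker compose up -d
```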
