Commit wip3

Tobi-De committed Sep 3, 2024
1 parent 75b4b14 commit 161f5c2

Showing 2 changed files with 101 additions and 60 deletions.

159 changes: 100 additions & 59 deletions docs/the_cli/start_project/deploy.rst

Common Setup
------------

Let's first discuss the few common points between the two options.

Entry point
************

There is a ``__main__.py`` file inside your project directory, next to your ``settings.py`` file. This is the main entry point of your app: any command in the docker container is run through it,
and it also serves as the CLI entry point of the binary that is built for your app and used for the VPS deployment.
It contains a function to set up the project (run migrations, ``createsuperuser``, etc., anything that should run once before the app starts serving requests), the function that runs the WSGI server (gunicorn at the moment),
and the ``qcluster`` command for background tasks.

.. admonition:: Example of the __main__.py file
:class: note dropdown

.. literalinclude:: ../../../demo/demo/__main__.py


Static files
************
This is what I personally use. To make use of this instead of the filesystem storage, set the following environment variables:

.. code-block:: bash

   AWS_STORAGE_BUCKET_NAME=your_bucket_name
   AWS_S3_REGION_NAME=your_region_name

This `guide <https://testdriven.io/blog/storing-django-static-and-media-files-on-amazon-s3/>`_ is an excellent resource to help you set up an s3 bucket for your media files.
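
For reference, these variables feed an S3 storage backend such as the one from `django-storages <https://django-storages.readthedocs.io/en/latest/>`_. Here is a minimal sketch of the corresponding Django configuration (illustrative only, assuming Django 4.2+ and ``django-storages[s3]``; the generated settings may already wire this up from the environment for you):

.. code-block:: python

   # settings.py -- minimal sketch, not the exact generated settings
   import os

   STORAGES = {
       # media files go to the s3 bucket
       "default": {"BACKEND": "storages.backends.s3boto3.S3Boto3Storage"},
       # keep the default static files storage
       "staticfiles": {"BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage"},
   }
   AWS_STORAGE_BUCKET_NAME = os.environ["AWS_STORAGE_BUCKET_NAME"]
   AWS_S3_REGION_NAME = os.environ["AWS_S3_REGION_NAME"]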

.. warning::

   If you are using docker and the filesystem storage, make sure to add a volume to the container to persist the media files, and
   if you are using caprover, check out this `guide <https://gist.github.com/Tobi-De/7751c394570cbf0d7beb852304394046>`_ on how
   to serve the media files with nginx.

.. todo::

   Tutorial on creating a bucket on s3 and getting the access key and secret key.

Database
********

There is no specific setup done for the database; you deal with this however you want. If you go with a managed solution like `aiven <https://aiven.io/postgresql>`_, they usually provide a backup solution,
and if you go with a PaaS to host your project, they usually provide a database service with automatic backups.
If you are using postgresql in production, a simple solution is to use `django-db-backup <https://github.com/jazzband/django-dbbackup>`_ with ``django-q2`` for the task that automatically runs the backups; if you are using sqlite,
I recommend `django-litestream <https://github.com/Tobi-De/django-litestream>`_.
I've been riding the sqlite bandwagon myself recently, and for my personal projects my go-to has been ``sqlite + litestream``; I even have a branch for this on the `tailwind falco blueprint <https://github.com/Tobi-De/falco_tailwind/pull/67>`_.
This will eventually come to the default falco setup.
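
If you go the ``django-db-backup`` + ``django-q2`` route, the scheduled backup task could look something like this (a sketch assuming both packages are installed and configured; register it once, e.g. from a data migration or the shell):

.. code-block:: python

   from django_q.models import Schedule
   from django_q.tasks import schedule

   # run the dbbackup management command once a day via the django-q2 scheduler
   schedule(
       "django.core.management.call_command",
       "dbbackup",
       name="daily-db-backup",
       schedule_type=Schedule.DAILY,
   )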


Cache
*****

The cache backend is configured to use `diskcache <https://github.com/grantjenks/python-diskcache>`_.

   DiskCache is an Apache2 licensed disk and file backed cache library, written in pure-Python, and compatible with Django.

   -- diskcache github page

By default it's not enabled; all you have to do to enable it is set the ``CACHE_LOCATION`` environment variable to the path of the cache directory.

.. code-block:: bash
   :caption: Example

   CACHE_LOCATION=.diskcache

If you are running in docker, make sure to add a volume to the container to persist the cache files.
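
For reference, the matching Django cache configuration looks roughly like this (a sketch; the generated settings may differ slightly):

.. code-block:: python

   import os

   if cache_location := os.getenv("CACHE_LOCATION"):
       CACHES = {
           "default": {
               "BACKEND": "diskcache.DjangoCache",  # backend shipped with python-diskcache
               "LOCATION": cache_location,
           }
       }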

Sending Emails
**************


`django-anymail <https://anymail.dev/en/stable/>`_ is what the project uses for sending emails; it supports a lot of email providers.
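
Switching providers is mostly a settings change. For example, with Mailgun it might look like this (a sketch; check the anymail docs for your provider and install the matching extra, e.g. ``django-anymail[mailgun]``):

.. code-block:: python

   # settings.py -- illustrative sketch, values are placeholders
   import os

   EMAIL_BACKEND = "anymail.backends.mailgun.EmailBackend"
   ANYMAIL = {
       "MAILGUN_API_KEY": os.environ["MAILGUN_API_KEY"],
   }
   DEFAULT_FROM_EMAIL = "hello@example.com"  # placeholder address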


Environment Variables
*********************

VPS Stack
---------


When you push the built binary to a vps, you can use it as the example above shows if you move it to a folder on your PATH; just strip out the ``just run`` part.

.. code-block:: shell
   :caption: Example of pushing the binary to a vps

   curl -L -o /usr/local/bin/myjourney https://github.com/user/repo/releases/download/latest/myjourney-x86_64-linux
   chmod +x /usr/local/bin/myjourney
   myjourney # run at least once with no argument so that it can install itself
   myjourney self metadata # will print project name and version
.. todo:: Reminder for self

- merger
- merge this
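
Falco does not generate any service file for the binary, but a minimal systemd unit is one way to keep it running on the vps. A sketch only; the serve subcommand and the environment file path are placeholders, use whatever your ``__main__.py`` actually exposes to start gunicorn:

.. code-block:: ini

   # /etc/systemd/system/myjourney.service -- illustrative only
   [Unit]
   Description=myjourney web app
   After=network.target

   [Service]
   # replace <serve-command> with the subcommand your project exposes
   ExecStart=/usr/local/bin/myjourney <serve-command>
   Restart=on-failure
   EnvironmentFile=/etc/myjourney.env

   [Install]
   WantedBy=multi-user.target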

Docker Based
------------

The dockerfile is located at ``deploy/Dockerfile`` and there is no ``compose.yml`` file. The setup is a bit unorthodox and uses `s6-overlay <https://github.com/just-containers/s6-overlay>`_ to run everything needed for
the project in a single container.

.. admonition:: s6-overlay
:class: note dropdown

   `s6 <https://skarnet.org/software/s6/overview.html>`_ is an `init <https://wiki.archlinux.org/title/Init>`_ system, think of `systemd <https://en.wikipedia.org/wiki/Systemd>`_, and
   ``s6-overlay`` is a set of tools and utilities that make it easier to run ``s6`` in a container environment. A common linux tool people often use in the django ecosystem that could serve as a replacement, for example, is `supervisord <http://supervisord.org/>`_.

   The ``deploy/etc/s6-overlay`` folder contains all the ``s6`` configuration; it will be copied into the container in the ``/etc/s6-overlay`` directory. When the container starts, it will run a one-shot script (in the ``s6-overlay/scripts`` folder)
   that runs the setup function in the ``__main__.py`` file, then two long-running processes will be started, one for the gunicorn server and one for the django-q2 worker.
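
To make this more concrete, here is a rough sketch of what an s6-overlay v3 service tree can look like (the service names here are illustrative, the files generated in your project may differ):

.. code-block:: text

   etc/s6-overlay/s6-rc.d/
   ├── init-project/         # oneshot: runs the setup function (migrations, etc.)
   │   ├── type              # contains "oneshot"
   │   └── up                # command executed once at container start
   ├── gunicorn/             # longrun: the wsgi server
   │   ├── type              # contains "longrun"
   │   ├── run               # script that execs gunicorn
   │   └── dependencies.d/
   │       └── init-project  # start only after setup is done
   ├── worker/               # longrun: the django-q2 qcluster worker
   │   ├── type
   │   └── run
   └── user/
       └── contents.d/       # services enabled at container start
           ├── gunicorn
           └── worker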

The github action file at ``.github/workflows/cd.yml`` does the actual deployment. This action runs every time a new `git tag is pushed to the repository </the_cli/start_project/packages.html#project-versioning>`_.

CapRover
********

The part that is important for us in this section is the first job, ``deploy-to-caprover``.
Assuming you already have a caprover instance set up, all you have to do here is update your github repository with the correct credentials.
Here is an example of the content of the ``deploy-to-caprover`` job.

.. literalinclude:: ../../../demo/.github/workflows/cd.yml
:lines: 9-21
:language: yaml

This job uses the `build-docker-and-deploy-to-caprover <https://github.com/adamghill/build-docker-and-deploy-to-caprover>`_ action to deploy to caprover; check out its readme for further instructions, but essentially you need to add two secrets to your github repository:

``CAPROVER_SERVER_URL``: The url of your caprover server, for example ``https://caprover.example.com``

``CAPROVER_APP_TOKEN``: This can be generated on the ``deployment`` page of your caprover app, there should be an ``Enable App Token`` button.
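
If you prefer the command line, the github cli can add both secrets for you (the values below are placeholders):

.. code-block:: shell

   gh secret set CAPROVER_SERVER_URL --body "https://caprover.example.com"
   # paste the token generated on the caprover deployment page when prompted
   gh secret set CAPROVER_APP_TOKEN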

If you are deploying from a private repository, there are also `instructions <https://github.com/adamghill/build-docker-and-deploy-to-caprover?tab=readme-ov-file#unauthorized-error-message-on-caprover>`_ to allow caprover to pull from your private repository.

And that's basically it; if you are a caprover user, you know the rest of the drill.
Other PAAS
**********

If you are using a PAAS solution that supports docker, the first thing you need to do is update the ``.github/workflows/cd.yml`` action file to build your image and push it to a registry.
If you want to use the `github registry <https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry>`_, your new job might look something like
this, using `this action <https://github.com/docker/build-push-action>`_:

.. code-block:: yaml

   jobs:
     build-and-push-image:
       runs-on: ubuntu-latest
       steps:
         - name: Docker meta
           id: meta
           uses: docker/metadata-action@v5
           with:
             images: ghcr.io/tobi-de/reminders
             # generate Docker tags based on the following events/attributes
             tags: |
               type=ref,event=branch
               type=ref,event=pr
               type=semver,pattern={{version}}
               type=semver,pattern={{major}}.{{minor}}
               type=semver,pattern={{major}}
               type=sha
         - name: Set up QEMU
           uses: docker/setup-qemu-action@v3
         - name: Set up Docker Buildx
           uses: docker/setup-buildx-action@v3
         - name: Login to GHCR
           uses: docker/login-action@v3
           with:
             registry: ghcr.io
             username: ${{ github.repository_owner }}
             password: ${{ secrets.GITHUB_TOKEN }}
         - name: Build and push
           uses: docker/build-push-action@v6
           with:
             push: true
             tags: ${{ steps.meta.outputs.tags }}
             labels: ${{ steps.meta.outputs.labels }}

I put together the action above based on these examples; you can look them up to adjust the action to your needs:

- https://docs.docker.com/build/ci/github-actions/push-multi-registries/
- https://docs.docker.com/build/ci/github-actions/manage-tags-labels/

.. note::

   You can also build the image locally with ``just build-docker-image`` and then push it manually to the registry of your choice.

.. The ``deploy`` folder contains some files that are needed for deployment, mainly docker related. If Docker isn't part of your deployment plan, this directory can be safely removed.
.. However, you might want to retain the ``gunicorn.conf.py`` file inside that directory, which is a basic Gunicorn configuration file that could be useful regardless of your chosen deployment strategy.

.. The project comes with docker and s6-overlay configuration for deployment. All deployment related files are in the ``deploy`` folder.
.. s6-overlay is an init service, used for process supervision, meant for containers. It is built around the s6 system. For more details, check the dedicated guide on s6-overlay.
.. All you need to know is that the container produced by the image is meant to run your django project using gunicorn, with django-q2 for background tasks
.. and scheduling. For more details on django-q2, check out the guides on task queues and schedulers in django.
At this point the process is platform dependent, but usually you should be able to specify the image to pull from, and that should be it.
Eventually I might add more specific guides for some of the most popular PAAS solutions.
2 changes: 1 addition & 1 deletion docs/the_cli/start_project/packages.rst

The binary file that ``pyapp`` builds is a script that bootstraps itself the first time it is run, meaning it will create its own isolated virtual environment with **its own Python interpreter**.
It installs the project (your falco project is set up as a python package) and its dependencies. When the binary is built, either via the provided GitHub Action or the ``just`` recipe / command,
you also get a wheel file (the standard format for Python packages). If you publish that wheel file on PyPI, you can use the binary's ``self update`` command to update the project.

Let's assume you generated a project with the name ``myjourney``:
