Code for www.pepysdiary.com.

Pushing to `main` will run the commit through the GitHub Action in `.github/workflows/test.yml` to run the tests. If they pass, the commit will be deployed automatically to the website.
When changing the Python version, it will need to be changed in:

- `.github/workflows/test.yml`
- `.pre-commit-config.yaml`
- `.python-version` (for pyenv)
- `pyproject.toml` (ruff's `target-version`)
- `docker/web/Dockerfile`
For local development we use Docker. The live site is on an Ubuntu 22 VPS.
Copy `.env.dist` to `.env` and alter any necessary settings.
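For example (the same command is used in the VPS set-up later in this README):

$ cp .env.dist .env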
Open your `/etc/hosts` file in a terminal window by doing:
$ sudo vim /etc/hosts
Enter your computer's password. Then add this line somewhere in the file and save:
127.0.0.1 www.pepysdiary.test
Download, install and run Docker Desktop.
In the same directory as this README, build the containers:
$ docker compose build
Then start up the web, assets and database containers:
$ docker compose up
There are four containers:
- `pepysdiary_web`: the webserver
- `pepysdiary_db`: the postgres server
- `pepysdiary_assets`: the front-end assets builder
- `pepysdiary_redis`: the redis server (for optional caching)
All the repository's code is mirrored in the web and assets containers in the `/code/` directory.
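To confirm, you can list that directory from your host machine using the `./run` script described below:

$ ./run cmd ls /code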
Once that's running, showing the logs, open another terminal window/tab.
There are two ways we can populate the database. First we'll create an empty one, and second we'll populate it with a dump of data from the live site.
The `build` step will create the database and run the initial Django migrations.
Then create a superuser:
$ ./run manage createsuperuser
(See below for more info on the `./run` script.)
Log into postgres and drop the current (empty) database:
$ ./run psql -d postgres
# DROP DATABASE pepys WITH (FORCE);
# CREATE DATABASE pepys;
# GRANT ALL PRIVILEGES ON DATABASE pepys TO pepys;
# \q
On the VPS, create a backup file of the live site's database:
$ pg_dump -h localhost -U username -d dbname -Fc -b -f ~/dump.sql
Then scp it to your local machine:
$ scp username@your.vps.domain.com:/home/username/dump.sql .
Then copy the dump into Docker and load it into the database:
$ docker cp dump.sql pepys_db:/tmp/
$ docker exec -i pepys_db pg_restore -h localhost -U pepysdiary -d pepysdiary -j 2 /tmp/dump.sql
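To sanity-check the import, you can count the rows in one of the tables (`diary_entry` is just an example, and this assumes `./run psql` passes extra arguments through to psql, as described below):

$ ./run psql -c "SELECT COUNT(*) FROM diary_entry;"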
Then go to http://www.pepysdiary.test:8000 and you should see the site.
Log in to the Django Admin, go to the "Sites" section and change the one Site's Domain Name to `www.pepysdiary.test:8000` and the Display Name to "The Diary of Samuel Pepys", if it's not already.
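Alternatively, a sketch of making the same change from the Django shell (assuming the default `SITE_ID` of `1`):

$ ./run manage shell
>>> from django.contrib.sites.models import Site
>>> Site.objects.filter(id=1).update(domain="www.pepysdiary.test:8000", name="The Diary of Samuel Pepys")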
Whenever you come back to start work you need to start the containers up again by doing this from the project directory:
$ docker compose up
When you want to stop the server, then this from the same directory:
$ docker compose down
You can check if anything's running by doing this, which will list any Docker processes:
$ docker ps
See details on the `./run` script below for running things inside the containers.
Adding and removing Python dependencies is most easily done with a virtual environment on your host machine. This also means you can use that environment easily in VS Code.
Set up and activate a virtual environment on your host machine using virtualenv:
$ virtualenv --prompt . venv
$ source venv/bin/activate
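pip-tools itself needs to be installed in the new environment before the commands below will work, the same step as in the VPS set-up later in this README:

(venv) $ python -m pip install pip-tools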
We use pip-tools to generate `requirements.txt` from `requirements.in`, and install the dependencies. Install the current dependencies into the activated virtual environment:
(venv) $ python -m pip install -r requirements.txt
To add a new dependency, add it to `requirements.in` and then regenerate `requirements.txt`:
(venv) $ pip-compile --upgrade --quiet --generate-hashes
Then do the `pip install` step above again to install it.
To remove a dependency, delete it from `requirements.in`, run that same `pip-compile` command, and then:
(venv) $ python -m pip uninstall <module-name>
To update the Python dependencies in the Docker container, this should work:
$ ./run pipsync
If it doesn't, you may need to rebuild the containers with `docker compose build` instead.
Install pre-commit so that the hooks in `.pre-commit-config.yaml` run automatically whenever `git commit` is done.
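A minimal sketch of that, assuming you install pre-commit on your host machine:

$ python -m pip install pre-commit
$ pre-commit install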
Gulp is used to build the final CSS and JS file(s), and watches for changes in the `pepysdiary_assets` container. Node packages are installed and upgraded using `yarn` (see `./run` below).
The `./run` script makes it easier to run things inside the Docker containers. Running it with no arguments will list the commands available, which are outlined below:
$ ./run
Run any command in the web container. e.g.
$ ./run cmd ls -al
Starts a shell session in the web container.
Run the Django `manage.py` file with any of the usual commands, inside the web container. e.g.
$ ./run manage makemigrations
The development environment has django-extensions installed so you can use its `shell_plus` and other commands. e.g.:
$ ./run manage shell_plus
$ ./run manage show_urls
Runs all the Django tests. If it complains, you might need to do `./run manage collectstatic` first.
Run a folder, file, or class of tests, or a single test, something like this:
$ ./run tests tests.core
$ ./run tests tests.core.test_views
$ ./run tests tests.core.test_views.HomeViewTestCase
$ ./run tests tests.core.test_views.HomeViewTestCase.test_response_200
Run all the tests with coverage. The HTML report files will be at `htmlcov/index.html`.
Run `ruff check .` over the code to lint the Python files:
$ ./run ruff
$ ./run ruff --fix
Connects to PostgreSQL with psql. Add any required arguments on the end. Uses the site's database unless you specify another like:
$ ./run psql -d databasename
Update the installed Python dependencies according to the contents of `requirements.txt`.
List any installed Node packages (used for building front end assets) that are outdated.
Update any installed Node packages that are outdated.
This is what I did to create a version of the database without personal information. After exporting the live database and importing it into the local Docker database:
$ ./run psql
pepysdiary=# TRUNCATE django_comments, django_comment_flags, auth_user, annotations_annotation, membership_person;
pepysdiary=# UPDATE diary_entry SET comment_count=0;
pepysdiary=# UPDATE encyclopedia_topic SET comment_count=0;
pepysdiary=# UPDATE indepth_article SET comment_count=0;
pepysdiary=# UPDATE letters_letter SET comment_count=0;
pepysdiary=# UPDATE news_post SET comment_count=0;
pepysdiary=# \q
$ ./run manage createsuperuser
Then logged into Django Admin and changed the Site "Domain name" to `www.pepysdiary.test:8000`.
Then dumped that modified database:
$ docker exec -i pepys_db pg_dump pepysdiary -U pepysdiary -h localhost | gzip > pepys_dump.gz
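A sketch of loading that gzipped dump back into a fresh local database, assuming the same container, user and database names as above:

$ gunzip < pepys_dump.gz | docker exec -i pepys_db psql -h localhost -U pepysdiary -d pepysdiary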
The complete set-up of an Ubuntu VPS is beyond the scope of this README. Requirements:
- Local postgresql
- Local redis (for caching)
- pipx, virtualenv and pyenv
- gunicorn
- nginx
- systemd
- cron
username$ sudo su - postgres
postgres$ createuser --interactive -P
postgres$ createdb --owner pepys pepys
postgres$ exit
username$ sudo mkdir -p /webapps/pepys/
username$ sudo chown username:username /webapps/pepys/
username$ mkdir /webapps/pepys/logs/
username$ cd /webapps/pepys/
username$ git clone git@github.com:philgyford/pepysdiary.git code
username$ pyenv install --list # All those available to install
username$ pyenv versions # All those already installed and available
username$ pyenv install 3.10.8 # Whatever version we're using
Make the virtual environment and install pip-tools:
username$ cd /webapps/pepys/code
username$ virtualenv --prompt pepys venv -p $(pyenv which python)
username$ source venv/bin/activate
(pepys) username$ python -m pip install pip-tools
Install dependencies from `requirements.txt`:
(pepys) username$ pip-sync
(pepys) username$ cp .env.dist .env
Then fill it out as required.
Either do `./manage.py migrate` and `./manage.py createsuperuser` to create a new database, or import an existing database dump.
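For the import option, a sketch assuming a custom-format dump made with `pg_dump -Fc` as shown earlier, and the `pepys` database and user created above (the path is hypothetical):

(pepys) username$ pg_restore -h localhost -U pepys -d pepys --no-owner ~/dump.sql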
Symlink the files in this repo to the correct location for systemd:
username$ sudo ln -s /webapps/pepys/code/conf/systemd_gunicorn.socket /etc/systemd/system/gunicorn_pepys.socket
username$ sudo ln -s /webapps/pepys/code/conf/systemd_gunicorn.service /etc/systemd/system/gunicorn_pepys.service
Start the socket:
username$ sudo systemctl start gunicorn_pepys.socket
username$ sudo systemctl enable gunicorn_pepys.socket
Check the socket status:
username$ sudo systemctl status gunicorn_pepys.socket
Start the service:
username$ sudo systemctl start gunicorn_pepys
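As with the socket, you can check the service's status:

username$ sudo systemctl status gunicorn_pepys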
Symlink the file in this repo to the correct location:
username$ sudo ln -s /webapps/pepys/code/conf/nginx.conf /etc/nginx/sites-available/pepys
Enable this site:
username$ sudo ln -s /etc/nginx/sites-available/pepys /etc/nginx/sites-enabled/pepys
Remove the default site if it hasn't been already:
username$ sudo rm /etc/nginx/sites-enabled/default
Check configuration before (re)starting nginx:
username$ sudo nginx -t
Start nginx:
username$ sudo service nginx start
username$ crontab -e
Add this:
# pepys - fetch some content from Wikipedia
10 3,4,5,6 * * * /webapps/pepys/code/venv/bin/python /webapps/pepys/code/manage.py fetch_wikipedia --num=30 > /dev/null 2>&1
- Allow service restarts without a password, so that GitHub Actions autodeploy works (see the sketch after this list)
- Set up Let's Encrypt for the domain
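For the first of those, one approach is a sudoers entry; a rough sketch, assuming systemctl lives at `/usr/bin/systemctl` (check with `which systemctl`) and `username` is the deploy user:

username$ sudo visudo

Then add a line like:

username ALL=NOPASSWD: /usr/bin/systemctl restart gunicorn_pepys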
We're using Bootstrap Sass to generate a custom set of Bootstrap CSS. Comment/uncomment lines in `assets/sass/_bootstrap_custom.scss` to change which parts of Bootstrap's CSS are included in the generated CSS file.
### Bootstrap's JavaScript
We must manually download a custom version of Bootstrap's JavaScript file.
Instead of steps 2 and 3 below you can hopefully upload the `assets/js/vendor/bootstrap_config.json` file.

1. Go to Bootstrap's online customizer.
2. Toggle off all of the checkboxes.
3. Under the "jQuery plugins" section check the boxes next to these plugins:
   - Linked to components
     - Alert dismissal
     - Dropdowns
     - Tooltips
     - Popovers
     - Togglable tabs
   - Magic
     - Collapse
4. Scroll to the bottom of the page and click the "Compile and Download" button.
5. Copy the two `.js` files into `assets/js/vendor/`, replacing the existing files.
## Wikipedia content
To fetch content for all Encyclopedia Topics which have matching Wikipedia pages, run this:
$ ./manage.py fetch_wikipedia --all --verbosity=2
It might take some time. See `encyclopedia/management/commands/fetch_wikipedia.py` for more options.
Whether in local dev or Heroku, we need an S3 bucket to store Media files in (Static files are served using Whitenoise).
- Go to the IAM service, Users, and 'Add User'.
- Enter a name and check 'Programmatic access'.
- 'Attach existing policies directly', and select 'AmazonS3FullAccess'.
- Create user.
- Save the Access key and Secret key.
- On the list of Users, click the user you just made and note the User ARN.
- Go to the S3 service and 'Create Bucket'. Name it, select the region, and click through to create the bucket.
- Click the bucket just created and then the 'Permissions' tab. Add this policy, replacing `BUCKET-NAME` and `USER-ARN` with yours:
{
"Statement": [
{
"Sid": "PublicReadForGetBucketObjects",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": ["s3:GetObject"],
"Resource": ["arn:aws:s3:::BUCKET-NAME/*"]
},
{
"Action": "s3:*",
"Effect": "Allow",
"Resource": ["arn:aws:s3:::BUCKET-NAME", "arn:aws:s3:::BUCKET-NAME/*"],
"Principal": {
"AWS": ["USER-ARN"]
}
}
]
}
- Click on 'CORS configuration' and add this:
<CORSConfiguration>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
- Upload all the files to the bucket in the required location.
- Update the server's environment variables for `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` and `AWS_STORAGE_BUCKET_NAME`.
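For local development these can go in the `.env` file mentioned at the top of this README; the values here are placeholders:

AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_STORAGE_BUCKET_NAME=your-bucket-name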