This is the JustFix Tenant Platform.
In addition to this README, please feel free to consult the project wiki, which contains details on the project's principles and architecture, development tips, and more.
Note: It's highly recommended you follow the Developing with Docker instructions, as it makes development much easier. But if you'd really rather set everything up without Docker, read on!
You'll need Python 3.8.2 and pipenv, as well as Node 12, yarn, and Git Large File Storage (LFS). You will also need to set up Postgres version 10 or later, and it will need the PostGIS extension installed.
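If you'd like to sanity-check your toolchain before proceeding, the usual version flags apply (the versions shown in comments are illustrative):

```
python --version   # should report Python 3.8.2
node --version     # should report v12.x
yarn --version
git lfs version
```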
If you didn't have Git LFS installed before cloning the repository, you can obtain the repository's large files by running `git lfs pull`.
First create an environment file and optionally edit it as you see fit:
```
cp .justfix-env.sample .justfix-env
```
Since you're not using Docker, you will want to set `DATABASE_URL` to point at your Postgres instance, using the database URL schema.
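For example, your `.justfix-env` might contain a line like the following (the credentials and database name here are hypothetical; the `postgis://` scheme is used because the database needs PostGIS):

```
DATABASE_URL=postgis://myuser:mypassword@localhost:5432/mydatabase
```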
Then set up the front-end and configure it to continuously re-build itself as you change the source code:
```
yarn
yarn start
```
Then, in a separate terminal, you'll want to instantiate your Python virtual environment and enter it:
```
pipenv install --dev --python 3.8
pipenv shell
```
(Note that if you're on Windows and have `bash`, you might want to run `pipenv run bash` instead of `pipenv shell`, to avoid a bug whereby command-line history doesn't work with `cmd.exe`.)
Then start the app:
```
python manage.py migrate
python manage.py runserver
```
Then visit http://localhost:8000/ in your browser.
You'll want to create an admin user account to access the app's Admin Site. Django provides this functionality out of the box, so from the repository root, run the command for creating a superuser:
```
python manage.py createsuperuser
```
Follow the prompts in your terminal to set up the account. Once created, visit http://localhost:8000/admin and log in with your new credentials to access the Admin Site.
Some of this project's dependencies are cumbersome to install on some platforms, so they're not installed by default.
However, they are present in the Docker development environment (described below), and they are required to develop some functionality, as well as for production deployment. They can be installed via:
```
pipenv run pip install -r requirements.production.txt
```
These dependencies are described below.
WeasyPrint is used for PDF generation. If it's not installed during development, then any PDF-related functionality will fail.
Instructions for installing it can be found on the WeasyPrint installation docs.
To run the back-end Python/Django tests, use:
```
pytest
```
To run the front-end Node/TypeScript tests, use:
```
yarn test
```
You can also use `yarn test:watch` to have Jest continuously watch the front-end tests for changes and re-run them as needed.
We use Prettier to automatically format some of our non-Python code. Before committing or pushing to GitHub, you may want to run the following to ensure that any files you've changed are properly formatted:
```
yarn prettier:fix
```
Note that if you don't run this (or use some kind of editor plug-in) before pushing to GitHub, continuous integration will fail.
Black is a formatting tool similar to Prettier, but for Python code.
Before committing or pushing to GitHub, you may want to run the following to ensure that any files you've changed are properly formatted:
```
black .
```
Note that if you don't run this (or use some kind of editor plug-in) before pushing to GitHub, continuous integration will fail.
For help on environment variables related to the Django app, run:
```
python manage.py envhelp
```
Alternatively, you can examine `project/justfix_environment.py`.
For the Node front-end:
- `NODE_ENV` can be set to `production` for production, or any other value for development.
- See `frontend/webpack/webpack-defined-globals.d.ts` for more values.
Some data that is shared between the front-end and back-end is in the `common-data/` directory. The back-end generally reads this data in JSON format, while the front-end reads a TypeScript file that is generated from the JSON.
A utility called `commondatabuilder` is used to generate the TypeScript file. To execute it, run:

```
node commondatabuilder.js
```
You will need to run this whenever you make any changes to the underlying JSON files.
If you need to add a new common data file, see `common-data/config.ts`, which defines how the conversion from JSON to TypeScript occurs.
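For a rough sense of the format, many common data files are simple lists of value/label choices. The values below are hypothetical, so consult the existing files in `common-data/` and `common-data/config.ts` for the real structure:

```json
[
  ["BRONX", "Bronx"],
  ["BROOKLYN", "Brooklyn"],
  ["MANHATTAN", "Manhattan"]
]
```

The generated TypeScript file would then expose these choices to the front-end in a type-safe form.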
The communication between server and client occurs via GraphQL and has been structured for type safety. This means that we'll get notified if there's ever a mismatch between the server's schema and the queries the client is generating.
To manually experiment with GraphQL queries, use the interactive in-browser environment called GraphiQL, which is built into the development server. It can be accessed via the "Developer" menu at the top-right of almost any page on the site, or directly at http://localhost:8000/graphiql.
The server uses Graphene-Django for its GraphQL needs. It also uses a custom "schema registry" to make it easier to define new fields and mutations on the schema; see `project/schema_registry.py` for documentation on how to use it.
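For orientation, defining a field in plain Graphene-Django looks roughly like the following minimal sketch (the `hello` field and `thing` argument are hypothetical names chosen to mirror the `SimpleQuery` example below; the actual registration step is handled by the project's schema registry rather than shown here):

```python
import graphene

class SimpleQueries(graphene.ObjectType):
    # A string field that accepts an optional "thing" argument.
    hello = graphene.String(thing=graphene.String())

    def resolve_hello(self, info, thing):
        # Resolvers receive parsed arguments and return plain Python values.
        return f"Hello, {thing}!"
```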
The JSON representation of the schema is in `schema.json` and is automatically regenerated by the development server, though developers can manually regenerate it via `python manage.py graphql_schema` if needed.
Client-side GraphQL code is generated as follows:

- Raw queries are in `frontend/lib/queries/` and given a `.graphql` extension. Currently, they must consist of one query, mutation, or fragment that has the same name as the base name of the file. For instance, if the file is called `SimpleQuery.graphql`, then the contained query should be called `SimpleQuery`, e.g.:

  ```graphql
  query SimpleQuery($thing: String) {
    hello(thing: $thing)
  }
  ```

- Some GraphQL queries are automatically generated based on the configuration in `frontend/lib/queries/autogen-config.toml`.
- The querybuilder, which runs as part of `yarn start`, will notice changes to any of these raw queries, `autogen-config.toml`, or the server's `schema.json`, and do the following:
  - It automatically generates any GraphQL queries that need generating.
  - It runs Apollo Code Generation to validate the raw queries against the server's GraphQL schema and create TypeScript interfaces for them.
  - For queries and mutations, it adds a function to the TypeScript interfaces that is responsible for performing the query in a type-safe way.
  - The resultant TypeScript interfaces and/or functions are written to a file that is created next to the original `.graphql` file (e.g., `SimpleQuery.ts`).

  If the developer prefers not to rely on `yarn start` to automatically rebuild queries for them, they can also manually run `node querybuilder.js`.
- At this point the developer can import the final TS file and use the query.
You can alternatively develop the app via Docker, which means you don't have to install any dependencies. However, Docker takes a bit of time to learn how to use.
After you install Docker on your machine, open up Settings (gear icon) > Resources > Advanced and make sure you give it at least 8 GB of memory to play with. If you don't, you might get an out-of-memory error when attempting to build and/or run the Docker image.
You'll also need Git Large File Storage (LFS). On a Mac with Homebrew, that's:

```
brew install git-lfs
git lfs install
```
If you didn't have Git LFS installed before cloning the repository, you can obtain the repository's large files by running `git lfs pull`.
As with the non-Docker setup, you'll first want to create an environment file and optionally edit it as you see fit:
```
cp .justfix-env.sample .justfix-env
```
Then, to set everything up, run:
```
bash docker-update.sh
```
Then run:
```
docker-compose up
```
This will start up all services; you can then visit http://localhost:8000/ to view the app.
Whenever you update your repository via e.g. `git pull` or `git checkout`, you should update your containers by running:
```
bash docker-update.sh
```
If your Docker setup appears to be in an irredeemable state and `bash docker-update.sh` doesn't fix it--or if you just want to free up extra disk space used up by the app--you can destroy everything by running:
```
docker-compose down -v
```
Note that this may delete all the data in your instance's database.
At this point you can re-run `bash docker-update.sh` to set everything up again.
To access the app container, run:
```
docker-compose run app bash
```
This will run an interactive bash session inside the main app container. In this container, the `/tenants2` directory is mapped to the root of the repository on your host; you can run any command, like `python manage.py` or `pytest`, from there. In particular, this bash session is where you can create an admin user to access the app's Admin Site, as shown below.
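For example, creating the admin user from the earlier section can be done inside this session:

```
docker-compose run app bash
# Then, inside the container:
python manage.py createsuperuser
```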
Development, production, and our continuous integration pipeline (CircleCI) use a built image from the `Dockerfile` on Docker Hub as their base to ensure dev/prod parity.
Changes to the `Dockerfile` should be pretty infrequent, as it defines the lowest level of our application's software stack, such as its Linux distribution. However, changes do occasionally need to be made.
Whenever you change the `Dockerfile`, you will need to push the new version to Docker Hub and change the tag in a few files to correspond to the new version you've pushed.
To push your new version, you will need to:
1. Come up with a tag name, preferably one that isn't already taken. (While you can use an existing one, it's recommended that you create a new one so that other pull requests using the existing one don't break.) For the rest of these instructions we'll assume your new tag is called `0.1`.
2. Run `docker build -t justfixnyc/tenants2_base:0.1 .` to build the new image.
3. Run `docker push justfixnyc/tenants2_base:0.1` to push the new image to Docker Hub.
4. In `Dockerfile.web`, `docker-services.yml`, `.circleci/config.yml`, and `.devcontainer/Dockerfile`, edit the references to `justfixnyc/tenants2_base` to point to the new tag.
See the wiki section on Deployment.
The app uses the twelve-factor methodology, so deploying it should be relatively straightforward.
At the time of this writing, however, the app's runtime environment does need both Python and Node to execute properly, which could complicate matters.
A Python 3 script, `deploy.py`, is located in the repository's root directory and can assist with deployment. It has no dependencies other than Python 3.
It's possible to deploy to Heroku using their Container Registry and Runtime. To build and push the container to their registry, run:
```
python3 deploy.py heroku
```
You'll likely want to use Heroku Postgres as your database backend.
This project uses the PO file format to store most of its localization data in the `locales` directory.
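PO files are plain-text catalogs that pair each source string with its translation. A minimal entry looks like the following (the source-reference comment and Spanish translation are illustrative):

```po
#: frontend/lib/example.tsx:12
msgid "Hello, world!"
msgstr "¡Hola, mundo!"
```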
The back-end uses the Django translation framework for internationalization. To extract messages for localization, run:
```
yarn django:makemessages
```
Once `.po` files have been updated, the catalogs can be compiled with:
```
yarn django:compilemessages
```
The front-end uses Lingui for internationalization. To extract messages for localization, run:
```
yarn lingui:extract
```
Once `.po` files have been updated, the catalogs can be compiled to JS with:
```
yarn lingui:compile
```
When internationalizing a piece of code, it can be difficult to tell whether you've missed any strings, because our internationalization frameworks fall back to English when strings haven't been translated yet.
To compensate for this, we provide a way to "garble" message catalogs with nonsense localizations, which makes it easier to tell whether all our strings have been properly internationalized.
To activate the garbling, run:
```
yarn l10n:garble
```
Be careful about making commits while the message catalogs are garbled! Because garbling changes the actual `.po` files, and because those files are version-controlled, any commits you make while garbling is active may accidentally commit garbled message catalogs.
When you're done using the garbled codebase, you can un-garble the message catalogs by running:
```
yarn l10n:ungarble
```
The codebase has a number of optional integrations with third-party services and data sources. Run `python manage.py envhelp` for a listing of all environment variables related to them.
You can load all the NYCHA offices into the database via:
```
python manage.py loadnycha nycha/data/Block-and-Lot-Guide-08272018.csv
```
Once imported, any users from NYCHA who file a letter of complaint will automatically have their landlord address populated.
Note that the CSV loaded by this command was originally generated by the JustFixNYC/nycha-scraper tool. It can be re-used to create new CSV files that may be more up-to-date than the one in this repository.
The tenant assistance directory, known within the project as `findhelp`, needs shapefiles of New York City geographic regions to allow staff to define the catchment areas of tenant resources. These shapefiles can be loaded via the following command:
```
python manage.py loadfindhelpdata
```
The shapefile data is stored within the repository using Git LFS and has the following provenance:
- `findhelp/data/ZIP_CODE_040114` - https://data.cityofnewyork.us/Business/Zip-Code-Boundaries/i8iw-xf4u
- `findhelp/data/Borough-Boundaries.geojson` - https://data.cityofnewyork.us/City-Government/Borough-Boundaries/tqmj-j8zm
- `findhelp/data/Community-Districts.geojson` - https://data.cityofnewyork.us/City-Government/Community-Districts/yfnk-k7r4
- `findhelp/data/ZillowNeighborhoods-NY` - https://www.zillow.com/howto/api/neighborhood-boundaries.htm
- `findhelp/data/nys_counties.geojson` - http://gis.ny.gov/gisdata/inventories/details.cfm?DSID=927 (reprojected into the WGS 84 CRS and converted to GeoJSON via QGIS)
You can optionally integrate the app with Celery to ensure that some long-running tasks will not cause web requests to time out.
If you're using Docker, Celery isn't enabled by default. To enable it, you need to extend the default Docker Compose configuration with `docker-compose.celery.yml`. For details on this, see Docker's documentation on Multiple Compose files.
For example, to start up all services with Celery integration enabled, you can run:
```
docker-compose -f docker-compose.yml -f docker-compose.celery.yml up
```
The codebase can also serve an entirely different website, NoRent.org.
To view this alternate website, you'll need to either add a new Django Site model or modify the built-in default one to have a name that includes the text "NoRent" somewhere in it (the match is case-insensitive, so it can be "norent" or "NORENT", etc).
To do this:
- Edit your `/etc/hosts` file to map `localhost.norent` to `127.0.0.1`. Your file should have the following line:

  ```
  127.0.0.1 localhost.norent
  ```

- Add an additional Site model (in addition to the default one). You can do this by going to http://localhost:8000/admin/sites/site/ and clicking "add site". Set `domain` to `localhost.norent:8000` and set `name` to `norent`. It should look like this:

  | Domain name | Display name |
  | --- | --- |
  | localhost.norent:8000 | NoRent |
  | localhost.laletterbuilder:8000 | LaLetterBuilder |
This will allow you to access NoRent at http://localhost.norent:8000/ and LA Letter Builder at http://localhost.laletterbuilder:8000/.
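If you'd rather not click through the admin UI, a sketch like the following (run inside `python manage.py shell`) should create the same record via Django's standard sites API; the domain and name values mirror the table above:

```python
from django.contrib.sites.models import Site

# The domain must match the host and port you'll visit the site at, and
# the display name is matched by the regex in site_util.py.
Site.objects.get_or_create(domain="localhost.norent:8000", name="NoRent")
```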
In general, if you add a new Django Site model, you'll need to make sure it has a domain that matches whatever domain you're visiting the site at, or else the code won't be able to map your request to the new Site you added. The display name matters too: it will be matched by a regular expression in `site_util.py`, so make sure that regex will match your display name. Best practice is not to include spaces.