Service | Development | Staging | Production |
---|---|---|---|
Website | https://localhost:3040 | https://dev.podkrepi.bg | https://podkrepi.bg |
Rest API | https://localhost:5010/api/v1 | https://dev.podkrepi.bg/api/v1 | https://podkrepi.bg/api/v1 |
Swagger | https://localhost:5010/swagger | https://dev.podkrepi.bg/swagger | https://podkrepi.bg/swagger |
- API
- Database
- Workspace
Node.js 20 is required to run and develop the module. This section describes two ways of configuring a development environment.
The following prerequisites are required to run the project:
- Node.js v20
- Yarn v3.x
- Docker with Docker Compose (to easily run a local database instance)
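As a quick sanity check before starting, you can verify the installed major versions against these minimums. This is a hypothetical helper, not part of the project; the function names and version parsing are illustrative only:

```shell
#!/usr/bin/env sh
# Hypothetical prerequisite check; prints one line per tool and never aborts.
major_of() {
  # Strip a leading "v" and everything after the first dot: "v20.11.1" -> "20"
  echo "$1" | sed 's/^v//; s/\..*//'
}
require_min_major() {
  tool="$1"; min="$2"
  if command -v "$tool" >/dev/null 2>&1; then
    found="$("$tool" --version)"
    if [ "$(major_of "$found")" -ge "$min" ] 2>/dev/null; then
      echo "$tool OK ($found)"
    else
      echo "$tool too old or unparsable ($found, need major >= $min)"
    fi
  else
    echo "$tool not found (need major >= $min)"
  fi
}
require_min_major node 20   # Node.js v20
require_min_major yarn 3    # Yarn v3.x
```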
If you wish to keep your host clean, it is also possible to develop the module in a Docker container, using Visual Studio Code's Remote Containers extension. To initialize your dev container:
- Make sure you have the extension installed
- Open the folder of the module in VS Code
- Hit Ctrl/Cmd+Shift+P -> Remote-Containers: Reopen Folder in Container
git clone git@github.com:podkrepi-bg/api.git
cd api
yarn set version berry
yarn
Run the below command in your terminal:
docker compose up -d pg-db keycloak
This will start the following services in your local Docker:
- Local Postgres DB on the default port 5432 for your personal development
- Local Keycloak Identity server Admin UI on http://localhost:8180 with config coming from ./manifests/keycloak/config
  - Keycloak Local Admin User: admin with pass: admin
  - Podkrepi Local Admin users: coordinator@podkrepi.bg, reviewer@podkrepi.bg, admin@podkrepi.bg, all with pass: $ecurePa33
This is needed the first time only. We use Prisma as a database management and versioning tool; the following migration commands will initialize the database from the schema.prisma file. See the Database Development Guidelines below for further details.
# Create db schema
yarn prisma migrate deploy
# Generate the prisma clients
yarn prisma generate
# Seed initial test data
yarn prisma db seed
Copy the provided .env.example to .env:
cp .env.example .env
Note: To avoid modifying the original file, you can create .env.local and add overrides for the variables that are specific to your local environment. This approach allows you to keep your customizations separate from the default values.
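The override idea can be sketched in plain shell. This is a minimal illustration of precedence only; the actual loading is done by the app's configuration setup, and the variable values here are examples:

```shell
#!/usr/bin/env sh
# Minimal sketch: values from .env.local (if present) win over .env.
tmp=$(mktemp -d)
printf 'PORT=5010\nAPP_ENV=development\n' > "$tmp/.env"
printf 'PORT=5011\n' > "$tmp/.env.local"                  # local override for PORT only
set -a                                                     # export everything we source
. "$tmp/.env"                                              # defaults first
if [ -f "$tmp/.env.local" ]; then . "$tmp/.env.local"; fi  # then overrides
set +a
echo "PORT=$PORT APP_ENV=$APP_ENV"                         # PORT comes from .env.local
rm -rf "$tmp"
```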
Test that the initialization was done correctly:
yarn test
yarn dev
and the backend API server will listen on http://localhost:5010/api/v1
First build the images locally and start the containers. Then iterate on the code and changes will be picked up through the mounted folders.
docker-compose up --build -d
After starting your dev server visit:
To shut down the dev server use:
docker-compose down
Available at http://localhost:5010/swagger/
Run nx dep-graph to see a diagram of the dependencies of your projects.
We recommend using NestJS generators to create different NestJS components in a generic way.
yarn nest # will print all generators
Use the Nest resource generator to create all interfaces for CRUD operations around a new entity/resource
yarn nest generate resource [name]
Run yarn build-all to build the project. The build artifacts will be stored in the dist/ directory. Use the --prod flag for a production build.
Make sure you run auto-formatting before you commit your changes.
yarn format
For the database layer we're using Prisma. In order to get familiar with the concept please read What is Prisma? and watch some intro videos on YouTube.
The project already contains the database schema in the schema.prisma file, and initialization "seed" scripts for the initial data are in the db/seed folder.
Initialize the database using the commands below. They will initialize the database using the schema.prisma file, the migration scripts, and the db/seed scripts to insert the data records needed for the API to work.
yarn prisma migrate deploy
yarn prisma db seed
Prisma offers a nice Web Client to review and edit the database:
yarn prisma studio
There are two ways to work with the database:
- schema first - make changes in schema.prisma and update the database
- db first - make changes directly in the database and introspect to update the schema.prisma
After initializing the database, feel free to edit the schema.prisma file in the main folder. When done with the changes, execute the following to update the database:
yarn prisma migrate dev
The command will ask you to name your changes and will generate a migration script that will be saved in ./migrations folder.
Run the tests again with yarn test to ensure everything is OK.
If you don't want to keep small migrations for every change, then after finishing the work on your branch, delete the intermediate migration files manually and run yarn prisma migrate dev again to create a single feature-level migration.
Read more about Team development with Prisma Migrate here.
After initializing the database, open Prisma Studio or your favorite DB management IDE and feel free to make db changes. When done with the changes, execute:
yarn prisma db pull
This will read all changes from your db instance and update the schema.prisma file with the necessary translations.
Now that the schema file is updated, we need to regenerate the Prisma client used by our app by running:
yarn prisma generate
This process is called Prisma DB Introspection.
If things go wrong, there is a way to reset your database to its original state. This will delete the database and recreate it from the schema, also executing the seeding.
yarn prisma migrate reset
We use S3 for storing the uploaded files in buckets. The code expects the buckets to already exist on the prod or dev environment. We host S3 ourselves using Ceph (https://ceph.io/en/discover/technology/).
The buckets can be created using the s3cmd client (https://s3tools.org/s3cmd) or any other S3 client, together with the S3 secrets for the respective environment.
To configure S3cmd run
s3cmd --configure
All settings are self-descriptive; however, pay attention to these:
- The default region is not a Country code but "object-store-dev" for development and "object-store" for prod
- S3 endpoint: cdn-dev.podkrepi.bg
- When asked for DNS-style bucket use: cdn-dev.podkrepi.bg
- When asked for encryption password just press 'Enter' for leaving it empty
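For reference, the resulting ~/.s3cfg contains entries along these lines. This is an illustrative fragment for the dev environment only; the key names follow s3cmd's config format, the values mirror the list above, and the secrets are placeholders:

```ini
# ~/.s3cfg — illustrative fragment for the dev environment
host_base = cdn-dev.podkrepi.bg
host_bucket = cdn-dev.podkrepi.bg
bucket_location = object-store-dev
access_key = <S3_ACCESS_KEY>
secret_key = <S3_SECRET_ACCESS_KEY>
use_https = True
```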
Then bucket creation looks like this:
s3cmd ls
s3cmd mb s3://bucket-name
s3cmd ls
To enable sign-in with an existing Gmail account we use the token-exchange feature of Keycloak, following the great description in: https://medium.com/@souringhosh/keycloak-token-exchange-usage-with-google-sign-in-cd9127ebc96d
The logic is the following:
- The frontend acquires a token from Google Sign-in
- The frontend sends the token to the backend API, requesting a login with an external provider (see: auth.service.ts issueTokenFromProvider)
- The backend sends a token-exchange request to Keycloak, passing the Google token to ask for permission to log in
- The Keycloak server grants permission and returns an access token
- The backend creates the new user in the database and returns the access token for use by the frontend
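The token-exchange step can be sketched as a curl call. This is a hedged sketch: the endpoint path follows Keycloak's standard token endpoint, the realm/client values are the dev defaults from the settings table, and the parameter names come from Keycloak's token-exchange documentation; verify against your Keycloak version before relying on it.

```shell
#!/usr/bin/env sh
# Sketch of the backend's token-exchange call (values are dev defaults; adjust).
KEYCLOAK_URL="http://localhost:8180"
KEYCLOAK_REALM="webapp"
TOKEN_ENDPOINT="$KEYCLOAK_URL/realms/$KEYCLOAK_REALM/protocol/openid-connect/token"
echo "POST $TOKEN_ENDPOINT"
# The actual exchange (requires a running Keycloak and a real Google token):
# curl -s -X POST "$TOKEN_ENDPOINT" \
#   -d grant_type=urn:ietf:params:oauth:grant-type:token-exchange \
#   -d client_id=jwt-headless \
#   -d client_secret="$KEYCLOAK_SECRET" \
#   -d subject_token="$GOOGLE_TOKEN" \
#   -d subject_issuer=google \
#   -d subject_token_type=urn:ietf:params:oauth:token-type:access_token
```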
Setting | Description | Default value |
---|---|---|
PORT | The address on which the module binds. | 5010 |
GLOBAL_PREFIX | Registers a prefix for every HTTP route path | api/v1 |
APP_VERSION | The version of the application | "unknown" |
APP_ENV | Application runtime environment | development |
NODE_ENV | Node build environment | development |
TARGET_ENV | Docker multi-stage target | development |
TARGET_APP | Run specific application from the image. | api |
DATABASE_URL | Database connection string. | postgres://postgres:postgrespass@localhost:5432/postgres?schema=api |
S3_ENDPOINT | Endpoint for S3 interface. | https://cdn-dev.podkrepi.bg |
S3_REGION | The S3 region | us-east-1 |
S3_ACCESS_KEY | The S3 access key. | ****** |
S3_SECRET_ACCESS_KEY | The S3 secret access key. | ****** |
KEYCLOAK_URL | Keycloak authentication url | http://localhost:8180 |
KEYCLOAK_REALM | Keycloak Realm name | webapp |
KEYCLOAK_CLIENT_ID | Keycloak Client name | jwt-headless |
KEYCLOAK_SECRET | Secret to reach Keycloak in headless mode | DEV-KEYCLOAK-SECRET |
KEYCLOAK_USER | Master user for Keycloak Server | admin |
KEYCLOAK_PASSWORD | Master user's password for Keycloak Server | admin |
STRIPE_SECRET_KEY | Stripe secret key | ****** |
STRIPE_WEBHOOK_SECRET | Stripe webhook secret key | ****** |
SENTRY_DSN | Sentry Data Source Name | https://58b71cdea21f45c0bcbe5c1b49317973@o540074.ingest.sentry.io/5707518 |
SENTRY_ORG | Sentry organization | podkrepibg |
SENTRY_PROJECT | Sentry project | rest-api |
SENTRY_AUTH_TOKEN | Sentry build auth token | ****** |
SENTRY_SERVER_ROOT_DIR | App directory inside the docker image | /app |
SENDGRID_API_KEY | SendGrid API key | "" - emails disabled if not set |
SENDGRID_SENDER_EMAIL | SendGrid sender email | info@podkrepi.bg |
SENDGRID_INTERNAL_EMAIL | Internal notification email from contact form request | info@podkrepi.bg (Prod), qa@podkrepi.bg (Dev), dev@podkrepi.bg (localhost) |
SENDGRID_CONTACTS_URL | Endpoint to receive newsletter subscriptions | /v3/marketing/contacts |
CREATE SCHEMA api;
CREATE USER postgres WITH ENCRYPTED PASSWORD 'postgrespass';
GRANT ALL PRIVILEGES ON SCHEMA api TO postgres;
docker build -f Dockerfile.migrations .
docker run --env-file .env --network host <image-id>
Overall procedure:
- Ensure a local connection to the k8s cluster
- Start a new migrate-database container manually in the proper namespace (podkrepibg-dev or podkrepibg).
kubectl run manual-migrate-db \
-it --rm \
-n podkrepibg-dev \
--image=ghcr.io/podkrepi-bg/api/migrations:master \
-- /bin/sh
- Check migration status with
yarn prisma migrate status
Following migration have failed: 20220605165716_rename_bank_hash_to_payment_reference
- Rollback or apply migrations (suggested commands are printed from the status)
The failed migration(s) can be marked as rolled back or applied:
- If you rolled back the migration(s) manually:
yarn prisma migrate resolve --rolled-back "20220605165716_rename_bank_hash_to_payment_reference"
- If you fixed the database manually (hotfix):
yarn prisma migrate resolve --applied "20220605165716_rename_bank_hash_to_payment_reference"
- Run migration deployment
yarn prisma migrate deploy
- At this point you can re-deploy the api-headless deployment to trigger the standard flow of operation
If you'd like to use Postman to query the API - see postman doc