A web app for recording live batting data and leveraging that data to optimize lineups for softball teams. And walkup songs too ☻!
Live at https://softball.app/
- Install yarn:
```
[sudo] npm install -g yarn
```
- From this repo's root directory, run `install.sh`
- From this repo's root directory, run `start.sh`
- Visit http://localhost:8889 in your browser.
- Set up any optional features using the sections below if desired (not necessary).
Check for format errors:
```
yarn fmt:check
```
Fix format errors:
```
yarn fmt:fix
```
Check for lint errors:
```
yarn lint:check
```
Fix lint errors:
```
yarn lint:fix
```
By default, vitest runs in watch mode. If you want to run the tests once with a pass/fail result, run the `:prod` version.
```
# run all tests
yarn test

# run client tests
cd client
yarn test

# run server tests
cd server
yarn test
```
You can run tests one module at a time with npx:
```
cd client
npx vitest run <file-name>
```
TODO: Figure out how to do this directly from vscode
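One possible starting point for the VS Code TODO above (an untested sketch; the config name, cwd, and `${relativeFile}` wiring are assumptions and have not been verified against this repo):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug current test file (client)",
      "cwd": "${workspaceFolder}/client",
      "runtimeExecutable": "npx",
      "runtimeArgs": ["vitest", "run", "${relativeFile}"],
      "console": "integratedTerminal"
    }
  ]
}
```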
Dev mode starts its own web server to serve client assets and proxies app server requests to the app server.
Use `yarn start` if you want to start both the dev server and the app server at the same time in the same terminal. Alternatively, with two terminals, you can run `yarn start:client` and `yarn start:server`, or go into the respective directories and run `yarn start`.
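The proxying described above works like a standard Vite dev-server proxy. A minimal sketch (not this repo's actual `client/vite.config.js`; the `/server` path prefix is an assumption, and the ports are taken from the defaults mentioned elsewhere in this document):

```javascript
// vite.config.js (sketch only)
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    port: 8889, // dev server the browser talks to
    proxy: {
      // Forward app server requests to the app server
      '/server': 'http://localhost:8888',
    },
  },
});
```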
Production uses the app server to serve all client web assets.
If you would like to run the prod build, do the following:
```
# build client js code, which produces <git root>/build/*
yarn build

# run the server in prod mode, which serves from <git root>/build/*
yarn start:prod

# all together
yarn build && yarn start:prod
```
From scratch:
```
# Create a new GCP compute instance (debian), then:
sudo apt-get install -y git-core
sudo apt-get install -y curl
git clone https://github.com/thbrown/softball-sim.git
cd softball-sim
curl -sL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo npm install -g yarn
sudo apt-get install -y screen
screen
yarn
yarn build # or `./gcp-build.sh` if you encounter memory limitations with the local build
yarn start:prod
# Detach the screen session (ctrl+A, ctrl+D)
```
On an already running machine:
- Login to Google Cloud Platform
- Open a web ssh session on the compute instance the application is running on
- Run `screen -r`
- Kill the app/web server (ctrl+c)
- Run `git pull`
- Update any config (uncommon)
- Run `yarn build` (or `./gcp-build.sh` if you encounter memory limitations with the local build)
- Run `yarn start:prod`
- Detach the screen session (ctrl+A, ctrl+D)
The app will run without any of these enabled, but you can enable them for a production-like experience:
- Cloud storage for persistent storage (uses file system storage by default; see `./database` after using the app)
- Nginx as a reverse proxy to enable TLS and rate limiting (no reverse proxy by default: un-encrypted, runs on port 8888, no rate limiting)
- Cloud compute for running optimizations
- Email via mailgun
- Acquire a Google Cloud Platform (GCP) account.
- Install the google cloud SDK (https://cloud.google.com/sdk/docs/install-sdk)
- In this app's server `config.js`, specify the `GcpBuckets` mode and bucket names, like so:
```
database: {
  mode: 'GcpBuckets',
  bucketNames: {
    data: 'sba-data',
    emailLookup: 'sba-email-lookup',
    tokenLookup: 'sba-token-lookup',
    publicIdLookup: 'sba-public-id-lookup',
  },
},
```
- Edit the bucket names; bucket names must be globally unique, and these ones will be taken.
- Auth your machine and set proper permissions so the storage calls will succeed:
If you are running from a GCP compute instance, you can set the "access scope" on instance creation. The "Access Scope" required to use GCP storage is read/write or full.
If you are running on a local developer instance, you can auth with your Google credentials using the following command (using gcloud command line tools):
```
gcloud auth application-default login
```
You'll need to add the storage admin or editor roles to the account you log in as.
If you still get errors about permissions after setting IAM, you'll need to check to make sure the bucket names in the config are globally unique.
For details and other ways to authenticate, see https://cloud.google.com/docs/authentication/provide-credentials-adc
```
sudo apt-get install nginx
sudo nano /etc/nginx/nginx.conf
```
Running on port 80, comment out all https-related things (TODO: publish nginx config).
```
sudo systemctl restart nginx
sudo apt-get install certbot python-certbot-nginx
sudo certbot certonly --nginx
sudo openssl dhparam -dsaparam 4096 -out /etc/ssl/certs/dhparam.pem
```
Generate dhparam.pem to improve the security score. Make sure this file exists after you create it or Nginx will fail to start. I had to manually create `/etc/ssl/certs/dhparam.pem` and then run the command.
```
sudo nano /etc/nginx/nginx.conf
```
Add back all the commented-out https stuff.
```
sudo systemctl restart nginx
```
- Shut down the server, move it to port 8888, restart it
- Optionally enable automatic renewals:
```
sudo certbot renew --dry-run
```
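Until the real nginx config is published, here is a hedged sketch of the reverse-proxy setup described above (the zone name, rate limits, and certificate paths are assumptions, not this deployment's actual config):

```nginx
# Sketch only: TLS termination + rate limiting in front of the app server on 8888
limit_req_zone $binary_remote_addr zone=app_limit:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name softball.app;

    # Certificate paths as typically issued by certbot (assumed)
    ssl_certificate /etc/letsencrypt/live/softball.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/softball.app/privkey.pem;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;

    location / {
        limit_req zone=app_limit burst=20;
        proxy_pass http://localhost:8888;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```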
TODO
TODO
Get an API key from Mailgun, then put that key in the server config file (`src-srv/config.js`), which is generated from `config-template.js` when you start the app server for the first time.
```
email: {
  apiKey: 'yourapikeygoeshere',
  domain: 'mg.softball.app',
  restrictEmailsToDomain: 'softball.app', // Only allow emails to softball.app (in development we don't want to email randos by accident; set to null in production)
},
```
Data is passed to the backend via JSON and database implementations are responsible for persisting it.
The JSON schemas for this application live in `/shared/schema` and are written using JSON Schema (https://json-schema.org/specification.html).
The JSON schema files are named with the following suffixes. We can mix and match these in other schema files to get the validations we need.
- public (or no suffix) - Client has read/write access to the field.
- private - This field will never be sent to the client.
- read-only - This field can be read by the client but cannot be updated by the client via sync (the patch(..) method in the db files).
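For illustration only (the file names below are hypothetical, not the actual files in `/shared/schema`), mixing suffixed schemas typically looks like composing them with `$ref`:

```json
{
  "$id": "player-full.json",
  "allOf": [
    { "$ref": "player-public.json" },
    { "$ref": "player-private.json" },
    { "$ref": "player-read-only.json" }
  ]
}
```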
These are the schema files we actually do the validation against; they reference the other schema files in the schema directory.
- Full - All data associated with an account. This is what gets sent to the db layer.
- Client - Excludes private fields (e.g. password hashes). This schema is used to validate the JSON document stored by the browser.
- Export - Excludes the account node, as well as all private and all read-only fields. This schema is used to validate data handled by the export/import feature.
Note: JSON Schema allows for the specification of a "readOnly" keyword. We don't use it because it doesn't have any effect on validation; the recommendation is to use the readOnly property to perform pre-processing (ajv-validator/ajv#909) and generate READ or WRITE schemas accordingly. I don't want to write a JSON parser that does this, so we'll just define our read-only fields in their own files.
Each of the top-level schemas described above contains a metadata property at its root. The metadata node consists of two properties:
- Version - a serial integer, used in schema migration
- Scope - which top-level schema the document should be validated against [full, client, or export]
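For example, a document's metadata node might look like this (the version number is illustrative):

```json
{
  "metadata": {
    "version": 3,
    "scope": "client"
  }
}
```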
- Make your changes to the JSON schema files located in `./shared/schema`.
- Define how existing JSON documents should be updated to match your new schema in `./shared/schema/schema-migration.js`
- Increment `CURRENT_VERSION` at the top of `./shared/schema/schema-migration.js`
- If you've added read-only or private fields, you may need to write code to convert between different schema types in `./shared/schema/schema-validation.js`
- If you've added read-only or private fields, you may need to write code to prevent insecure patches in `./src-srv/patch-manager.js`
- Write your code to use your new schema!
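The migration flow above can be sketched like this. This is a simplified illustration, not the actual contents of `schema-migration.js`; the function names and the migration bodies are assumptions:

```javascript
// Sketch of the versioned-migration idea (hypothetical, not the real schema-migration.js)
const CURRENT_VERSION = 3;

// Each entry upgrades a document from (version - 1) to version.
const MIGRATIONS = {
  2: (doc) => {
    // e.g. rename or backfill a hypothetical field here
    return doc;
  },
  3: (doc) => {
    return doc;
  },
};

// Apply every migration newer than the document's stored version, in order.
function migrate(doc) {
  let version = doc.metadata.version;
  while (version < CURRENT_VERSION) {
    version += 1;
    doc = MIGRATIONS[version](doc);
    doc.metadata.version = version;
  }
  return doc;
}
```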
This app contains a service worker that's used to enable offline access. The service worker is only generated and used for production builds of the app.
You can enable debugging (of the production code) by un-commenting `// mode: 'develop',` in the client vite config.
Note: this is broken because the file structure has changed. It should be fixable, but it isn't as important because the new GCP free instances manage memory better and can build the app just fine.
Cloud build:
```
cd scripts
./gcp-build.sh
cd ..
yarn start
```