This is the pipeline service that implements the Precision Feedback Pipeline. The underlying model of precision feedback is captured in this conceptual model.
Read through our wiki pages for more detail on testing. Please note that this wiki might not be completely up to date.
The roadmap for the next version of the PFP can be found in the wiki.
This is a Python software project, and running the pipeline requires some familiarity with Python and virtual environments. This quick start gives directions using Python's built-in virtual environment tool `venv`, and alternatively Poetry.
```shell
git clone https://github.com/Display-Lab/precision-feedback-pipeline.git
cd precision-feedback-pipeline
```
Using venv and pip

```shell
python --version                 # make sure you have Python 3.11
python -m venv .venv
.venv\Scripts\activate.bat       # Windows
source .venv/bin/activate        # macOS/Linux
pip install -r requirements.txt  # this will take a while, so go get a cup of coffee
pip install uvicorn              # not installed by default (needed for running locally)
```
Alternative: Using Poetry (for developers)
```shell
poetry env use 3.11  # optional, but makes sure you have Python 3.11 available
poetry install       # creates a virtual environment and installs dependencies
poetry shell         # activates the environment
```
Clone the knowledge base repository in a separate location:

```shell
cd ..
git clone https://github.com/Display-Lab/knowledge-base.git
```
Change back to the root of precision-feedback-pipeline:

```shell
cd precision-feedback-pipeline
```
Update the `.env.local` file and change `path/to/knowledge-base` to point to the local knowledge base that you just checked out. (Don't remove the `file://` prefix for `preferences` and `manifest`.)
```shell
# .env.local
preferences=file:///Users/bob/knowledge-base/preferences.json
mpm=/Users/bob/knowledge-base/prioritization_algorithms/motivational_potential_model.csv
manifest=file:///Users/bob/knowledge-base/mpog_local_manifest.yaml
...
```
Run the pipeline:

```shell
ENV_PATH=.env.local uvicorn main:app
```

Expect to see a server start message like:

```
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
```
You can use Postman or your favorite tool to send a message and check the results. There is a sample input message file located at `tests/test_cases/input_message.json`. Here is a sample curl request:

```shell
curl --data "@tests/test_cases/input_message.json" http://localhost:8000/createprecisionfeedback/
```
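Equivalently, the request can be sent from Python. This is a minimal sketch using only the standard library; the endpoint and sample input file are the same ones used in the curl example, while `build_feedback_request` is a hypothetical helper name, not part of the pipeline's API.

```python
import urllib.request


def build_feedback_request(base_url: str, message_path: str) -> urllib.request.Request:
    # Read the sample input message shipped with the repository and wrap it
    # in a POST request to the endpoint shown in the curl example above.
    with open(message_path, "rb") as f:
        body = f.read()
    return urllib.request.Request(
        base_url + "/createprecisionfeedback/",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# With the pipeline running locally, send it:
# req = build_feedback_request("http://localhost:8000",
#                              "tests/test_cases/input_message.json")
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```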
Local file path or URL (see `.env.remote` for GitHub URL formats). All are required.
manifest: Path to the manifest file that includes the different pieces of the base graph (causal pathways, message templates, measures, and comparators). See manifest configuration for more detail.
- default: 6
generate_image: If set to true and the display type is a bar chart or line chart, the pipeline will generate the images and include them as part of the response.
- default: True
- default: WARNING (this is the production default)
- note: The PFP must be run with `log_level=INFO` in order to generate the candidate records in the output.
- default: None
These control the elements of the scoring algorithm.
- default: True
- default: True
- default: True
- default: True
The manifest file includes all the different pieces that should be loaded into the base graph, including causal pathways, message templates, measures, and comparators. It is a YAML file which specifies a directory structure containing JSON files for each of those categories.
Each entry consists of a key, which is a URL (`file://`, `https://`, or relative; see Uniform Resource Identifier (URI)), and a value, which is a file path relative to the URL. See the manifest examples in the knowledge base.
If the key is a relative path, it must end with a '/'. In that case the key is resolved against the location of the manifest file by the pipeline.
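The resolution rule for relative keys can be sketched with the standard library's `urljoin`. This is an illustration of the rule described above, not the pipeline's actual code; the key names and manifest path are made up for the example.

```python
from urllib.parse import urljoin


def resolve_manifest_key(manifest_url: str, key: str) -> str:
    # Absolute keys (file:// or https://) are returned unchanged; relative
    # keys, which must end with '/', resolve against the manifest's location.
    return urljoin(manifest_url, key)


manifest = "file:///Users/bob/knowledge-base/mpog_local_manifest.yaml"

# A relative key resolves against the manifest file's directory:
print(resolve_manifest_key(manifest, "message_templates/"))
# file:///Users/bob/knowledge-base/message_templates/

# An absolute key is used as-is:
print(resolve_manifest_key(manifest, "https://example.org/templates/"))
# https://example.org/templates/
```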
```shell
ENV_PATH=/user/.../dev.env log_level=INFO use_preferences=True use_coachiness=True use_mi=True generate_image=False uvicorn main:app --workers=5
```
👉 `uvicorn` can be run with multiple workers. This is useful when testing with a client that can send multiple requests.
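A multi-request client like that can be sketched with a thread pool. This is a standalone illustration, assuming the endpoint from the curl example and a server started with `--workers=5`; `post_message` and `send_concurrently` are hypothetical helper names.

```python
import concurrent.futures
import urllib.request


def post_message(url: str, payload: bytes) -> int:
    # POST one input message and return the HTTP status code.
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status


def send_concurrently(url: str, payload: bytes, n: int, send=post_message):
    # Fire n requests from parallel threads; with uvicorn --workers=5 the
    # server can process several of them at the same time.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(send, url, payload) for _ in range(n)]
        return [f.result() for f in futures]


# With the server running (uvicorn main:app --workers=5):
# body = open("tests/test_cases/input_message.json", "rb").read()
# print(send_concurrently(
#     "http://localhost:8000/createprecisionfeedback/", body, n=5))
```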