This repository contains examples of how to use the FhirProto platform at github.com/google/fhir. It includes a generate-synthea.sh
script for using Synthea to create a synthetic FHIR JSON dataset, and then walks through examples of parsing, printing, validating, profiling, and querying that data. Some of these examples are intentionally left incomplete, as exercises to go along with this guide.
The rest of this README contains instructions for setting up an environment for working with FhirProto. For instructions on running the examples, check out EXAMPLES.md.
For a more comprehensive explanation of the platform, see the User Guide in the main repo.
FhirProto uses Bazel as its dependency management and build tool. Bazel is a declarative build system used by Google, TensorFlow, and many others. We require a minimum Bazel version of 2.2.0. Follow the steps here to download and run the install script. Pro-tip: make sure not to drop the --user
flag when running the script. You can verify that Bazel is installed correctly, and check which version you have at any time, by running bazel --version
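As a rough sketch, a --user install on Linux looks something like the following; the installer filename depends on the Bazel version and platform you download, so treat it as a placeholder:

```sh
# Download the installer from the Bazel releases page first (filename is a placeholder)
chmod +x bazel-2.2.0-installer-linux-x86_64.sh
./bazel-2.2.0-installer-linux-x86_64.sh --user   # don't drop the --user flag

# A --user install puts bazel in ~/bin, so make sure that directory is on your PATH
export PATH="$HOME/bin:$PATH"

# Confirm the install worked
bazel --version
```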
This repository also provides an example of Gradle integration with the published Java library on Google's Maven repository. See build.gradle
for how to set it up and ParsePatients.java
for how to run it.
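As a rough sketch, assuming build.gradle applies Gradle's application plugin and points its main class at ParsePatients.java (check the file for the actual configuration), building and running the Java example might look like:

```sh
# Build the Gradle project
gradle build

# Run the Java example against the generated dataset
# (the --args passthrough requires the application plugin)
gradle run --args="$WORKSPACE"
```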
Next, we’ll clone the example repository into a git directory. If you don’t have a directory you already use for git code, ~/git
is perfectly reasonable:
mkdir ~/git
cd ~/git
Then, clone this repo using
git clone https://github.com/google/fhir-example.git
cd fhir-example
Next, we’ll generate a synthetic FHIR JSON dataset, using Synthea, via the generate-synthea.sh
script. We’ll need a workspace directory for this dataset - here we’ll use ~/fhirdata/
WORKSPACE=~/fhirdata
./generate-synthea.sh $WORKSPACE
This will create three data directories:

- $WORKSPACE/bundles/ contains 1000 patient bundles, each in its own JSON file
- $WORKSPACE/ndjson/ contains one NDJSON file per resource type, where each line represents a record of that type
- $WORKSPACE/analytic/ contains two files per resource type for use with Analytic SQL-on-FHIR: a .schema.json file containing the analytic schema for each resource, and an .analytic.ndjson file containing the resources printed according to the analytic schema. For more on this, see the Analytic Printing and BigQuery section in the User Guide.
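To get a feel for the generated data, you can poke around the workspace directory. The file names below (e.g. Patient.fhir.ndjson) are illustrative, so check what generate-synthea.sh actually produced:

```sh
# Count the generated patient bundles; this should report 1000
ls $WORKSPACE/bundles/ | wc -l

# List the per-resource NDJSON files (exact file names may differ)
ls $WORKSPACE/ndjson/

# Peek at one record; each line is a complete FHIR JSON resource
head -n 1 $WORKSPACE/ndjson/Patient.fhir.ndjson | python3 -m json.tool
```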
At this point, we can validate that our bazel environment is set up correctly and our dataset is generated by running a simple test example:
bazel build //cc/google/fhir_examples:ParsePatients
bazel-bin/cc/google/fhir_examples/ParsePatients $WORKSPACE
This should parse all 1000 patients we generated into FhirProto, and print one out as an example.
Generating custom profiles and protos makes use of a couple of scripts defined by the FhirProto library. To add these to your bin, run:
curl https://raw.githubusercontent.com/google/fhir/v0.5.0/bazel/generate_protos_utils.sh > ~/bin/generate_protos_utils.sh && \
curl https://raw.githubusercontent.com/google/fhir/v0.5.0/bazel/generate_protos.sh > ~/bin/generate_protos.sh && \
curl https://raw.githubusercontent.com/google/fhir/v0.5.0/bazel/generate_definitions_and_protos.sh > ~/bin/generate_definitions_and_protos.sh && \
chmod +x ~/bin/generate_protos.sh && chmod +x ~/bin/generate_definitions_and_protos.sh
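With ~/bin on your PATH, regenerating profiled definitions and protos is typically a single call against a Bazel target. The target name below is a placeholder for whatever profile package you define; see the User Guide for the exact invocation:

```sh
# Regenerate StructureDefinitions and .proto files for a profile package
# (//myprofile:myprofile is a placeholder target; substitute your own)
generate_definitions_and_protos.sh //myprofile:myprofile
```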
Finally, some examples show how to use Analytic SQL-on-FHIR with BigQuery, which is free to set up and provides a sandbox environment with pretty good quotas. There are some examples in the shell directory that show how to upload data to BigQuery using the bq command-line tool. Once it's there, you can either query it from the Cloud Console or use the bq
tool, as is done in run_queries.sh.
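If you want to try this by hand, uploading a single resource type generally looks something like the sketch below. The dataset name (synthea) and file names are illustrative; the shell directory and run_queries.sh show the repo's actual commands:

```sh
# Create a BigQuery dataset to hold the tables (name is illustrative)
bq mk synthea

# Load the analytic NDJSON for Patient, using the generated analytic schema
bq load --source_format=NEWLINE_DELIMITED_JSON \
  synthea.Patient \
  $WORKSPACE/analytic/Patient.analytic.ndjson \
  $WORKSPACE/analytic/Patient.schema.json

# Run a quick sanity-check query from the command line
bq query --use_legacy_sql=false 'SELECT COUNT(*) AS patient_count FROM synthea.Patient'
```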
FHIR® is the registered trademark of HL7 and is used with the permission of HL7. Use of the FHIR trademark does not constitute endorsement of this product by HL7.