# Host Your Dataset on GitHub Using Releases

GitHub Releases allow you to host pre-built assets, source code, and other files for download. This recipe takes advantage of GitHub Releases, using a fixed release tag name to build your dataset and make it available at a fixed download URL. It is the foundation of nearly all the other things we do with CueBlox.

## Prerequisites

* content repository hosted on GitHub
* blox.cue configuration is complete

## Presumptions

Since your configuration can vary drastically based on your needs, we'll be working with the following assumptions:

Directory structure:

```bash
$ tree -L 1
.
├── README.md
├── data
├── unrelated_directory
└── other_unrelated_directory
```

Our CueBlox-managed data lives in the `data` directory, and the configuration file is in that directory as well.

blox.cue contents:

```cue
{
    data_dir: "."
    schemata_dir: "schemata"
    build_dir: ".build" // take note of this
    template_dir: "tpl"
    static_dir: "static"
}
```

Important to note: the `data_dir` is set to `.`, the current directory. This isn't required for this recipe to work; it simply allows our directory structure to be a little flatter. The important piece is `build_dir`, which is set to `.build`. These directories are relative to the location of the `blox.cue` file, so in our example, the output from `blox build` will be at `$REPO_ROOT/data/.build/data.json`.

## Releasing with GitHub Actions

Create a new GitHub Action by placing a file in `$REPO_ROOT/.github/workflows`. You can call it anything; if you don't specify a name in the Action's YAML definition, the file name will be used. We will use `data.yaml` in this recipe, to give us a workflow called `data`.

data.yaml:

```yaml
on:
  push:
    paths:
      - .github/**
      - data/** # run when files change here

jobs:
  build:
    runs-on: ubuntu-latest
    name: Publish CueBlox
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Build & Validate Blox Data
        id: build
        uses: cueblox/github-action@v0.0.8
        with:
          directory: data # location of blox.cue

      - uses: marvinpinto/action-automatic-releases@latest
        with:
          repo_token: "${{ secrets.GITHUB_TOKEN }}"
          automatic_release_tag: "blox"
          prerelease: true
          title: "CueBlox"
          files: | # build_dir + data.json
            data/.build/data.json
```

This workflow uses the `cueblox/github-action` action to compile and validate your dataset, passing in the working directory `data` to tell `blox` where to look for your `blox.cue` file.

The last step uses `marvinpinto/action-automatic-releases` to create a release of your dataset. Because it specifies `prerelease: true` and `automatic_release_tag: "blox"`, the release will always have the tag `blox`, which means it will always be available at the same URL.

If you follow this pattern, your releases will be available at a URL that looks like this:

```
https://github.com/you/reponame/releases/download/blox/data.json
```

Now you have a fixed location to download your dataset, which will be updated automatically every time you push new files to your content repository.
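Because the tag is constant, the download URL is determined entirely by the repository name. A small sketch of constructing it programmatically; the `dataUrl` helper is hypothetical, not part of CueBlox:

```javascript
// Hypothetical helper (not part of CueBlox) that builds the fixed download
// URL for a dataset published under a constant release tag.
function dataUrl(owner, repo, tag = "blox", file = "data.json") {
  return `https://github.com/${owner}/${repo}/releases/download/${tag}/${file}`;
}

console.log(dataUrl("you", "reponame"));
// → https://github.com/you/reponame/releases/download/blox/data.json
```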
|
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Original file line number | Diff line number | Diff line change |
---|---|---|
@@ -0,0 +1,44 @@ | ||
# Serving Data as GraphQL

## Prerequisites

* content repository hosted on GitHub
* blox.cue configuration is complete
* [dataset hosted on GitHub](/recipes/github-releases), or another fixed URL

## json-graphql-server

Use the awesome [json-graphql-server](https://www.npmjs.com/package/json-graphql-server) to serve your dataset as GraphQL.

The implementation will vary based on how you choose to run the service, but we've really enjoyed using serverless/Function hosting platforms like Azure Functions, Vercel, AWS Lambda, and Netlify to host this service. Here's the core of the recipe:

```javascript
// sync-fetch lets us load the dataset synchronously at startup
const fetch = require("sync-fetch");
const jsonGraphqlExpress = require("json-graphql-server").default;
const express = require("express");

// Download the published dataset from its fixed release URL
const data = fetch(
  "https://github.com/bketelsen/bkml/releases/download/blox/data.json"
).json();

const app = express();

// Serve the dataset as a GraphQL endpoint
app.use("/api/graphql", jsonGraphqlExpress(data));

const port = process.env.PORT || 3000;

module.exports = app.listen(port, () =>
  console.log(`Server running on ${port}, http://localhost:${port}`)
);
```

We're using `json-graphql-server` to serve the dataset as an `express` application at the `/api/graphql` route. When you run this, you can explore the GraphiQL interface in your web browser, or make programmatic requests with a GraphQL client.
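For example, json-graphql-server derives query names from your dataset's collections, so if your dataset contained an `articles` collection, you could send a query along these lines (the collection and field names here are hypothetical):

```graphql
{
  allArticles {
    id
    title
  }
}
```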

See the [documentation](https://www.npmjs.com/package/json-graphql-server) for complete details.

This recipe won't typically work without some modification of the source. The key is to figure out how to adapt an `express` endpoint to your hosting provider's serverless hosting. Vercel is happy to serve up a function running `express` without modification. Azure Functions requires an adapter like [azure-function-express](https://www.npmjs.com/package/azure-function-express).
# Common Patterns

CueBlox can be the foundation of an impressive git-based automation workflow.

The following recipes suggest some ways to consume your CueBlox dataset.

## Building and Hosting Datasets

* [Host dataset on GitHub](/recipes/github-releases) (start here!)
* [Serve data with REST](/recipes/rest)
* [Serve data over GraphQL](/recipes/graphql)
# Serving Data over REST

## Prerequisites

* content repository hosted on GitHub
* blox.cue configuration is complete
* [dataset hosted on GitHub](/recipes/github-releases), or another fixed URL

## json-server

Use the awesome [json-server](https://www.npmjs.com/package/json-server) to serve your dataset as a REST API.

The implementation will vary based on how you choose to run the service, but we've really enjoyed using serverless/Function hosting platforms like Azure Functions, Vercel, AWS Lambda, and Netlify to host this service. Here's the core of the recipe:

```javascript
// sync-fetch lets us load the dataset synchronously at startup
const fetch = require("sync-fetch");
const jsonServer = require("json-server");
const express = require("express");

// Download the published dataset from its fixed release URL
const data = fetch(
  "https://github.com/you/yourrepo/releases/download/blox/data.json"
).json();

const app = express();
const router = jsonServer.router(data, { foreignKeySuffix: "_id" });

// Serve the dataset as a REST API under /api
app.use("/api", router);

const port = process.env.PORT || 3000;

module.exports = app.listen(port, () =>
  console.log(`Server running on ${port}, http://localhost:${port}`)
);
```

We're using `json-server` to serve the dataset as an `express` application at the `/api` route. When you run this, you can make a `GET` request to `/api/datasetname` (`/api/articles`, for example) and get back all the data in that dataset.
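With the server above running locally, json-server exposes its usual REST conventions for each collection in the dataset. A sketch of what that looks like, assuming a hypothetical `articles` collection:

```shell
curl http://localhost:3000/api/articles    # list every record in the dataset
curl http://localhost:3000/api/articles/1  # fetch a single record by id
curl "http://localhost:3000/api/articles?title_like=cue"  # filter records
```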

See the [documentation](https://www.npmjs.com/package/json-server) for complete details.

This recipe won't typically work without some modification of the source. The key is to figure out how to adapt an `express` endpoint to your hosting provider's serverless hosting. Vercel is happy to serve up a function running `express` without modification. Azure Functions requires an adapter like [azure-function-express](https://www.npmjs.com/package/azure-function-express).