CRI-O and Kubernetes follow the same release cycle and deprecation policy. For more information visit the Kubernetes versioning documentation.
| Version - Branch | Kubernetes branch/version | Maintenance status |
| --- | --- | --- |
| CRI-O 1.16.x - release-1.16 | Kubernetes 1.16 branch, v1.16.x | = |
| CRI-O 1.17.x - release-1.17 | Kubernetes 1.17 branch, v1.17.x | = |
| CRI-O 1.18.x - release-1.18 | Kubernetes 1.18 branch, v1.18.x | = |
| CRI-O HEAD - master | Kubernetes master branch | ✓ |
Key:

- `✓` Changes in the main Kubernetes repo about the CRI are actively implemented in CRI-O
- `=` Maintenance is manual; only bugs will be patched.
The release notes for CRI-O are hand-crafted and can be continuously retrieved from our GitHub pages website.
CRI-O is meant to provide an integration path between OCI conformant runtimes and the kubelet. Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of CRI-O is tied to the scope of the CRI.
At a high level, we expect the scope of CRI-O to be restricted to the following functionalities:
- Support multiple image formats including the existing Docker image format
- Support for multiple means to download images including trust & image verification
- Container image management (managing image layers, overlay filesystems, etc)
- Container process lifecycle management
- Monitoring and logging required to satisfy the CRI
- Resource isolation as required by the CRI
- Building, signing and pushing images to various image storages
- A CLI utility for interacting with CRI-O. Any CLIs built as part of this project are only meant for testing this project, and there are no guarantees of backward compatibility for them.
This is an implementation of the Kubernetes Container Runtime Interface (CRI) that will allow Kubernetes to directly launch and manage Open Container Initiative (OCI) containers.
The plan is to use OCI projects and best of breed libraries for different aspects:
- Runtime: runc (or any OCI runtime-spec implementation) and oci runtime tools
- Images: Image management using containers/image
- Storage: Storage and management of image layers using containers/storage
- Networking: Networking support through use of CNI
It is currently in active development in the Kubernetes community through the design proposal. Questions and issues should be raised in the Kubernetes sig-node Slack channel.
| Command | Description |
| --- | --- |
| crio(8) | OCI Kubernetes Container Runtime daemon |
Note that kpod and its container management and debugging commands have moved to a separate repository, located here.
| File | Description |
| --- | --- |
| crio.conf(5) | CRI-O configuration file |
| policy.json(5) | Signature verification policy file(s) |
| registries.conf(5) | Registries configuration file |
| storage.conf(5) | Storage configuration file |
You can configure CRI-O to inject OCI Hooks when creating containers.
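As a sketch of how a hook can be registered, the snippet below writes a hook definition file. The hooks directory, file name, and hook binary path are illustrative (system-wide hook definitions are typically placed under `/etc/containers/oci/hooks.d` or `/usr/share/containers/oci/hooks.d`):

```shell
# Sketch: register a prestart OCI hook for containers created by CRI-O.
# HOOKS_DIR, my-hook.json, and /usr/local/bin/my-hook are illustrative names;
# a real deployment would write into /etc/containers/oci/hooks.d instead.
HOOKS_DIR="${HOOKS_DIR:-./hooks.d}"
mkdir -p "$HOOKS_DIR"
cat > "$HOOKS_DIR/my-hook.json" <<'EOF'
{
  "version": "1.0.0",
  "hook": {
    "path": "/usr/local/bin/my-hook"
  },
  "when": {
    "always": true
  },
  "stages": ["prestart"]
}
EOF
```

The `when` clause controls which containers the hook applies to; `"always": true` matches every container.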
We provide useful information for operations and development transfer as it relates to infrastructure that utilizes CRI-O.
For async communication and long-running discussions, please use issues and pull requests on the GitHub repo. This is the best place to discuss design and implementation.
For chat communication, we have an IRC channel, #CRI-O, on chat.freenode.net, and a channel on the Kubernetes Slack that everyone is welcome to join and chat about development.
We maintain a curated list of links related to CRI-O. Did you find something interesting on the web about the project? Awesome, feel free to open up a PR and add it to the list.
To install CRI-O, you can follow our installation guide. Alternatively, if you'd rather build CRI-O from source, check out our setup guide.
We also provide a way of building static binaries of CRI-O via nix. Those binaries are available for every successfully built commit on our Google Cloud Storage bucket. This means that the latest commit can be downloaded via:
> curl -f https://storage.googleapis.com/k8s-conform-cri-o/artifacts/crio-$(git ls-remote https://github.com/cri-o/cri-o master | cut -c1-9).tar.gz -o crio.tar.gz
Before you begin, you'll need to start CRI-O.

You can run a local version of Kubernetes with CRI-O using `local-up-cluster.sh`:
- Clone the Kubernetes repository
- From the Kubernetes project directory, run:
CGROUP_DRIVER=systemd \
CONTAINER_RUNTIME=remote \
CONTAINER_RUNTIME_ENDPOINT='unix:///var/run/crio/crio.sock' \
./hack/local-up-cluster.sh
For more guidance in running CRI-O, visit our tutorial page.
CRI-O exposes by default the gRPC API to fulfill the Container Runtime Interface (CRI) of Kubernetes. Besides this, there exists an additional HTTP API to retrieve further runtime status information about CRI-O. Please be aware that this API is not considered stable, and production use cases should not rely on it.
On a running CRI-O instance, we can access the API via an HTTP transfer tool like curl:
$ sudo curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info | jq
{
"storage_driver": "btrfs",
"storage_root": "/var/lib/containers/storage",
"cgroup_driver": "systemd",
"default_id_mappings": { ... }
}
The following API entry points are currently supported:
| Path | Content-Type | Description |
| --- | --- | --- |
| `/info` | `application/json` | General information about the runtime, like `storage_driver` and `storage_root`. |
| `/containers/:id` | `application/json` | Dedicated container information, like `name`, `pid` and `image`. |
| `/config` | `application/toml` | The complete TOML configuration (defaults to `/etc/crio/crio.conf`) used by CRI-O. |
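Because the API listens on a UNIX socket, any HTTP client that supports socket transports can query it. A minimal sketch of a helper for doing so (the `crio_api` function and the `CRIO_SOCK` override are illustrative conveniences, not part of CRI-O; `curl` is assumed to be installed):

```shell
# Sketch: query CRI-O's status HTTP API over its UNIX socket.
# crio_api and CRIO_SOCK are illustrative names; the default socket path
# on a standard installation is /var/run/crio/crio.sock.
crio_api() {
    sock="${CRIO_SOCK:-/var/run/crio/crio.sock}"
    endpoint="${1:-/info}"   # e.g. /info, /config, /containers/<id>
    if [ ! -S "$sock" ]; then
        echo "CRI-O socket not found at $sock (is crio running?)" >&2
        return 1
    fi
    # --unix-socket makes curl speak HTTP over the local socket;
    # the hostname after http:// is ignored by the server.
    curl -sf --unix-socket "$sock" "http://localhost$endpoint"
}
```

On a host with CRI-O running, `crio_api /config` would print the active TOML configuration, and `crio_api /containers/<id>` the details of a single container.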
The tool `crio-status` can be used to access the API with a dedicated command line tool. It supports all API endpoints via the dedicated subcommands `config`, `info` and `containers`, for example:
$ sudo go run cmd/crio-status/main.go info
cgroup driver: systemd
storage driver: btrfs
storage root: /var/lib/containers/storage
default GID mappings (format <container>:<host>:<size>):
0:0:4294967295
default UID mappings (format <container>:<host>:<size>):
0:0:4294967295
Please refer to the CRI-O Metrics guide.
A weekly meeting is held to discuss CRI-O development. It is open to everyone. The details to join the meeting are on the wiki.