Jetstream Maxtext Module #719

Merged
merged 35 commits into main from jetstream-module
Jul 9, 2024
Changes from 33 of 35 commits
2a7e568
first commit
Bslabe123 Jul 1, 2024
2824bb2
terraform fmt
Bslabe123 Jul 1, 2024
d8e1228
Update README.md
Bslabe123 Jul 1, 2024
cc3d7c5
prometheus adapter module in main
Bslabe123 Jul 1, 2024
d627b02
remove apply.sh
Bslabe123 Jul 1, 2024
f662092
typo
Bslabe123 Jul 1, 2024
4f1ba93
terraform fmt
Bslabe123 Jul 1, 2024
2086f62
Merge remote-tracking branch 'origin/main' into jetstream-module
Bslabe123 Jul 2, 2024
c8b3687
large cleanup and validation
Bslabe123 Jul 2, 2024
b63b93c
moved fields and made module variables consistent with example variables
Bslabe123 Jul 2, 2024
1a0444e
parameterized accelerator selectors
Bslabe123 Jul 2, 2024
505fbe1
parameterize metrics scrape interval
Bslabe123 Jul 2, 2024
b441e28
fmt
Bslabe123 Jul 2, 2024
d193afa
fmt
Bslabe123 Jul 2, 2024
dfd8078
load parameters parameterization and multiple hpa resources
Bslabe123 Jul 2, 2024
e36f05f
fmt
Bslabe123 Jul 2, 2024
2cc9a1f
parameterized model name
Bslabe123 Jul 2, 2024
4b37ebd
update readme and validators
Bslabe123 Jul 2, 2024
de8a8f7
changes to jetstream module deployment readme
Bslabe123 Jul 2, 2024
359cc6d
terraform fmt
Bslabe123 Jul 2, 2024
d86578f
accelerator_memory_used_percentage -> memory_used_percentage
Bslabe123 Jul 3, 2024
ccc736a
changes to READMEs
Bslabe123 Jul 3, 2024
d77e30c
tweaks
Bslabe123 Jul 3, 2024
482fd8e
metrics port optional
Bslabe123 Jul 3, 2024
58081c4
sample tfvars no longer includes autoscaling config
Bslabe123 Jul 3, 2024
bb81350
example autoscaling config
Bslabe123 Jul 3, 2024
7af7810
Update README.md
Bslabe123 Jul 8, 2024
ab88992
Update README.md
Bslabe123 Jul 8, 2024
aece9b0
Update README.md
Bslabe123 Jul 8, 2024
376a90a
strengthen hpa config validation
Bslabe123 Jul 8, 2024
161c333
More updates to readmes
Bslabe123 Jul 8, 2024
880cb36
tweak to readme
Bslabe123 Jul 8, 2024
eb44246
typo
Bslabe123 Jul 8, 2024
d5d05f4
missing kubectl apply
Bslabe123 Jul 8, 2024
cc64c39
typos
Bslabe123 Jul 9, 2024
50 changes: 25 additions & 25 deletions modules/custom-metrics-stackdriver-adapter/README.md
@@ -2,32 +2,9 @@

Adapted from https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml

## Installation via bash, gcloud, and kubectl

Ensure the following environment variables are set:
- PROJECT_ID: Your GKE project ID
- WORKLOAD_IDENTITY: Whether workload identity federation is enabled in the target cluster

@@ -63,3 +40,26 @@
kubectl apply -f apiservice_v1beta2.custom.metrics.k8s.io.yaml.tftpl
kubectl apply -f apiservice_v1beta1.external.metrics.k8s.io.yaml.tftpl
kubectl apply -f clusterrolebinding_external-metrics-reader.yaml.tftpl
```

## Installation via Terraform

To use this as a module, include it from your terraform main:

```
module "custom_metrics_stackdriver_adapter" {
source = "./path/to/custom-metrics-stackdriver-adapter"
}
```

For a workload identity enabled cluster, some additional configuration is
needed:

```
module "custom_metrics_stackdriver_adapter" {
source = "./path/to/custom-metrics-stackdriver-adapter"
workload_identity = {
enabled = true
project_id = "<PROJECT_ID>"
}
}
```
171 changes: 171 additions & 0 deletions modules/jetstream-maxtext-deployment/README.md
@@ -0,0 +1,171 @@
This module deploys Jetstream Maxtext to a cluster. If `prometheus_port` is set, a [PodMonitoring CR](https://cloud.google.com/stackdriver/docs/managed-prometheus/setup-managed#gmp-pod-monitoring) will be deployed to scrape metrics and export them to Google Cloud Monitoring. See the [deployment template](./templates/deployment.yaml.tftpl) for the command line args passed by default. For additional configuration, reference the [MaxText base config file](https://github.com/google/maxtext/blob/main/MaxText/configs/base.yml) for a list of configurable command line args and their explanations.

## Installation via bash and kubectl

Ensure the following environment variables are set:
- MODEL_NAME: The name of your LLM (as of this writing, valid options are "gemma-7b", "llama2-7b", and "llama2-13b")
- PARAMETERS_PATH: Where to find the parameters for your LLM (if using the checkpoint-converter it will be "gs:\/\/$BUCKET_NAME\/final\/unscanned\/gemma_7b-it\/0\/checkpoints\/0\/items", where $BUCKET_NAME is the same one used in the checkpoint-converter; the slashes are escaped so the value survives the `sed` substitution below)
- (optional) METRICS_PORT: Port to emit custom metrics on
- (optional) TPU_TOPOLOGY: Topology of TPU chips used by jetstream (default: "2x4")
- (optional) TPU_TYPE: Type of TPUs used (default: "tpu-v5-lite-podslice")
- (optional) TPU_CHIP_COUNT: Number of TPU chips requested, the product of the TPU_TOPOLOGY dimensions (default: "8")
- (optional) MAXENGINE_SERVER_IMAGE: Maxengine server container image
- (optional) JETSTREAM_HTTP_SERVER_IMAGE: Jetstream HTTP server container image
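
The chip count is the product of the topology dimensions, so rather than setting it by hand it can be derived from TPU_TOPOLOGY directly (a sketch, assuming an `x`-separated topology string such as "2x4"):

```
# "2x4" -> 2 * 4 = 8 chips; also handles 3D topologies like "2x2x4"
TPU_TOPOLOGY="2x4"
TPU_CHIP_COUNT=$(echo "$TPU_TOPOLOGY" | awk -Fx '{ n = 1; for (i = 1; i <= NF; i++) n *= $i; print n }')
echo "$TPU_CHIP_COUNT"   # prints 8
```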

```
if [ -z "$MAXENGINE_SERVER_IMAGE" ]; then
  MAXENGINE_SERVER_IMAGE="us-docker.pkg.dev\/cloud-tpu-images\/inference\/maxengine-server:v0.2.2"
fi

if [ -z "$JETSTREAM_HTTP_SERVER_IMAGE" ]; then
  JETSTREAM_HTTP_SERVER_IMAGE="us-docker.pkg.dev\/cloud-tpu-images\/inference\/jetstream-http:v0.2.2"
fi

if [ -z "$TPU_TOPOLOGY" ]; then
  TPU_TOPOLOGY="2x4"
fi

if [ -z "$TPU_TYPE" ]; then
  TPU_TYPE="tpu-v5-lite-podslice"
fi

if [ -z "$TPU_CHIP_COUNT" ]; then
  TPU_CHIP_COUNT="8"
fi

if [ -z "$MODEL_NAME" ]; then
  echo "Must provide MODEL_NAME in environment" 1>&2
  exit 2
fi

if [ -z "$PARAMETERS_PATH" ]; then
  echo "Must provide PARAMETERS_PATH in environment" 1>&2
  exit 2
fi

JETSTREAM_MANIFEST=$(mktemp)
cat ./templates/deployment.yaml.tftpl >> "$JETSTREAM_MANIFEST"

PODMONITORING_MANIFEST=$(mktemp)
cat ./templates/podmonitoring.yaml.tftpl >> "$PODMONITORING_MANIFEST"

if [ -n "$METRICS_PORT" ]; then
  sed -i "s/\${metrics_port}/$METRICS_PORT/g" "$PODMONITORING_MANIFEST"
  sed -i "s/\${metrics_port_arg}/prometheus_port=$METRICS_PORT/g" "$JETSTREAM_MANIFEST"

  kubectl apply -f "$PODMONITORING_MANIFEST"
else
  sed -i "s/\${metrics_port_arg}//g" "$JETSTREAM_MANIFEST"
fi

sed -i \
  -e "s/\${tpu-type}/$TPU_TYPE/g" \
  -e "s/\${tpu-topology}/$TPU_TOPOLOGY/g" \
  -e "s/\${tpu-chip-count}/$TPU_CHIP_COUNT/g" \
  -e "s/\${maxengine_server_image}/$MAXENGINE_SERVER_IMAGE/g" \
  -e "s/\${jetstream_http_server_image}/$JETSTREAM_HTTP_SERVER_IMAGE/g" \
  -e "s/\${model_name}/$MODEL_NAME/g" \
  -e "s/\${load_parameters_path_arg}/$PARAMETERS_PATH/g" \
  "$JETSTREAM_MANIFEST"

kubectl apply -f "$JETSTREAM_MANIFEST"
```
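
The `sed` templating used above can be sanity-checked against a miniature stand-in template before touching the real manifests (the template line and values below are illustrative, not taken from the real deployment template):

```
# Hypothetical one-line stand-in for deployment.yaml.tftpl
TMPL='image: ${maxengine_server_image}, model: ${model_name}'
MAXENGINE_SERVER_IMAGE="us-docker.pkg.dev\/cloud-tpu-images\/inference\/maxengine-server:v0.2.2"
MODEL_NAME="gemma-7b"
# The escaped slashes in the value are consumed by sed as escaped delimiters,
# leaving plain slashes in the output
echo "$TMPL" \
  | sed "s/\${maxengine_server_image}/$MAXENGINE_SERVER_IMAGE/g" \
  | sed "s/\${model_name}/$MODEL_NAME/g"
# prints: image: us-docker.pkg.dev/cloud-tpu-images/inference/maxengine-server:v0.2.2, model: gemma-7b
```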
## (Optional) Autoscaling Components

Applying the following resources to your cluster will enable you to scale the number of Jetstream server pods with custom or system metrics:
- Metrics Adapter (either [Prometheus-adapter](https://github.com/kubernetes-sigs/prometheus-adapter) (recommended) or [CMSA](https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/custom-metrics-stackdriver-adapter)): Makes metrics from the Google Cloud Monitoring API visible to resources within the cluster.
- [Horizontal Pod Autoscaler (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/): Reads those metrics and sets the maxengine-server deployment's replica count accordingly.

### Metrics Adapter

#### Custom Metrics Stackdriver Adapter

Follow the [Custom-metrics-stackdriver-adapter README](https://github.com/GoogleCloudPlatform/ai-on-gke/tree/main/modules/custom-metrics-stackdriver-adapter/README.md) to install without terraform.

Once installed, the values of the following metrics can be used as `averageValue` in a HorizontalPodAutoscaler (HPA):
- Jetstream metrics (i.e. any metric prefixed with "jetstream_")
- "memory_used" (the current sum of memory usage across all accelerators used by a node, in bytes; note this value can be extremely large since the unit of measurement is bytes)
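
Since "memory_used" is denominated in bytes, it is easiest to compute the `averageValue` target from a human-readable figure (the 20 GiB figure below is purely illustrative):

```
# Express a 20 GiB per-replica target in bytes for use as averageValue
AVERAGE_VALUE=$(( 20 * 1024 * 1024 * 1024 ))
echo "$AVERAGE_VALUE"   # prints 21474836480
```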

#### Prometheus Adapter

Follow the [Prometheus-adapter README](https://github.com/GoogleCloudPlatform/ai-on-gke/tree/main/modules/prometheus-adapter/README.md) to install without terraform. A few notes:

This module uses the prometheus-community/prometheus-adapter Helm chart as part of the install process. The chart's values file requires "CLUSTER_NAME" to be replaced with your cluster name in order to properly filter metrics; this is a consequence of differing cluster name schemes between GKE and standard k8s clusters. If the cluster name isn't already known, it can be recovered as follows: for GKE clusters, remove everything up to and including the last underscore of the current context (`kubectl config current-context | awk -F'_' '{ print $NF }'`); for other clusters, the cluster name is simply `kubectl config current-context`.
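
The GKE extraction rule can be verified against a sample context string (the context value below is a made-up example, standing in for the output of `kubectl config current-context`):

```
# GKE context names look like gke_<project>_<location>_<cluster>;
# the cluster name is the last underscore-separated field
CONTEXT="gke_my-project_us-central1-a_my-cluster"
CLUSTER_NAME=$(echo "$CONTEXT" | awk -F'_' '{ print $NF }')
echo "$CLUSTER_NAME"   # prints my-cluster
```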

Set the PROMETHEUS_HELM_VALUES_FILE environment variable as follows:

```
PROMETHEUS_HELM_VALUES_FILE=$(mktemp)
sed "s/\${cluster_name}/$CLUSTER_NAME/g" ../templates/values.yaml.tftpl >> "$PROMETHEUS_HELM_VALUES_FILE"
```

Once installed, the values of the following metrics can be used as `averageValue` in a HorizontalPodAutoscaler (HPA):
- Jetstream metrics (i.e. any metric prefixed with "jetstream_")
- "memory_used_percentage" (the percentage of total accelerator memory used across all accelerators on a node)

### Horizontal Pod Autoscalers

The following should be run once per HPA. Ensure the following environment variables are set before running:
- ADAPTER: The adapter currently installed in the cluster, either 'custom-metrics-stackdriver-adapter' or 'prometheus-adapter'
- MIN_REPLICAS: Lower bound for the number of jetstream replicas
- MAX_REPLICAS: Upper bound for the number of jetstream replicas
- METRIC: The metric whose value will be compared against the average value; can be any metric listed above
- AVERAGE_VALUE: Average value used to calculate the desired replica count, see the [docs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details) for more details

```
if [ -z "$ADAPTER" ]; then
  echo "Must provide ADAPTER in environment" 1>&2
  exit 2
fi

if [ -z "$MIN_REPLICAS" ]; then
  echo "Must provide MIN_REPLICAS in environment" 1>&2
  exit 2
fi

if [ -z "$MAX_REPLICAS" ]; then
  echo "Must provide MAX_REPLICAS in environment" 1>&2
  exit 2
fi

if [[ "$METRIC" =~ ^jetstream_ ]]; then
  METRICS_SOURCE_TYPE="Pods"
  METRICS_SOURCE="pods"
elif [ "$METRIC" = "memory_used" ] && [ "$ADAPTER" = "custom-metrics-stackdriver-adapter" ]; then
  METRICS_SOURCE_TYPE="External"
  METRICS_SOURCE="external"
  METRIC="kubernetes.io|node|accelerator|${METRIC}"
elif [ "$METRIC" = "memory_used_percentage" ] && [ "$ADAPTER" = "prometheus-adapter" ]; then
  METRICS_SOURCE_TYPE="External"
  METRICS_SOURCE="external"
else
  echo "Must provide valid METRIC for ${ADAPTER} in environment" 1>&2
  exit 2
fi

if [ -z "$AVERAGE_VALUE" ]; then
  echo "Must provide AVERAGE_VALUE in environment" 1>&2
  exit 2
fi

echo "apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: jetstream-hpa-$(uuidgen)
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: maxengine-server
  minReplicas: ${MIN_REPLICAS}
  maxReplicas: ${MAX_REPLICAS}
  metrics:
    - type: ${METRICS_SOURCE_TYPE}
      ${METRICS_SOURCE}:
        metric:
          name: ${METRIC}
        target:
          type: AverageValue
          averageValue: ${AVERAGE_VALUE}
" | kubectl apply -f -
```
113 changes: 113 additions & 0 deletions modules/jetstream-maxtext-deployment/main.tf
@@ -0,0 +1,113 @@
/**
* Copyright 2024 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

locals {
  deployment_template               = "${path.module}/templates/deployment.yaml.tftpl"
  service_template                  = "${path.module}/templates/service.yaml.tftpl"
  podmonitoring_template            = "${path.module}/templates/podmonitoring.yaml.tftpl"
  cmsa_jetstream_hpa_template       = "${path.module}/templates/custom-metrics-stackdriver-adapter/hpa.jetstream.yaml.tftpl"
  prometheus_jetstream_hpa_template = "${path.module}/templates/prometheus-adapter/hpa.jetstream.yaml.tftpl"
}

resource "kubernetes_manifest" "jetstream-deployment" {
  count = 1
  manifest = yamldecode(templatefile(local.deployment_template, {
    maxengine_server_image      = var.maxengine_deployment_settings.maxengine_server_image
    jetstream_http_server_image = var.maxengine_deployment_settings.jetstream_http_server_image
    model_name                  = var.maxengine_deployment_settings.model_name
    load_parameters_path_arg    = var.maxengine_deployment_settings.parameters_path
    metrics_port_arg            = var.maxengine_deployment_settings.metrics_port != null ? format("prometheus_port=%d", var.maxengine_deployment_settings.metrics_port) : "",
    tpu-topology                = var.maxengine_deployment_settings.accelerator_selectors.topology
    tpu-type                    = var.maxengine_deployment_settings.accelerator_selectors.accelerator
    tpu-chip-count              = var.maxengine_deployment_settings.accelerator_selectors.chip_count
  }))
}

resource "kubernetes_manifest" "jetstream-service" {
  count    = 1
  manifest = yamldecode(file(local.service_template))
}

resource "kubernetes_manifest" "jetstream-podmonitoring" {
  count = var.maxengine_deployment_settings.metrics_port != null ? 1 : 0
  manifest = yamldecode(templatefile(local.podmonitoring_template, {
    metrics_port            = var.maxengine_deployment_settings.metrics_port != null ? var.maxengine_deployment_settings.metrics_port : "",
    metrics_scrape_interval = var.maxengine_deployment_settings.metrics_scrape_interval
  }))
}

module "custom_metrics_stackdriver_adapter" {
  count  = var.hpa_config.metrics_adapter == "custom-metrics-stackdriver-adapter" ? 1 : 0
  source = "../custom-metrics-stackdriver-adapter"
  workload_identity = {
    enabled    = true
    project_id = var.project_id
  }
}

module "prometheus_adapter" {
  count  = var.hpa_config.metrics_adapter == "prometheus-adapter" ? 1 : 0
  source = "../prometheus-adapter"
  credentials_config = {
    kubeconfig = {
      path : "~/.kube/config"
    }
  }
  project_id = var.project_id
  config_file = templatefile("${path.module}/templates/prometheus-adapter/values.yaml.tftpl", {
    cluster_name = var.cluster_name
  })
}

resource "kubernetes_manifest" "prometheus_adapter_hpa_custom_metric" {
  for_each = {
    for index, rule in var.hpa_config.rules :
    index => {
      index                = index
      target_query         = rule.target_query
      average_value_target = rule.average_value_target
    }
    if var.maxengine_deployment_settings.custom_metrics_enabled && var.hpa_config.metrics_adapter == "prometheus-adapter"
  }

  manifest = yamldecode(templatefile(local.prometheus_jetstream_hpa_template, {
    index                   = each.value.index
    hpa_type                = try(each.value.target_query, "")
    hpa_averagevalue_target = try(each.value.average_value_target, 1)
    hpa_min_replicas        = var.hpa_config.min_replicas
    hpa_max_replicas        = var.hpa_config.max_replicas
  }))
}

resource "kubernetes_manifest" "cmsa_hpa_custom_metric" {
  for_each = {
    for index, rule in var.hpa_config.rules :
    index => {
      index                = index
      target_query         = rule.target_query
      average_value_target = rule.average_value_target
    }
    if var.maxengine_deployment_settings.custom_metrics_enabled && var.hpa_config.metrics_adapter == "custom-metrics-stackdriver-adapter"
  }

  manifest = yamldecode(templatefile(local.cmsa_jetstream_hpa_template, {
    index                   = each.value.index
    hpa_type                = try(each.value.target_query, "")
    hpa_averagevalue_target = try(each.value.average_value_target, 1)
    hpa_min_replicas        = var.hpa_config.min_replicas
    hpa_max_replicas        = var.hpa_config.max_replicas
  }))
}
@@ -1,8 +1,8 @@
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: jetstream-hpa
namespace: ${namespace}
name: jetstream-hpa-${index}
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
@@ -20,12 +20,11 @@ spec:
type: AverageValue
averageValue: ${hpa_averagevalue_target}
%{ else }
- type: Pods
pods:
- type: External
external:
metric:
name: kubernetes.io|node|accelerator|memory_used
name: kubernetes.io|node|accelerator|${hpa_type}
target:
type: AverageValue
averageValue: ${hpa_averagevalue_target}
%{ endif }

%{ endif }