
secrets-store-csi-driver install fails on Azure RedHat OpenShift [ARO] #363

Closed
ezYakaEagle442 opened this issue Jan 9, 2021 · 24 comments
Labels
bug Something isn't working

Comments

@ezYakaEagle442
Contributor

What steps did you take and what happened:

Following the secrets-store-csi-driver install docs, I hit a security issue specific to OpenShift related to the securityContext.

helm install csi-secrets-store-provider-azure csi-secrets-store-provider-azure/csi-secrets-store-provider-azure -n $target_namespace
helm ls -n $target_namespace -o yaml
helm status csi-secrets-store-provider-azure -n $target_namespace

Create key-Vault & Secret

az provider register -n Microsoft.KeyVault

az keyvault create --name $vault_name --enable-soft-delete true --location $location -g $rg_name
az keyvault show --name $vault_name 
az keyvault update --name $vault_name --default-action deny -g $rg_name 

kv_id=$(az keyvault show --name $vault_name -g $rg_name --query "id" --output tsv)

az keyvault secret set --name $vault_secret_name --value $vault_secret --description "CSI secret store driver - ${appName} Secret" --vault-name $vault_name
az keyvault secret list --vault-name $vault_name
az keyvault secret show --vault-name $vault_name --name $vault_secret_name --output tsv

aro_client_id=$(az aro show -n $cluster_name -g $rg_name --query 'servicePrincipalProfile.clientId' -o tsv)

Perform role assignments

az role assignment create --role Reader --assignee $aro_client_id --scope /subscriptions/$subId/resourcegroups/$rg_name/providers/Microsoft.KeyVault/vaults/$vault_name # $kv_id

az keyvault set-policy -n $vault_name --key-permissions get --spn $aro_client_id
az keyvault set-policy -n $vault_name --secret-permissions get --spn $aro_client_id
az keyvault set-policy -n $vault_name --certificate-permissions get --spn $aro_client_id

Configure & Deploy secretproviderclasses

export SUBSCRIPTION_ID=$subId
export RESOURCE_GROUP=$rg_name
export TENANT_ID=$tenantId
export KV_NAME=$vault_name
export SECRET_NAME=$vault_secret_name

envsubst < ./cnf/secrets-store-csi-provider-class.yaml > deploy/secrets-store-csi-provider-class.yaml
cat deploy/secrets-store-csi-provider-class.yaml
oc apply -f deploy/secrets-store-csi-provider-class.yaml -n $target_namespace
oc get secretproviderclasses -n $target_namespace
oc describe secretproviderclasses azure-$KV_NAME -n $target_namespace
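
For context, the `./cnf/secrets-store-csi-provider-class.yaml` template itself is not shown in this issue. Based on the exported variables above and the provider's documented schema for this version, the rendered SecretProviderClass would plausibly look like the following (treat the exact field layout as an assumption to verify against the chart docs):

```yaml
# Hypothetical rendered SecretProviderClass (schema per the v0.0.x Azure
# provider docs; values come from the exported environment variables above)
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-${KV_NAME}
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"        # Service Principal access mode
    keyvaultName: "${KV_NAME}"
    objects: |
      array:
        - |
          objectName: ${SECRET_NAME}
          objectType: secret
    resourceGroup: "${RESOURCE_GROUP}"
    subscriptionId: "${SUBSCRIPTION_ID}"
    tenantId: "${TENANT_ID}"
```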

envsubst < ./cnf/csi-demo-pod-sp.yaml > deploy/csi-demo-pod-sp.yaml
cat deploy/csi-demo-pod-sp.yaml
oc apply -f deploy/csi-demo-pod-sp.yaml -n $target_namespace

oc get po -n $target_namespace -o wide
oc get events -n $target_namespace | grep -i "Error" 
oc describe pod nginx-secrets-store-inline -n $target_namespace
oc logs nginx-secrets-store-inline -n $target_namespace
Name:         nginx-secrets-store-inline
Namespace:    staging
Priority:     0
Node:         aro-azarc-101-x7jmv-worker-westeurope1-zg27m/172.32.2.6
Start Time:   Sat, 09 Jan 2021 20:54:03 +0100
Labels:       <none>
Annotations:  openshift.io/scc: node-exporter
Status:       Pending
IP:
IPs:          <none>
Containers:
  nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/secrets-store from secrets-store-inline (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jrqj9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  secrets-store-inline:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            secrets-store.csi.k8s.io
    FSType:
    ReadOnly:          true
    VolumeAttributes:      secretProviderClass=azure-kv-azarc
  default-token-jrqj9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jrqj9
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                 From               Message
  ----     ------       ----                ----               -------
  Normal   Scheduled    17m                 default-scheduler  Successfully assigned staging/nginx-secrets-store-inline to aro-azarc-101-x7jmv-worker-westeurope1-zg27m
  Warning  FailedMount  78s (x7 over 15m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[secrets-store-inline default-token-jrqj9]: timed out waiting for the condition
  Warning  FailedMount  43s (x16 over 17m)  kubelet            MountVolume.SetUp failed for volume "secrets-store-inline" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name secrets-store.csi.k8s.io not found in the list of registered CSI drivers
oc get events -n $target_namespace | grep -i "Error" 
 Warning   FailedCreate   daemonset/csi-secrets-store-provider-azure-secrets-store-csi-driver   Error creating: pods "csi-secrets-store-provider-azure-secrets-store-csi-driver-" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[0].securityContext.containers[1].hostPort: Invalid value: 9808: Host ports are not allowed to be used spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[1].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[1].securityContext.containers[1].hostPort: Invalid value: 9808: Host ports are not allowed to be used spec.containers[2].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[2].securityContext.containers[1].hostPort: Invalid value: 9808: Host ports are not allowed to be used]
63s         Warning   FailedCreate   daemonset/csi-secrets-store-provider-azure                            Error creating: pods "csi-secrets-store-provider-azure-" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used]

What did you expect to happen:

The install should be successful when following the docs.

Anything else you would like to add:

Which access mode did you use to access the Azure Key Vault instance:
[e.g. Service Principal, Pod Identity, User Assigned Managed Identity, System Assigned Managed Identity]

I used Service Principal

Environment:

  • Secrets Store CSI Driver version: (use the image tag):
    app_version: 0.0.11
    chart: csi-secrets-store-provider-azure-0.0.15

  • Azure Key Vault provider version: (use the image tag):

  • Kubernetes version: (use kubectl version and kubectl get nodes -o wide):

Client Version: openshift-clients-4.5.0-202006231303.p0-16-g3f6a83fb7
Server Version: 4.5.16
Kubernetes Version: v1.18.3+2fbd7c7

  • Cluster type: (e.g. AKS, aks-engine, etc): Azure RedHat OpenShift [ARO]
@ezYakaEagle442 ezYakaEagle442 added the bug Something isn't working label Jan 9, 2021
@ezYakaEagle442
Contributor Author

oc describe ds csi-secrets-store-provider-azure -n $target_namespace

Name:           csi-secrets-store-provider-azure
Selector:       app=csi-secrets-store-provider-azure
Node-Selector:  kubernetes.io/os=linux
Labels:         app=csi-secrets-store-provider-azure
                app.kubernetes.io/instance=csi-secrets-store-provider-azure
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=csi-secrets-store-provider-azure
                app.kubernetes.io/version=0.0.11
                helm.sh/chart=csi-secrets-store-provider-azure-0.0.15
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=csi-secrets-store-provider-azure
                    app.kubernetes.io/instance=csi-secrets-store-provider-azure
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=csi-secrets-store-provider-azure
                    app.kubernetes.io/version=0.0.11
                    helm.sh/chart=csi-secrets-store-provider-azure-0.0.15
  Service Account:  csi-secrets-store-provider-azure
  Containers:
   provider-azure-installer:
    Image:      mcr.microsoft.com/oss/azure/secrets-store/provider-azure:0.0.11
    Port:       <none>
    Host Port:  <none>
    Args:
      --endpoint=unix:///provider/azure.sock
    Limits:
      cpu:     50m
      memory:  100Mi
    Requests:
      cpu:        50m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /provider from provider-vol (rw)
      /var/lib/kubelet/pods from mountpoint-dir (rw)
  Volumes:
   provider-vol:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/secrets-store-csi-providers
    HostPathType:
   mountpoint-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:
Events:
  Type     Reason        Age                   From                  Message
  ----     ------        ----                  ----                  -------
  Warning  FailedCreate  5m31s (x32 over 99m)  daemonset-controller  Error creating: pods "csi-secrets-store-provider-azure-" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used]
oc describe ds csi-secrets-store-provider-azure-secrets-store-csi-driver -n $target_namespace
Name:           csi-secrets-store-provider-azure-secrets-store-csi-driver
Selector:       app=secrets-store-csi-driver
Node-Selector:  kubernetes.io/os=linux
Labels:         app=secrets-store-csi-driver
                app.kubernetes.io/instance=csi-secrets-store-provider-azure
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=secrets-store-csi-driver
                app.kubernetes.io/version=0.0.18
                helm.sh/chart=secrets-store-csi-driver-0.0.18
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=secrets-store-csi-driver
                    app.kubernetes.io/instance=csi-secrets-store-provider-azure
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=secrets-store-csi-driver
                    app.kubernetes.io/version=0.0.18
                    helm.sh/chart=secrets-store-csi-driver-0.0.18
  Service Account:  secrets-store-csi-driver
  Containers:
   node-driver-registrar:
    Image:      mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.0.1
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=/var/lib/kubelet/plugins/csi-secrets-store/csi.sock
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from plugin-dir (rw)
      /registration from registration-dir (rw)
   secrets-store:
    Image:      mcr.microsoft.com/oss/kubernetes-csi/secrets-store/driver:v0.0.18
    Port:       9808/TCP
    Host Port:  9808/TCP
    Args:
      --endpoint=$(CSI_ENDPOINT)
      --nodeid=$(KUBE_NODE_NAME)
      --provider-volume=/etc/kubernetes/secrets-store-csi-providers
      --grpc-supported-providers=azure
      --rotation-poll-interval=2m
      --metrics-addr=:8080
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     50m
      memory:  100Mi
    Liveness:  http-get http://:healthz/healthz delay=30s timeout=10s period=15s #success=1 #failure=5
    Environment:
      CSI_ENDPOINT:    unix:///csi/csi.sock
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from plugin-dir (rw)
      /etc/kubernetes/secrets-store-csi-providers from providers-dir (rw)
      /var/lib/kubelet/pods from mountpoint-dir (rw)
   liveness-probe:
    Image:      mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.1.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --csi-address=/csi/csi.sock
      --probe-timeout=3s
      --health-port=9808
      -v=2
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /csi from plugin-dir (rw)
  Volumes:
   mountpoint-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  DirectoryOrCreate
   registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
   plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/csi-secrets-store/
    HostPathType:  DirectoryOrCreate
   providers-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/secrets-store-csi-providers
    HostPathType:  DirectoryOrCreate
Events:
  Type     Reason        Age                    From                  Message
  ----     ------        ----                   ----                  -------
  Warning  FailedCreate  7m51s (x32 over 102m)  daemonset-controller  Error creating: pods "csi-secrets-store-provider-azure-secrets-store-csi-driver-" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[0].securityContext.containers[1].hostPort: Invalid value: 9808: Host ports are not allowed to be used spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[1].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[1].securityContext.containers[1].hostPort: Invalid value: 9808: Host ports are not allowed to be used spec.containers[2].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[2].securityContext.containers[1].hostPort: Invalid value: 9808: Host ports are not allowed to be used]

@ezYakaEagle442
Contributor Author

I wonder if the DaemonSet should run with a ServiceAccount, so that we can configure something like:

oc create serviceaccount the-driver-sa -n default
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:default:the-driver-sa
oc adm policy add-scc-to-user hostaccess system:serviceaccount:default:the-driver-sa

?
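
For reference, the charts do create their own service accounts (visible later in the DaemonSet descriptions as csi-secrets-store-provider-azure and secrets-store-csi-driver), so a grant along these lines might apply; the namespace is assumed to be the install namespace:

```sh
# Sketch (assumption): grant the privileged SCC to the SAs created by the
# charts, in the namespace they were installed into
oc adm policy add-scc-to-user privileged \
  system:serviceaccount:$target_namespace:csi-secrets-store-provider-azure
oc adm policy add-scc-to-user privileged \
  system:serviceaccount:$target_namespace:secrets-store-csi-driver
```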

@andyzhangx

@ezYakaEagle442 the Azure Disk CSI driver also uses hostNetwork: true in its driver daemonset; does that driver work on the same env?
cc @aramase @ritazh, could you help? This issue is a little urgent, thanks.

@ezYakaEagle442
Contributor Author

@andyzhangx indeed, I am testing the CSI drivers for Azure Disk, Azure File, and Azure Blob, plus the Secrets Store driver, on the same ARO cluster; this scenario will happen in real life :)

@andyzhangx

@ezYakaEagle442 check this link: https://stackoverflow.com/questions/61239490/openshift-unable-to-validate-against-any-security-context-constraint. If the Azure Disk CSI driver works, that means there is some additional restriction in the Secrets Store driver.

@ezYakaEagle442
Contributor Author

@andyzhangx indeed, the difference is that I could rely on the Azure Disk driver SA, adding:

oc adm policy add-scc-to-user privileged system:serviceaccount:kube-system:csi-azuredisk-node-sa
oc describe scc privileged
Name:                                           privileged
Priority:                                       <none>
Access:
  Users:                                        system:admin,system:serviceaccount:openshift-infra:build-controller
  Groups:                                       system:cluster-admins,system:nodes,system:masters
Settings:
  Allow Privileged:                             true
  Allow Privilege Escalation:                   true
  Default Add Capabilities:                     <none>
  Required Drop Capabilities:                   <none>
  Allowed Capabilities:                         *
  Allowed Seccomp Profiles:                     *
  Allowed Volume Types:                         *
  Allowed Flexvolumes:                          <all>
  Allowed Unsafe Sysctls:                       *
  Forbidden Sysctls:                            <none>
  Allow Host Network:                           true
  Allow Host Ports:                             true
  Allow Host PID:                               true
  Allow Host IPC:                               true
  Read Only Root Filesystem:                    false
  Run As User Strategy: RunAsAny
    UID:                                        <none>
    UID Range Min:                              <none>
    UID Range Max:                              <none>
  SELinux Context Strategy: RunAsAny
    User:                                       <none>
    Role:                                       <none>
    Type:                                       <none>
    Level:                                      <none>
  FSGroup Strategy: RunAsAny
    Ranges:                                     <none>
  Supplemental Groups Strategy: RunAsAny
    Ranges:                                     <none>

Is there any SA running the Secrets Store Driver DaemonSet?

@andyzhangx

@ezYakaEagle442
Copy link
Contributor Author

@andyzhangx this SA has not been installed on my cluster; that's the issue. I think it could work based on the manual installer you referenced, but in my case I used the Helm install, whose templates do not seem able to create the SA before being blocked by the OpenShift SCC.

@andyzhangx

@andyzhangx this SA has not been installed on my cluster; that's the issue. I think it could work based on the manual installer you referenced, but in my case I used the Helm install, whose templates do not seem able to create the SA before being blocked by the OpenShift SCC.

So that's a bug in the Secrets Store driver Helm install?

@ezYakaEagle442
Contributor Author

@andyzhangx indeed, it looks like the install differs between the manual yaml install and the Helm install.
It is super confusing to have this site's install page referring to another site: https://github.com/kubernetes-sigs/secrets-store-csi-driver#install-the-secrets-store-csi-driver

@ezYakaEagle442
Contributor Author

helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver -n $target_namespace
oc get crd
oc get ds -n $target_namespace
oc describe ds csi-secrets-store-secrets-store-csi-driver -n $target_namespace
Name:           csi-secrets-store-secrets-store-csi-driver
Selector:       app=secrets-store-csi-driver
Node-Selector:  kubernetes.io/os=linux
Labels:         app=secrets-store-csi-driver
                app.kubernetes.io/instance=csi-secrets-store
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=secrets-store-csi-driver
                app.kubernetes.io/version=0.0.18
                helm.sh/chart=secrets-store-csi-driver-0.0.18
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=secrets-store-csi-driver
                    app.kubernetes.io/instance=csi-secrets-store
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=secrets-store-csi-driver
                    app.kubernetes.io/version=0.0.18
                    helm.sh/chart=secrets-store-csi-driver-0.0.18
  Service Account:  secrets-store-csi-driver
  Containers:
   node-driver-registrar:
    Image:      k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
    Port:       <none>
    Host Port:  <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=/var/lib/kubelet/plugins/csi-secrets-store/csi.sock
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from plugin-dir (rw)
      /registration from registration-dir (rw)
   secrets-store:
    Image:      k8s.gcr.io/csi-secrets-store/driver:v0.0.18
    Port:       9808/TCP
    Host Port:  9808/TCP
    Args:
      --endpoint=$(CSI_ENDPOINT)
      --nodeid=$(KUBE_NODE_NAME)
      --provider-volume=/etc/kubernetes/secrets-store-csi-providers
      --grpc-supported-providers=gcp;
      --metrics-addr=:8095
    Limits:
      cpu:     200m
      memory:  200Mi
    Requests:
      cpu:     50m
      memory:  100Mi
    Liveness:  http-get http://:healthz/healthz delay=30s timeout=10s period=15s #success=1 #failure=5
    Environment:
      CSI_ENDPOINT:    unix:///csi/csi.sock
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from plugin-dir (rw)
      /etc/kubernetes/secrets-store-csi-providers from providers-dir (rw)
      /var/lib/kubelet/pods from mountpoint-dir (rw)
   liveness-probe:
    Image:      k8s.gcr.io/sig-storage/livenessprobe:v2.1.0
    Port:       <none>
    Host Port:  <none>
    Args:
      --csi-address=/csi/csi.sock
      --probe-timeout=3s
      --health-port=9808
      -v=2
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /csi from plugin-dir (rw)
  Volumes:
   mountpoint-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  DirectoryOrCreate
   registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
   plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/csi-secrets-store/
    HostPathType:  DirectoryOrCreate
   providers-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/secrets-store-csi-providers
    HostPathType:  DirectoryOrCreate
Events:
  Type     Reason        Age                 From                  Message
  ----     ------        ----                ----                  -------
  Warning  FailedCreate  30s (x19 over 11m)  daemonset-controller  Error creating: pods "csi-secrets-store-secrets-store-csi-driver-" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[0].securityContext.containers[1].hostPort: Invalid value: 9808: Host ports are not allowed to be used spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[1].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[1].securityContext.containers[1].hostPort: Invalid value: 9808: Host ports are not allowed to be used spec.containers[2].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.containers[2].securityContext.containers[1].hostPort: Invalid value: 9808: Host ports are not allowed to be used]

@ezYakaEagle442
Contributor Author

@andyzhangx could you please update the Helm and yaml files, adding to the DaemonSet:

  securityContext:
    privileged: true

@ezYakaEagle442
Contributor Author

@andyzhangx as discussed, after applying the SCC to ARO I have now successfully installed the Driver + Azure KV Provider.

Now I hit a new error message:

oc describe pod nginx-secrets-store-inline -n $target_namespace
Name:         nginx-secrets-store-inline
Namespace:    staging
Priority:     0
Node:         aro-azarc-101-x7jmv-worker-westeurope2-bk6x7/172.32.2.4
Start Time:   Mon, 11 Jan 2021 13:16:06 +0100
Labels:       <none>
Annotations:  openshift.io/scc: node-exporter
Status:       Pending
IP:
IPs:          <none>
Containers:
  nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/secrets-store from secrets-store-inline (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jrqj9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  secrets-store-inline:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            secrets-store.csi.k8s.io
    FSType:
    ReadOnly:          true
    VolumeAttributes:      secretProviderClass=kv-azarc
  default-token-jrqj9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jrqj9
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    52s                default-scheduler  Successfully assigned staging/nginx-secrets-store-inline to aro-azarc-101-x7jmv-worker-westeurope2-bk6x7
  Warning  FailedMount  20s (x7 over 53s)  kubelet            MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod staging/nginx-secrets-store-inline, err: failed to find provider binary azure, err: stat /etc/kubernetes/secrets-store-csi-providers/azure/provider-azure: no such file or directory

@aramase
Member

aramase commented Jan 11, 2021

@ezYakaEagle442 From the events, it looks like azure hasn't been configured as a GRPC provider.

If you use helm to install https://github.com/Azure/secrets-store-csi-driver-provider-azure/tree/master/charts/csi-secrets-store-provider-azure#installing-the-chart, grpcSupportedProviders: azure is already set in the values.yaml.

If you are installing via individual yamls, https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/docs/install-yamls.md#install-the-secrets-store-csi-driver, please add the --grpc-supported-providers=azure flag to the secrets-store container args. If you're installing the driver using the driver helm charts, then you can set grpcSupportedProviders=azure as part of helm install.

NOTE: 0.0.9+ release of the Azure Key Vault provider is incompatible with the Secrets Store CSI Driver versions < v0.0.14. While installing the Secrets Store CSI Driver using yamls, add the following flag --grpc-supported-providers=azure to the Linux and Windows daemonset manifests.
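
Concretely, for a driver-chart install like the one shown earlier in this thread, the override would presumably be set like this (the value name is taken from the comment above; verify it against the chart version's values.yaml):

```sh
# Sketch: install/upgrade the driver chart with azure in the gRPC
# provider allow-list (value name assumed from the comment above)
helm upgrade --install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  -n $target_namespace \
  --set grpcSupportedProviders=azure
```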

@ezYakaEagle442
Contributor Author

ezYakaEagle442 commented Jan 11, 2021

@aramase as you have seen, I have opened PR-364, as I could not use the Helm chart due to the missing privilege. I then installed the yaml files:

```sh
oc apply -f ./cnf/secrets-store-csi-driver-provider-azure__provider-azure-installer.yaml
```

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-secrets-store-provider-azure
---


apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: csi-secrets-store-provider-azure
  name: csi-secrets-store-provider-azure
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: csi-secrets-store-provider-azure
  template:
    metadata:
      labels:
        app: csi-secrets-store-provider-azure
    spec:
      serviceAccountName: csi-secrets-store-provider-azure
      hostNetwork: true
      containers:
        - name: provider-azure-installer
          image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:0.0.11
          imagePullPolicy: IfNotPresent
          args:
            - --endpoint=unix:///provider/azure.sock
          lifecycle:
            preStop:
              exec:
                command:
                  - "rm /provider/azure.sock"
          resources:
            requests:
              cpu: 50m
              memory: 100Mi
            limits:
              cpu: 50m
              memory: 100Mi
          volumeMounts:
            - mountPath: "/provider"
              name: providervol
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: HostToContainer
          securityContext:
            privileged: true             
      volumes:
        - name: providervol
          hostPath:
            path: "/etc/kubernetes/secrets-store-csi-providers"
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
      nodeSelector:
        kubernetes.io/os: linux

I see grpcSupportedProviders: azure in https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/charts/csi-secrets-store-provider-azure/values.yaml, but not in the template ...

I do not see that param either at https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/deployment/provider-azure-installer.yaml

Where should I add that parameter?

@aramase
Member

aramase commented Jan 11, 2021

@ezYakaEagle442 That value is for the secrets-store-csi-driver. The helm charts in this repo have the driver charts as a dependency and take care of setting --grpc-supported-providers=azure. However, if the driver and provider are installed separately, then the --grpc-supported-providers=azure arg needs to be added manually to the secrets-store container in the driver, as documented here: https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/docs/install-yamls.md.

Sample yaml config for reference: https://github.com/kubernetes-sigs/secrets-store-csi-driver/blob/master/manifest_staging/deploy/secrets-store-csi-driver.yaml#L50

@ezYakaEagle442
Contributor Author

@aramase I am sorry but this is super confusing. I did install the secrets-store-csi-driver from the HELM Chart, and I have just double-checked that '--grpc-supported-providers=gcp;' was correctly set in the args.

I then installed the Azure Key Vault provider with yaml files as I explained above. Should I also add '--grpc-supported-providers=gcp;' in the Azure Key Vault provider DaemonSet?

@aramase
Member

aramase commented Jan 11, 2021

@ezYakaEagle442 The --grpc-supported-providers needs to contain azure. If you're using the helm charts to install the driver, please use --set grpcSupportedProviders=azure when deploying the driver with the helm chart. This arg is only required for the driver.

@aramase
Member

aramase commented Jan 11, 2021

We are making changes in the driver repo to add azure as a grpc-supported-provider by default and this will be part of next release and charts. Until the next release it requires this to be setup during install of the driver if the driver and provider installed separately.

  1. If installing the driver using helm charts from secrets-store-csi-driver, then run the following helm install command:
     helm install csi secrets-store-csi-driver/secrets-store-csi-driver --set grpcSupportedProviders=azure
  2. If installing using deployment manifests from the deploy dir in secrets-store-csi-driver, then set --grpc-supported-providers=azure in the secrets-store container args.
  3. If the chart used to install is from this repo, then that installs both the driver and the provider. We also set grpcSupportedProviders=azure in the helm values, which means no user action is required. But since the charts don't have privileged: true, this will not be applicable for you right now. With the next release of the charts after your PR is merged, you should be able to install with one helm command.

Hope that clarifies the question.
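To make the separate-install paths above concrete, here is a sketch of the helm commands (the release names and chart repo URLs are assumptions on my part — verify them against the install docs of each repo):

```sh
# Option 1: install the driver alone from its own chart, with azure as a grpc-supported provider
helm repo add secrets-store-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts
helm install csi secrets-store-csi-driver/secrets-store-csi-driver \
  --set grpcSupportedProviders=azure

# Option 3: install driver + provider together from this repo's chart
# (grpcSupportedProviders=azure is already set in its values.yaml)
helm repo add csi-secrets-store-provider-azure https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts
helm install csi-secrets-store-provider-azure csi-secrets-store-provider-azure/csi-secrets-store-provider-azure
```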

@ezYakaEagle442
Contributor Author

ezYakaEagle442 commented Jan 11, 2021

ok I missed that key information: if the chart used to install is from this repo, then that installs the driver and provider, good to know.

I did step 1, not step 2, so if I understand correctly, as step 3 I should add --grpcSupportedProviders=azure in the csi-secrets-store-provider-azure DaemonSet below. @aramase could you please confirm?

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-secrets-store-provider-azure
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: csi-secrets-store-provider-azure
  name: csi-secrets-store-provider-azure
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: csi-secrets-store-provider-azure
  template:
    metadata:
      labels:
        app: csi-secrets-store-provider-azure
    spec:
      serviceAccountName: csi-secrets-store-provider-azure
      hostNetwork: true
      containers:
        - name: provider-azure-installer
          image: mcr.microsoft.com/oss/azure/secrets-store/provider-azure:0.0.11
          imagePullPolicy: IfNotPresent
          args:
            - --endpoint=unix:///provider/azure.sock
            - --grpcSupportedProviders=azure
          lifecycle:
            preStop:
              exec:
                command:
                  - "rm /provider/azure.sock"
          resources:
            requests:
              cpu: 50m
              memory: 100Mi
            limits:
              cpu: 50m
              memory: 100Mi
          volumeMounts:
            - mountPath: "/provider"
              name: providervol
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: HostToContainer
          securityContext:
            privileged: true
      volumes:
        - name: providervol
          hostPath:
            path: "/etc/kubernetes/secrets-store-csi-providers"
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
      nodeSelector:
        kubernetes.io/os: linux
```

@aramase
Member

aramase commented Jan 11, 2021

@ezYakaEagle442 Sorry if I'm not being clear. Those are 3 different options for install, not steps. Please let me know if a call would be helpful.

From your description you installed the driver by running

helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver -n $target_namespace

You just need to add --set grpcSupportedProviders=azure to that command. This is to tell the driver to use grpc while communicating with the provider.

@ezYakaEagle442
Contributor Author

ezYakaEagle442 commented Jan 11, 2021

also I realized my mistake with '--grpc-supported-providers=gcp;', which is the default value set in the HELM Chart. By the way, it is not fair that GCP is the default value ... it should either be blank or list all providers, including Azure!

@aramase
Member

aramase commented Jan 11, 2021

> also I realized my mistake with '--grpc-supported-providers=gcp;' which is the default value set in the HELM Chart, by the way, this is not fair that GCP is the default value ... should either be blank or list all providers including Azure!

that's right! Azure provider wasn't added to support backward compatibility. However we're adding it as part of next release.

@aramase
Member

aramase commented Feb 2, 2021

Closed with #364

@aramase aramase closed this as completed Feb 2, 2021