
katib-mysql pod: Access denied for user 'root'@'localhost' (using password: YES) #1212

Closed
ijackcy opened this issue Jun 9, 2020 · 21 comments


@ijackcy

ijackcy commented Jun 9, 2020

/kind bug

What steps did you take and what happened:

When I deployed Kubeflow 1.0, katib-db-manager-849b858bc8-4h8c9 and katib-mysql-7f99dfd774-gc4qp did not work properly:

katib-db-manager-849b858bc8-4h8c9 0/1 ImagePullBackOff 0 11h
katib-mysql-7f99dfd774-gc4qp 0/1 Running 1 11h

When I checked the events of katib-mysql-7f99dfd774-gc4qp:

Events:
  Type     Reason     Age                   From                Message
  ----     ------     ----                  ----                -------
  Warning  Unhealthy  80s (x4130 over 11h)  kubelet, slaver003  Readiness probe failed: mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

When I exec into katib-mysql-7f99dfd774-gc4qp:
root@katib-mysql-7f99dfd774-gc4qp:/# mysql -D ${MYSQL_DATABASE} -u root -p${MYSQL_ROOT_PASSWORD} -e 'SELECT 1'
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
root@katib-mysql-7f99dfd774-gc4qp:/# mysql -D ${MYSQL_DATABASE} -e 'SELECT 1'
ERROR 1049 (42000): Unknown database 'katib'

What did you expect to happen:

All of the pods work properly.

Anything else you would like to add:

I then tried deleting the failing pods and changing the image (to MySQL 5), but got the same error:
Access denied for user 'root'@'localhost' (using password: YES)
Environment:

  • Kubeflow version: 1.0
  • Minikube version:
  • Kubernetes version (use kubectl version): 1.15.1
  • OS (e.g. from /etc/os-release):

@gaocegege
Member

May I ask how you installed Katib?

@ijackcy
Author

ijackcy commented Jun 12, 2020

May I ask how you installed Katib?

When I installed Kubeflow, the Katib pods started automatically.

@andreyvelich
Member

@ijackcy Can you check that the MYSQL_ROOT_PASSWORD env var is set on the katib-mysql pod?
Exec into the katib-mysql pod and run echo $MYSQL_ROOT_PASSWORD.

Also, can you show the logs of, and describe, the katib-db-manager-849b858bc8-4h8c9 pod, please?
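
For example, a rough check sequence (pod names taken from this thread; adjust to your cluster):

kubectl exec -it katib-mysql-7f99dfd774-gc4qp -n kubeflow -- printenv MYSQL_ROOT_PASSWORD
kubectl logs katib-db-manager-849b858bc8-4h8c9 -n kubeflow
kubectl describe pod katib-db-manager-849b858bc8-4h8c9 -n kubeflow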

@crazy-canux

Got the same issue here.

k8s@ubuntu2:~/k8s/kubeflow$ kubectl describe pod katib-db-manager-54b64f99b-nmtzv -n kubeflow
Name:         katib-db-manager-54b64f99b-nmtzv
Namespace:    kubeflow
Priority:     0
Node:         ubuntu4/10.103.238.34
Start Time:   Thu, 16 Jul 2020 08:32:27 +0000
Labels:       app=katib
              app.kubernetes.io/component=katib
              app.kubernetes.io/instance=katib-controller-0.8.0
              app.kubernetes.io/managed-by=kfctl
              app.kubernetes.io/name=katib-controller
              app.kubernetes.io/part-of=kubeflow
              app.kubernetes.io/version=0.8.0
              component=db-manager
              pod-template-hash=54b64f99b
Annotations:  sidecar.istio.io/inject: false
Status:       Running
IP:           10.244.1.23
IPs:
  IP:           10.244.1.23
Controlled By:  ReplicaSet/katib-db-manager-54b64f99b
Containers:
  katib-db-manager:
    Container ID:  docker://1f5b973902cd2ab197809625fa3e7e7b46c7453d6add485f34d8b7f908828317
    Image:         gcr.io/kubeflow-images-public/katib/v1alpha3/katib-db-manager:v0.8.0
    Image ID:      docker-pullable://gcr.io/kubeflow-images-public/katib/v1alpha3/katib-db-manager@sha256:60ace3d4dbb66eb346ee19b5ad08a998ecf668e68cf699065aedb623c48fe767
    Port:          6789/TCP
    Host Port:     0/TCP
    Command:
      ./katib-db-manager
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Fri, 17 Jul 2020 02:03:12 +0000
      Finished:     Fri, 17 Jul 2020 02:04:13 +0000
    Ready:          False
    Restart Count:  164
    Liveness:       exec [/bin/grpc_health_probe -addr=:6789] delay=10s timeout=1s period=60s #success=1 #failure=5
    Readiness:      exec [/bin/grpc_health_probe -addr=:6789] delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      DB_NAME:      mysql
      DB_PASSWORD:  <set to the key 'MYSQL_ROOT_PASSWORD' in secret 'katib-mysql-secrets'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4rrh5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-4rrh5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4rrh5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                   From              Message
  ----     ------     ----                  ----              -------
  Normal   Pulled     35m (x157 over 13h)   kubelet, ubuntu4  Container image "gcr.io/kubeflow-images-public/katib/v1alpha3/katib-db-manager:v0.8.0" already present on machine
  Warning  Unhealthy  15m (x798 over 13h)   kubelet, ubuntu4  Readiness probe failed: timeout: failed to connect service ":6789" within 1s
  Warning  BackOff    9s (x3099 over 13h)   kubelet, ubuntu4  Back-off restarting failed container

k8s@ubuntu2:~/k8s/kubeflow$ kubectl describe pod katib-mysql-74747879d7-rmlts -n kubeflow
Name:         katib-mysql-74747879d7-rmlts
Namespace:    kubeflow
Priority:     0
Node:         ubuntu3/10.103.238.33
Start Time:   Thu, 16 Jul 2020 08:32:43 +0000
Labels:       app=katib
              app.kubernetes.io/component=katib
              app.kubernetes.io/instance=katib-controller-0.8.0
              app.kubernetes.io/managed-by=kfctl
              app.kubernetes.io/name=katib-controller
              app.kubernetes.io/part-of=kubeflow
              app.kubernetes.io/version=0.8.0
              component=mysql
              pod-template-hash=74747879d7
Annotations:  sidecar.istio.io/inject: false
Status:       Running
IP:           10.244.2.24
IPs:
  IP:           10.244.2.24
Controlled By:  ReplicaSet/katib-mysql-74747879d7
Containers:
  katib-mysql:
    Container ID:  docker://43d58cf143e185fc48efc2a5f74410cbcd45f50b779c60f728fd404825affa6d
    Image:         mysql:8
    Image ID:      docker-pullable://mysql@sha256:fe0a5b418ecf9b450d0e59062312b488d4d4ea98fc81427e3704f85154ee859c
    Port:          3306/TCP
    Host Port:     0/TCP
    Args:
      --datadir
      /var/lib/mysql/datadir
    State:          Running
      Started:      Thu, 16 Jul 2020 10:50:00 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 16 Jul 2020 10:48:37 +0000
      Finished:     Thu, 16 Jul 2020 10:49:59 +0000
    Ready:          False
    Restart Count:  1
    Liveness:       exec [/bin/bash -c mysqladmin ping -u root -p${MYSQL_ROOT_PASSWORD}] delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:      exec [/bin/bash -c mysql -D ${MYSQL_DATABASE} -u root -p${MYSQL_ROOT_PASSWORD} -e 'SELECT 1'] delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MYSQL_ROOT_PASSWORD:         <set to the key 'MYSQL_ROOT_PASSWORD' in secret 'katib-mysql-secrets'>  Optional: false
      MYSQL_ALLOW_EMPTY_PASSWORD:  true
      MYSQL_DATABASE:              katib
    Mounts:
      /var/lib/mysql from katib-mysql (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4rrh5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  katib-mysql:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  katib-mysql
    ReadOnly:   false
  default-token-4rrh5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4rrh5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From              Message
  ----     ------     ----                    ----              -------
  Warning  Unhealthy  4m23s (x5478 over 15h)  kubelet, ubuntu3  Readiness probe failed: mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

@andreyvelich
Member

@crazy-canux Can you try to exec into your mysql pod with kubectl exec -it <katib-mysql-pod-name> -n kubeflow -- /bin/bash?

@crazy-canux

crazy-canux commented Jul 18, 2020

@andreyvelich Yes, I can see the password is "test" when I execute echo $MYSQL_ROOT_PASSWORD.

@andreyvelich
Member

@crazy-canux Can you check whether the mysql DB works?
Run mysql -D ${MYSQL_DATABASE} -u root -p${MYSQL_ROOT_PASSWORD} -e 'SELECT 1' on the katib-mysql pod.

Also, if you are using a hostPath PV for mysql on your cluster, try to delete the /tmp/katib directory on the node and restart Katib.
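
For reference, a rough clean-up sequence, assuming the default hostPath location /tmp/katib and the Katib labels shown in the pod descriptions above:

# on the cluster node that backs the hostPath PV
sudo rm -rf /tmp/katib
# restart the Katib DB pods so MySQL re-initializes on the clean volume
kubectl delete pod -n kubeflow -l app.kubernetes.io/name=katib-controller,component=mysql
kubectl delete pod -n kubeflow -l app.kubernetes.io/name=katib-controller,component=db-manager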

@crazy-canux

@andreyvelich After restarting katib-mysql, the error changed.

kubectl describe pod katib-mysql-6c44769759-7w5vq -n kubeflow
Name:         katib-mysql-6c44769759-7w5vq
Namespace:    kubeflow
Priority:     0
Node:         ubuntu3/10.103.238.33
Start Time:   Wed, 22 Jul 2020 02:00:49 +0000
Labels:       app=katib
              app.kubernetes.io/component=katib
              app.kubernetes.io/instance=katib-controller-0.8.0
              app.kubernetes.io/managed-by=kfctl
              app.kubernetes.io/name=katib-controller
              app.kubernetes.io/part-of=kubeflow
              app.kubernetes.io/version=0.8.0
              component=mysql
              pod-template-hash=6c44769759
Annotations:  sidecar.istio.io/inject: false
Status:       Running
IP:           10.244.2.37
IPs:
  IP:           10.244.2.37
Controlled By:  ReplicaSet/katib-mysql-6c44769759
Containers:
  katib-mysql:
    Container ID:  docker://108ef0a18c34fd40c960218be17586a104a354cbb91830cc55f74abd6393f239
    Image:         mysql:8
    Image ID:      docker-pullable://mysql@sha256:fe0a5b418ecf9b450d0e59062312b488d4d4ea98fc81427e3704f85154ee859c
    Port:          3306/TCP
    Host Port:     0/TCP
    Args:
      --datadir
      /var/lib/mysql/datadir
      --skip-grant-tables
    State:          Running
      Started:      Wed, 22 Jul 2020 02:02:15 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Wed, 22 Jul 2020 02:00:53 +0000
      Finished:     Wed, 22 Jul 2020 02:02:13 +0000
    Ready:          False
    Restart Count:  1
    Liveness:       exec [/bin/bash -c mysqladmin ping -u root -p${MYSQL_ROOT_PASSWORD}] delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:      exec [/bin/bash -c mysql -D ${MYSQL_DATABASE} -u root -p${MYSQL_ROOT_PASSWORD} -e 'SELECT 1'] delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MYSQL_ROOT_PASSWORD:         <set to the key 'MYSQL_ROOT_PASSWORD' in secret 'katib-mysql-secrets'>  Optional: false
      MYSQL_ALLOW_EMPTY_PASSWORD:  true
      MYSQL_DATABASE:              katib
    Mounts:
      /var/lib/mysql from katib-mysql (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4rrh5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  katib-mysql:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  katib-mysql
    ReadOnly:   false
  default-token-4rrh5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4rrh5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m12s                  default-scheduler  Successfully assigned kubeflow/katib-mysql-6c44769759-7w5vq to ubuntu3
  Normal   Killing    3m18s                  kubelet, ubuntu3   Container katib-mysql failed liveness probe, will be restarted
  Normal   Pulled     2m47s (x2 over 4m9s)   kubelet, ubuntu3   Container image "mysql:8" already present on machine
  Normal   Created    2m47s (x2 over 4m9s)   kubelet, ubuntu3   Created container katib-mysql
  Normal   Started    2m46s (x2 over 4m8s)   kubelet, ubuntu3   Started container katib-mysql
  Warning  Unhealthy  2m11s (x12 over 4m1s)  kubelet, ubuntu3   Readiness probe failed: mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
  Warning  Unhealthy  2m8s (x4 over 3m38s)  kubelet, ubuntu3  Liveness probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure.
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
  Warning  Unhealthy  111s (x2 over 2m1s)  kubelet, ubuntu3  Readiness probe failed: mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1049 (42000): Unknown database 'katib'
 mysql -D ${MYSQL_DATABASE} -u root -p${MYSQL_ROOT_PASSWORD} -e 'SELECT 1'
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1049 (42000): Unknown database 'katib'

@andreyvelich
Member

@crazy-canux Did you check that the PVC is properly bound:
kubectl get pvc katib-mysql -n kubeflow?

Which volume are you using for the mysql PV?
If it is a local hostPath, did you try to delete the directory on the k8s cluster node that I mentioned above?
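
A bound claim should look roughly like this (illustrative output; volume name and storage class depend on your cluster):

$ kubectl get pvc katib-mysql -n kubeflow
NAME          STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
katib-mysql   Bound    <pv-name>   10Gi       RWO            <class>        1d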

@Utkagr

Utkagr commented Oct 5, 2020

@andreyvelich I'm facing the same error. I checked the cluster nodes and there is no /tmp/katib directory to delete in the first place.

@andreyvelich
Member

@Utkagr Can you check that the PVC was bound successfully?
How did you deploy Katib?

@Utkagr

Utkagr commented Oct 6, 2020

@andreyvelich The PVC is bound successfully, and I deployed Katib using only the Kubeflow v1.1 manifests, which is the recommended standard setup. Would you please check the logs at kubeflow/manifests#1565 (comment) and see what the potential issue could be? Thanks a lot.

@andreyvelich
Member

Thanks @Utkagr.
Try this:

  1. Add imagePullPolicy: Always in the mysql-db deployment.
    This error: 2020-09-30T08:22:02.975759Z 0 [ERROR] [MY-011947] [InnoDB] Cannot open '/var/lib/mysql/datadir/ib_buffer_pool' for reading: No such file or directory can happen when you try to upgrade the MySQL DB to a different version.
  2. Try to change /var/lib/mysql to a different location (see the sketch after this list).
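
For illustration, a hypothetical fragment of katib-mysql-deployment.yaml for point 2, based on the container spec shown in the pod descriptions above (the new path /var/lib/katib-mysql is only an example):

    containers:
      - name: katib-mysql
        image: mysql:8
        args:
          - --datadir
          - /var/lib/katib-mysql/datadir   # was /var/lib/mysql/datadir
        env:
          - name: MYSQL_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: katib-mysql-secrets
                key: MYSQL_ROOT_PASSWORD
        volumeMounts:
          - name: katib-mysql
            mountPath: /var/lib/katib-mysql   # was /var/lib/mysql
    volumes:
      - name: katib-mysql
        persistentVolumeClaim:
          claimName: katib-mysql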

If that doesn't work, try to use a hostPath PV and PVC from these manifests (a rough sketch is shown below). Then you can directly clean up the folders with old mysql data on the cluster node.
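
If you go that route, a minimal hostPath PV/PVC sketch could look like the following (the host path and size are assumptions; prefer the definitions from the linked manifests):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: katib-mysql
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/katib/mysql   # node-local directory, adjust for your cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: katib-mysql
  namespace: kubeflow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Depending on your cluster's default StorageClass, the claim may also need storageClassName: "" so it binds to this static PV instead of being dynamically provisioned.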

@Utkagr

Utkagr commented Oct 7, 2020

Thanks @Utkagr.
Try this:

  1. Add imagePullPolicy: Always in the mysql-db deployment.
    This error: 2020-09-30T08:22:02.975759Z 0 [ERROR] [MY-011947] [InnoDB] Cannot open '/var/lib/mysql/datadir/ib_buffer_pool' for reading: No such file or directory can happen when you try to upgrade the MySQL DB to a different version.
  2. Try to change /var/lib/mysql to a different location.

@andreyvelich Thanks for your comment. On trying (1), the katib-mysql pod goes into CrashLoopBackOff with these logs:

2020-10-07 04:36:04+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started.
2020-10-07 04:36:04+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-10-07 04:36:04+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.21-1debian10 started.
2020-10-07T04:36:04.684844Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.21) starting as process 1
2020-10-07T04:36:04.694295Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-10-07T04:36:05.626114Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
mysqld: Table 'mysql.plugin' doesn't exist
2020-10-07T04:36:05.819528Z 0 [ERROR] [MY-010735] [Server] Could not open the mysql.plugin table. Please perform the MySQL upgrade procedure.
2020-10-07T04:36:05.831662Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2020-10-07T04:36:05.933535Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2020-10-07T04:36:06.047445Z 0 [Warning] [MY-010015] [Repl] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2020-10-07T04:36:06.080734Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2020-10-07T04:36:06.080950Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2020-10-07T04:36:06.087502Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2020-10-07T04:36:06.088097Z 0 [Warning] [MY-010441] [Server] Failed to open optimizer cost constant tables
2020-10-07T04:36:06.088531Z 0 [ERROR] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-001146 - Table 'mysql.component' doesn't exist
2020-10-07T04:36:06.088763Z 0 [Warning] [MY-013129] [Server] A message intended for a client cannot be sent there as no client-session is attached. Therefore, we're sending the information to the error-log instead: MY-003543 - The mysql.component table is missing or has an incorrect definition.
2020-10-07T04:36:06.089738Z 0 [ERROR] [MY-010326] [Server] Fatal error: Can't open and lock privilege tables: Table 'mysql.user' doesn't exist
2020-10-07T04:36:06.090006Z 0 [ERROR] [MY-010952] [Server] The privilege system failed to initialize correctly. For complete instructions on how to upgrade MySQL to a new version please see the 'Upgrading MySQL' section from the MySQL manual.
2020-10-07T04:36:06.090643Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-10-07T04:36:07.563752Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.21)  MySQL Community Server - GPL.

On trying (2) (I changed the path to /var/lib/katib-mysql wherever required in katib-mysql-deployment.yaml), the pod is still in CrashLoopBackOff with the same logs.

If that doesn't work, try to use a hostPath PV and PVC from these manifests. Then you can directly clean up the folders with old mysql data on the cluster node.

On this point, since I'm using the manifests repo to set up Katib, I can only see this PVC definition inside the katib base directory:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: katib-mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Not sure what I should change here; apart from labels and the namespace, the configs are the same.

@Utkagr

Utkagr commented Oct 15, 2020

Resolved using https://github.com/kubeflow/katib/tree/master/manifests/v1alpha3/pv manifests. Thanks @andreyvelich

@andreyvelich
Member

@Utkagr That's good to hear!
Btw, what cluster are you using to run Kubeflow?
What is your dynamic volume provisioner?

@Utkagr

Utkagr commented Oct 16, 2020

It's rook-ceph, and we have an on-prem setup with 1 master and 3 workers.

@andreyvelich
Member

You might need some additional setup for running mysql with this provisioner.
Maybe this solution helps: #1156.

@andreyvelich
Member

Feel free to re-open the issue if you have this problem again.
