
Lens can't open pod or node terminal on single cluster #8128

Open
KarooolisZi opened this issue Nov 22, 2024 · 5 comments
Labels
bug Something isn't working

Comments

@KarooolisZi

KarooolisZi commented Nov 22, 2024

Describe the bug
Hello, I was upgrading EKS to 1.29 when Lens suddenly stopped opening terminals for both nodes and pods. It might be a coincidence, because during the node group upgrade I was still able to drain nodes using the node terminal in Lens up to a certain point.

After it stopped working, node terminal showed this message:
failed to open a node shell: Unable to start terminal process: CreateProcess failed

Pods terminals showed similar errors to this:

+ kubectl exec -i -t -n <ns> <pod> -c de ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.
At line:1 char:1
+ kubectl exec -i -t -n <ns> <pod> -c de ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ResourceUnavailable: (:) [], ApplicationFailedException
    + FullyQualifiedErrorId : NativeCommandFailed

Interestingly, the other clusters, which run 1.28, were fine. I was also able to run all the same commands from my local terminal against both the 1.28 and 1.29 clusters.
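Since the same exec works from a plain terminal, one way to narrow this down is to run the identical command outside Lens. A minimal sketch; `my-namespace` and `my-pod` are hypothetical placeholders, not names from this issue:

```shell
#!/bin/sh
# Placeholders -- substitute a real namespace and pod from the 1.29 cluster.
NS=my-namespace
POD=my-pod
CTX=cluster-in-question

# Print the exact command to try outside Lens. If it succeeds in a plain
# terminal but the same exec fails inside Lens, the problem is in how Lens
# spawns its terminal process, not in kubectl or the cluster.
echo "kubectl --context $CTX exec -i -t -n $NS $POD -- sh"
```

If this works standalone, the "CreateProcess failed" error points at Lens's integrated terminal rather than kubectl itself.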

I thought kubectl might be too old, so I upgraded it from 1.27 to 1.29. That didn't change anything either.

As a last step, I reinstalled the newest Lens IDE version, which did not help either.

The other clusters I manage run 1.28, and they kept working even after this started happening on 1.29.

I also got a Lens IDE informational message like this:
If terminal shell is not ready please check your shell init files, if applicable.

After trying all of these things, I also got this error:
Error occurred: Pod creation timed outfailed to open a node shell: failed to create node pod
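For context, Lens opens a node shell by creating a temporary privileged pod on the target node, so "failed to create node pod" suggests that pod never came up. A hedged sketch of inspection commands to run from a plain terminal; the `node-shell` name fragment is an assumption about Lens's pod naming, not something confirmed in this issue:

```shell
#!/bin/sh
# Assemble inspection commands for the affected cluster. The "node-shell"
# fragment is an assumption about how Lens names its node-shell pods.
CMD1="kubectl get pods -A -o wide | grep node-shell"
CMD2="kubectl get events -A --sort-by=.lastTimestamp | tail -n 20"
echo "$CMD1"
echo "$CMD2"
```

If the pod shows up Pending, the recent events usually explain why (image pull, taints, admission policy, and so on).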

To Reproduce
Steps to reproduce the behavior:

  1. Go to nodes section
  2. Click on Node shell for target node
  3. Wait with Connecting... message until error comes in
  4. See error

Expected behavior
It should open the node terminal or let you run prepared commands such as drain/cordon/uncordon.

Environment (please complete the following information):

  • Lens Version: 2024.11.131815-latest (not sure which one I used before)
  • OS: Windows 10 Enterprise, Version 22H2, OS Build 19045.5131
  • Installation method (e.g. snap or AppImage in Linux): .exe for Windows

Kubeconfig:
Quite often the problems are caused by a malformed kubeconfig that the application tries to load. Please share your kubeconfig; remember to remove any secrets and sensitive information.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1UQXlOakV5TVRJeE0xb1hEVE15TVRBeU16RXlNVEl4TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTjRhCmFsSUxjNFdWU2M3ZkY1MFA1eFNSeUwrdkl2eW5obUV1KzVXYVhIZmNpMTcyZVlqVHdiWFN5SmozbFVneWdqVkIKVWlsNk5FVFFFYllBajVEb3RreFQ3ODBvK25heTJ3dWZLL2wwUjQyZXJrTmlCMnVCWEE0OUdRTjlieHl5Yi9PbwpGTkVQaXQ5KzVUYnlJYmtUYmo4SjdOSkdUdmxOK3BLQXNCOW9qd2RKNHlNMnZITjZwMG1YRDY1S0JqMC85dm56CncxVnRXRzNaQ2VNSk1jZjhVRk5lcEk2K3BEbjU0OUNQekgwVVFUTTZjd1k1Y0ZKQko5UjEvNitUekxjbGRJWk8KMzIybFI2djRaa1hWOTJibFE1cG1nUU4xdzhCWmpVbkVoVWN1VEQxb1FweVorajAvRzU0UHNJa0dRSXdlM2tPVQpQMnlKd3ZyUDJDTWVuT25rdStrQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZMRXY0SHlzM2NHbFlLMDVBdEJtcUtaWWo4Ym9NQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBREQwQkphTGNpNDJXUlgxK244VApVVTRGZU94aTJOSHU2aG40OS9nU0VWRGF6WEMrdlZ4NXhRY0JJejNBWTMwQitnbUVGdUNDVEIydzdMeEdGL1A0CjBBeFVUTklHZ0hrUWJ6bGdKV25Da2QybzUra1hzZUtwRWxsRzArQUVXd0x1K2daNWhYVHRsMk5TVkhtaElaVnoKdlo1SStleGYxK2NBOWNyc2w1THFLQ1J6OW00TXVFMGlFQkJ2L3ZxV25peGNsaG9HR3dETUxuL0RnNmdhSjQzNgp3M1BsZ2lpa1F2T3FZSWdpMEI5NEl3MGRmZVFJaXF5c2M0cUlIRFJWSXpiM2F1MG92T0xFcUpSWmpjTEh6ZWIrCktxcXNYQWRpc1l2dUZLRzNrSVVsVFBPeXlVdTJqaTJ4dlhabFJCS29PRnpheGZYbnhjR3dvRHlnS3hBcWtLalcKNDBvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: working-cluster
  name: working-cluster
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EVXlOVEE0TURreU5sb1hEVE15TURVeU1qQTRNRGt5Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG81ClhSZGNZcXU5WUdubStwNDBibEhnbjlCQnIrUnZsSWlZb0R2RHUzZ2hhSG5lVm9vYnV3ZExYUGdhUHlTSU5XVzMKNW52eVk0dWdYTURlT0w0ZnJMT0xTMFlTNFpFK1Qrc3p6UGJuaGp4RHk1d0VxbDRSMG53UnRrZVk0R3NHVE51YwptTkY5eEVCRGRmdWdDNG5EV0N5Z0hJSVVnRVhMLzB6dENRTlNUV3NMNFJndkRGakdIQmF3RllQbUVjWU9GNXdTCi91UjIrNDUxc0tSS3RZMjVNUWFjSjNOYmp4UmlSb3dyYy82eUREcnV2QWtxdDdSRktZMnR3YXFvWVh5ZWQvS0gKamVEVnVXNmRqMTJqTHRRUFpwbDRMdk45U242Y3pGTVRkYUJvZ285QlZQZWRFMDJ5bnZQZU56Y0xlMnFGNkRJMwpicEZvT05ZbGQ2anQwVGJXUUxFQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZId1ZsT3JRWkxoSnNaMlFSQms2cmtDU29xVUtNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQmpydEpnUklrck1NUGswbWVmSgorN21xbzI5cFhlSmdJYzVBRXY2bEpFZzRjcWc0Tm9vL0xtc3FLOGh6S21pNGcyamRKR3VZRGwvVlpYQXJnVHpvCkRKK0dzSnVVYnJUMTVEZzZZZUZmUEZ2bmRRSCs2TVR3akJLSUZxU1RWclBzZXRoOEVsZTRyTG9MVEdjWnpoL1gKWFNJTjYrV01oNVU2Y2JtdS9JYWlCTERGckwyblNGcmFycUtJdmNNajB3cU9teVdFMGU3QXhlZU5JTUk1aWJxbwpEaFl2MHI0enh5MVVLZXdmY2kvVnZLUjV0c1BwMG1ORjl6dE50aVIySG52UjlvVEg3akplSitnaWMwZDQ2V3Z3CnZzditGM0RnVUhCbm9QT0FlZVhqNzVuRFZ1N0oyaGk3RDFXVU9xL1J5RUJWWmtaNnZFQXo5a25OTUttbkhCQm4KTHdBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: cluster-in-question
  name: cluster-in-question
contexts:
- context:
    cluster: working-cluster
    user: working-cluster
  name: working-cluster
- context:
    cluster: cluster-in-question
    user: cluster-in-question
  name: cluster-in-question
current-context: cluster-in-question
kind: Config
preferences: {}
users:
- name: working-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - region
      - eks
      - get-token
      - --cluster-name
      - working-cluster
      command: aws
      env:
      - name: AWS_PROFILE
        value: profile name
      interactiveMode: IfAvailable
      provideClusterInfo: false

- name: cluster-in-question
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - region
      - eks
      - get-token
      - --cluster-name
      - cluster-in-question
      command: aws
      env:
      - name: AWS_PROFILE
        value: profile name
      interactiveMode: IfAvailable
      provideClusterInfo: false
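Since both contexts authenticate through the same exec plugin, the credential step can be exercised outside Lens by running the command the exec stanza describes. A sketch that just assembles that command from the sanitized values above; `profile name` and `region` are the placeholders from the config, not real values:

```shell
#!/bin/sh
# Values copied from the sanitized kubeconfig above (placeholders).
PROFILE="profile name"
REGION="region"
CLUSTER="cluster-in-question"

# This is the token command the exec stanza makes kubectl (and Lens) run;
# if it fails or hangs in a plain terminal, the exec plugin is the problem.
CMD="AWS_PROFILE=$PROFILE aws --region $REGION eks get-token --cluster-name $CLUSTER"
echo "$CMD"
```

A slow or interactive AWS profile (e.g. one that prompts for SSO login) can stall Lens's terminal startup even when plain kubectl appears to work.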

Additional details
I checked another issue where you asked whether haproxy or a similar tool was being used. In my case, no such tools are in use.

@KarooolisZi KarooolisZi added the bug Something isn't working label Nov 22, 2024
@pasztorl

+1

@koshevka

Hello KarooolisZi,

Thank you for reaching out to Lens support!

Thank you for reporting a bug.
We are working on your issue. Stand by for further updates.

Regards,
Oleksandr from Lens

@KarooolisZi
Author

Hello @koshevka,
Thank you for the swift response!

@sm1lexops

Looks like a repeat of the "can't pod exec" bug: issues/8113

@KarooolisZi
Author

@sm1lexops the details are different.
