ory-hydra fatal error: mlock failed #9395

Closed

erichorwath opened this issue Sep 8, 2020 · 6 comments

Labels
area/security (Issues or PRs related to security), area/service-mesh (Issues or PRs related to service-mesh)

Comments

erichorwath commented Sep 8, 2020

Description
We created a new AWS Gardener cluster and installed Kyma with the default settings.
Node OS: Garden Linux 27.1.0 5.4.0-4-cloud-amd64 5.4.19-1 (see comments below)
Kubernetes: v1.18.5
Kyma cluster version: 1.15.0

Expected result
ory-hydra pod gets ready

Actual result
ory-hydra does not start, and we are unable to run kubectl -n default label installation/kyma-installation action=install

Troubleshooting

Logs:

kubectl logs -n kyma-system ory-hydra-567f9448b-25pk6 -c hydra
runtime: mlock of signal stack failed: 12
runtime: increase the mlock limit (ulimit -l) or
runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
fatal error: mlock failed

runtime stack:
runtime.throw(0x11774a2, 0xc)
        /usr/local/go/src/runtime/panic.go:1112 +0x72
runtime.mlockGsignal(0xc000267c80)
        /usr/local/go/src/runtime/os_linux_x86.go:72 +0x107
runtime.mpreinit(0xc000430380)
        /usr/local/go/src/runtime/os_linux.go:341 +0x78
runtime.mcommoninit(0xc000430380)
        /usr/local/go/src/runtime/proc.go:630 +0x108
runtime.allocm(0xc000081000, 0x11ddda8, 0x0)
        /usr/local/go/src/runtime/proc.go:1390 +0x14e
runtime.newm(0x11ddda8, 0xc000081000)
        /usr/local/go/src/runtime/proc.go:1704 +0x39
runtime.startm(0x0, 0xc000102a01)
        /usr/local/go/src/runtime/proc.go:1869 +0x12a
runtime.wakep(...)
        /usr/local/go/src/runtime/proc.go:1953
runtime.resetspinning()
        /usr/local/go/src/runtime/proc.go:2415 +0x93
runtime.schedule()
        /usr/local/go/src/runtime/proc.go:2527 +0x2de
runtime.park_m(0xc000267680)
        /usr/local/go/src/runtime/proc.go:2690 +0x9d
runtime.mcall(0x0)
        /usr/local/go/src/runtime/asm_amd64.s:318 +0x5b

goroutine 1 [runnable, locked to thread]:
io.copyBuffer(0x1336aa0, 0xc000415470, 0x1336b40, 0xc000252840, 0x0, 0x0, 0x0, 0xc0005655c8, 0x47327c, 0xc0002ff930)
        /usr/local/go/src/io/io.go:383 +0x390
io.Copy(...)
        /usr/local/go/src/io/io.go:364
github.com/ory/hydra/consent.bindataRead(0x1b04d20, 0x8b, 0x8b, 0x1189e09, 0x1c, 0x93e22d6700000003, 0x4fc2b4, 0x1189e09, 0x1c, 0x5c)
        /home/ory/consent/sql_migration_files.go:67 +0x1eb
github.com/ory/hydra/consent.migrationsSqlShared14SqlBytes(...)
        /home/ory/consent/sql_migration_files.go:605
github.com/ory/hydra/consent.migrationsSqlShared14Sql(0x1007680, 0xc000415170, 0x1189e09)
        /home/ory/consent/sql_migration_files.go:612 +0x6c
github.com/ory/hydra/consent.Asset(0x1189e09, 0x1c, 0xc0005658b8, 0x1, 0x1, 0xc0002ffc00, 0x2c)
        /home/ory/consent/sql_migration_files.go:988 +0xc8
github.com/ory/x/dbal.NewPackerMigrationSource(0x1375cc0, 0xc0002ff8f0, 0xc0002522c0, 0x2b, 0x2b, 0x11db960, 0xc000349980, 0x2, 0x2, 0x201, ...)
        /go/pkg/mod/github.com/ory/x@v0.0.111/dbal/migrate.go:100 +0x8ae
github.com/ory/x/dbal.NewMustPackerMigrationSource(0x1375cc0, 0xc0002ff8f0, 0xc0002522c0, 0x2b, 0x2b, 0x11db960, 0xc000349980, 0x2, 0x2, 0xc000414f01, ...)
        /go/pkg/mod/github.com/ory/x@v0.0.111/dbal/migrate.go:69 +0xb7
github.com/ory/hydra/consent.init()
        /home/ory/consent/sql_helper.go:32 +0xfbe

goroutine 17 [select]:
github.com/dgraph-io/ristretto.(*defaultPolicy).processItems(0xc000202d80)
        /go/pkg/mod/github.com/dgraph-io/ristretto@v0.0.2/policy.go:96 +0xbe
created by github.com/dgraph-io/ristretto.newDefaultPolicy
        /go/pkg/mod/github.com/dgraph-io/ristretto@v0.0.2/policy.go:80 +0x129

goroutine 20 [select]:
github.com/dgraph-io/ristretto.(*Cache).processItems(0xc0002004e0)
        /go/pkg/mod/github.com/dgraph-io/ristretto@v0.0.2/cache.go:299 +0xed
created by github.com/dgraph-io/ristretto.(*Cache).Clear
        /go/pkg/mod/github.com/dgraph-io/ristretto@v0.0.2/cache.go:293 +0xc9
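For reference, both remedies named in the log can be checked directly against the cluster. A minimal diagnostic sketch, reusing the pod name from the log above and assuming the hydra image ships a shell:

# Effective locked-memory limit inside the hydra container; error 12 in the
# log is ENOMEM, i.e. mlock ran into exactly this limit:
kubectl -n kyma-system exec ory-hydra-567f9448b-25pk6 -c hydra -- sh -c 'ulimit -l'

# Kernel release reported per node (KERNEL-VERSION column); the Go runtime
# only applies its mlock workaround when it parses this as older than
# 5.3.15 / 5.4.2 / 5.5:
kubectl get nodes -o wide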


@kubadz added the area/security and area/service-mesh labels on Sep 9, 2020
strekm (Contributor) commented Sep 17, 2020

Hello @erichorwath, I did a quick search and found heated discussions in the golang repo: golang/go#37436, which references golang/go#35777. In short, they suggest updating the host OS as the solution, so there is not much we can do on the Kyma side. Please let me know if we can close this ticket.

Cheers,
Magda
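For anyone who cannot update the host OS right away: the other remedy from the runtime's message is raising the memlock limit where the container is started, since Kubernetes itself exposes no per-pod ulimit field. A rough sketch with plain Docker, using an illustrative image tag and the in-memory DSN:

# --ulimit takes soft:hard limits; -1 means unlimited, which sidesteps the
# failing mlock call made by the Go runtime workaround (introduced in Go 1.14):
docker run -e DSN=memory --ulimit memlock=-1:-1 oryd/hydra:v1.4.6 serve all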

erichorwath (Author) commented Sep 17, 2020

Hi Magda,
As a Gardener colleague confirmed, the kernel is already updated; the version string is simply reported in a way that golang/Kubernetes misreads:

$ uname -a
Linux shoot--foo--bar-cpu-worker-z1-6476b9d667-wttfl 5.4.0-4-cloud-amd64 #1 SMP Debian 5.4.19-1 (2020-02-13) x86_64 GNU/Linux
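This matches the behaviour discussed in golang/go#37436: the runtime only sees the release string (parsed here as 5.4.0, i.e. older than 5.4.2) and cannot tell that Debian backported the fix into 5.4.19-1. A quick way to see both strings on a node:

# uname -r carries only the ABI name, which Go parses as 5.4.0:
uname -r
# /proc/version additionally embeds the Debian package version (5.4.19-1),
# showing the patched kernel is in fact already running:
cat /proc/version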

What do you think: will this Kyma error be fixed by gardenlinux/gardenlinux#128?

Thanks and best regards,
Eric

strekm (Contributor) commented Sep 17, 2020

@erichorwath gardenlinux/gardenlinux#128 should work!

strekm (Contributor) commented Sep 23, 2020

@erichorwath any news? Can we close this?

erichorwath (Author) commented Sep 23, 2020

With new Garden Linux nodes (2nd Oct 2020), Kubernetes 1.18.8, and Kyma 1.16_RC2, I can no longer reproduce this error, hence I am closing this issue.

oliverkane commented

Hey, all. I'm late to the party, but I wanted to share something I noticed.

I was using ORY's Hydra via their Helm chart, and I noticed that the default values point to a six-month-old version (1.4.6), whereas their latest release is much newer (1.8.x).

Chart values:
https://github.com/ory/k8s/blob/master/helm/charts/hydra/values.yaml#L8

Docker Hub tags:
https://hub.docker.com/r/oryd/hydra/tags

I've posted a question in the public ORY Slack channel but have yet to hear a response. If I remember, I'll post my findings here.
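Until the chart default catches up, the image tag can be overridden at install time. A sketch assuming the image.tag value path from the linked values.yaml; the target tag is illustrative, so check Docker Hub for the current release before pinning:

# Add the ORY chart repo, then pin a newer hydra image than the chart default:
helm repo add ory https://k8s.ory.sh/helm/charts
helm upgrade --install hydra ory/hydra --set image.tag=v1.8.5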
