[oCIS] Scaling oCIS in kubernetes causes requests to fail Timebox 8PD #8589

Open
butonic opened this issue Mar 6, 2024 · 37 comments
Labels
Priority:p2-high Escalation, on top of current planning, release blocker Type:Bug

Comments

@butonic
Member

butonic commented Mar 6, 2024

During loadtests we seem to be losing requests. We have identified several possible causes:

1. when a new pod is added it does not seem to receive traffic

This might be caused by clients not picking up the new service. One reason would be that the same grpc connection is reused. We need to make sure that every service uses a selector.Next() call to get a fresh client from the registry.
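
For illustration, getting a fresh node from the registry for every call could look roughly like this (untested sketch; import paths, strategy and service name are assumptions, not the exact reva/ocis wiring):

	import (
		"go-micro.dev/v4/registry"
		"go-micro.dev/v4/selector"
	)

	// ask the selector for a fresh node from the registry on every call instead
	// of reusing one cached grpc connection to a single address
	func nextGatewayAddress(reg registry.Registry) (string, error) {
		sel := selector.NewSelector(
			selector.Registry(reg),
			selector.SetStrategy(selector.RoundRobin),
		)
		// Select returns a Next func that yields one node per invocation
		// (the selector.Next() mentioned above)
		next, err := sel.Select("com.owncloud.api.gateway") // service name is an assumption
		if err != nil {
			return "", err
		}
		node, err := next()
		if err != nil {
			return "", err
		}
		return node.Address, nil
	}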

2. when a pod is shut down because kubernetes moves it to a different node or it is descheduled it still receives traffic

This might be caused by latency. The client got a grpc client with selector.Next() but then the pod was killed before the request reached it. We should retry requests, but the grpc built-in retry mechanism would need to know all possible service instances, and that is not how the reva pool works.

We could configure the grpc connection to retry requests:

	// "name" selects the methods this config applies to (a specific method or
	// all methods of the listed service); RetryableStatusCodes contains grpc
	// status codes (comments are not valid inside the JSON itself)
	var retryPolicy = `{
		"methodConfig": [{
			"name": [{"service": "grpc.examples.echo.Echo"}],
			"waitForReady": true,

			"retryPolicy": {
				"MaxAttempts": 4,
				"InitialBackoff": ".01s",
				"MaxBackoff": ".01s",
				"BackoffMultiplier": 1.0,
				"RetryableStatusCodes": [ "UNAVAILABLE" ]
			}
		}]
	}`

	conn, err := grpc.Dial(
		address,
		grpc.WithTransportCredentials(cred),
		grpc.WithDefaultServiceConfig(retryPolicy),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxRcvMsgSize),
		),
		grpc.WithStatsHandler(otelgrpc.NewClientHandler(
			otelgrpc.WithTracerProvider(
				options.tracerProvider,
			),
			otelgrpc.WithPropagators(
				rtrace.Propagator,
			),
		)),
	)

but it would just retry against the same IP. To actually send requests to different servers, i.e. client-side load balancing, we would have to add something like:

	// Make another ClientConn with round_robin policy.
	roundrobinConn, err := grpc.Dial(
		fmt.Sprintf("%s:///%s", exampleScheme, exampleServiceName),
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`), // This sets the initial balancing policy.
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)

The load balancing works based on name resolution.
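
For context, the exampleScheme / exampleServiceName target above only resolves if a resolver is registered for that scheme. A minimal sketch with grpc-go's manual resolver (scheme, name and addresses are made up):

	import (
		"google.golang.org/grpc/resolver"
		"google.golang.org/grpc/resolver/manual"
	)

	func init() {
		// resolve "example:///lb.example.grpc.io" to a static address list; the
		// round_robin policy then balances the ClientConn across all of them
		r := manual.NewBuilderWithScheme("example")
		r.InitialState(resolver.State{
			Addresses: []resolver.Address{
				{Addr: "10.0.0.10:9142"},
				{Addr: "10.0.0.11:9142"},
			},
		})
		resolver.Register(r)
	}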

We could add all this to the reva pool ... or we use a go micro grpc client that already implements a pool, integrates with the service registry and can do retry, backoff and whatnot. But this requires generating micro clients for the CS3 API using github.com/go-micro/generator/cmd/protoc-gen-micro

3. pod readiness and health endpoints do not reflect the actual state of the pod

Currently, the /healthz and /readyz endpoints are independent of the actual service implementation. But some services need some time to become ready or to flush all requests on shutdown. This also needs to be investigated.
For readiness we could use a channel to communicate between the actual handler and the debug handler.
And AFAIR @rhafer mentioned we need to take care of shutdown functions ... everywhere.
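
A rough sketch of the readiness channel idea (handler name and wiring are assumptions, not the current debug service code):

	import (
		"net/http"
		"sync/atomic"
	)

	// the service closes readyCh once its startup work is done; the debug
	// endpoint only starts answering 200 after that signal arrived
	func readyzHandler(readyCh <-chan struct{}) http.HandlerFunc {
		var ready atomic.Bool
		go func() {
			<-readyCh
			ready.Store(true)
		}()
		return func(w http.ResponseWriter, r *http.Request) {
			if !ready.Load() {
				http.Error(w, "not ready", http.StatusServiceUnavailable)
				return
			}
			w.WriteHeader(http.StatusOK)
		}
	}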

4. the services are needlessly split into separate pods

Instead of starting a pod for every service we should aggregate all processes that are involved in translating a request until it reaches a storage provider:

  • proxy should stay alone as it is the first service that is hit by traffic. and we may need it to shard the userbase of large instances by routing requests to a specific shard
  • frontend, webdav, ocs & graph -> gateway & auth providers are all stateless and should run in a single frontend pod
  • storage-system might go together with user and group providers
  • storage-users does the bulk of the work, so it makes sense to put it into its own pod (actually this already combines a storageprovider and a dataprovider which we should maybe even split? one is for metadata, the other for blob transfer)
  • sharing ... might even go into the frontend

The services should use localhost or even unix sockets to talk to each other. Go can very efficiently use the resources in a pod and handle requests concurrently. We really only create a ton of overhead that stresses the kubernetes APIs and can be reduced.

@rhafer
Contributor

rhafer commented Mar 6, 2024

And AFAIR @rhafer mentioned we need to take care of shutdown functions ... everywhere.

Hm, I don't remember what exactly I mentioned, but the biggest issues with shutdown were IIRC related to running ocis in single binary mode, because reva just does an os.Exit() from the signal handler of the first service that finishes handling SIGTERM/SIGQUIT/SIGINT, causing all other services to go away before finishing their shutdown, obviously.

When running as separate services there is already the possibility to do a more graceful shutdown for the reva services. By default reva does this only when shut down via SIGQUIT. When setting graceful_shutdown_timeout to something != 0 (in the reva config) the graceful shutdown can also be triggered by sending the default SIGTERM signal. AFAIK we currently only expose graceful_shutdown_timeout in ocis for the storage-users service. (For details: cs3org/reva#4072, #6840)

@wkloucek
Contributor

Please also see https://github.com/owncloud/enterprise/issues/6441:

oCIS doesn't benefit from the Kubernetes readiness probe behavior since it's not using Kubernetes Services to talk to each other. It uses the go micro service registry instead, which may or may not know about service readiness!?

For a specific Kubernetes environment with Cilium: If we could just configure hostnames / DNS names and not use the micro registry, we probably could leverage Cilium for load balancing: https://docs.cilium.io/en/stable/network/servicemesh/envoy-load-balancing/ (but it's in beta state)

Please also be aware of the "retry" concept: https://github.com/grpc/grpc-go/blob/master/examples/features/retry/README.md

@tbsbdr tbsbdr added the Priority:p2-high Escalation, on top of current planning, release blocker label Apr 15, 2024
@micbar
Contributor

micbar commented Apr 22, 2024

@butonic @kobergj @dragonchaser

I think we should start working on 2)

@butonic
Member Author

butonic commented May 28, 2024

What is the current state of this? We found a few bugs that explain why the search service was not scaling.

AFAICT we need to reevaluate this with a load test.

@butonic
Member Author

butonic commented May 31, 2024

There are two options. 1. use native GRPC mechanisms to retry. 2. generate go micro clients for the CS3 API.

I'd vote for the latter, because go micro already retries requests that time out and we want to move some services into ocis anyway.

A first step could be to generate go micro clients for the gateway so our graph service can use them to make CS3 calls against the gateway.

Another step would be to bring ocdav to ocis ... and then replace all grpc clients with go micro generated clients.

This is a ton of work. 😞

Note that using the native GRPC client and teaching it to retry services also requires configuring which calls should be retried.

Maybe we can just tell the grpc client in the reva pool to retry all requests?
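
If we go down that road, the grpc service config AFAIU accepts an empty name entry as a wildcard, so a pool-wide default could look roughly like this (untested sketch reusing address and cred from the snippet above; the numbers are placeholders):

	// an empty entry in "name" matches every service and method on this connection
	const poolRetryPolicy = `{
		"methodConfig": [{
			"name": [{}],
			"retryPolicy": {
				"MaxAttempts": 3,
				"InitialBackoff": "0.1s",
				"MaxBackoff": "1s",
				"BackoffMultiplier": 2.0,
				"RetryableStatusCodes": ["UNAVAILABLE"]
			}
		}]
	}`

	conn, err := grpc.Dial(
		address,
		grpc.WithTransportCredentials(cred),
		grpc.WithDefaultServiceConfig(poolRetryPolicy),
	)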

Then we would still have two ways of making requests ... I'm not sure if we can use the native grpc retry mechanism, because we are using a single IP address that has been resolved with a micro selector. AFAICT the grpc client cannot use DNS to find the next IP.

Two worlds are colliding here ...

💥

@butonic
Member Author

butonic commented May 31, 2024

Furthermore, I still want to be able to use an in memory transport, which we could use when embracing go micro further.

@dragonchaser
Member

Furthermore, I still want to be able to use an in memory transport, which we could use when embracing go micro further.

We have to discuss this in #9321.

@dj4oC
Contributor

dj4oC commented Jun 25, 2024

Priority increased because multiple customers are affected
\cc @dragotin

@dj4oC dj4oC added Priority:p1-urgent Consider a hotfix release with only that fix and removed Priority:p2-high Escalation, on top of current planning, release blocker labels Jun 25, 2024
@micbar
Contributor

micbar commented Jun 25, 2024

@dj4oC Can you please provide more info from the other customers too?

@dj4oC
Contributor

dj4oC commented Jun 25, 2024

The customers @grischdian & @blicknix are reporting that after kubectl patch oCIS does not work because new requests still try to reach old pods. kubectl deploy on the other hand does work, because the registration is done from scratch (new pods all over). Unfortunately we cannot export logs due to security constraints. Deployment is done with OpenShift and ArgoCD.

@butonic
Member Author

butonic commented Jun 25, 2024

um

# kubectl deploy
error: unknown command "deploy" for "kubectl"

Did you mean apply?

What MICRO_REGISTRY is configured?

@butonic butonic self-assigned this Jun 26, 2024
@butonic
Member Author

butonic commented Jun 26, 2024

@dj4oC @grischdian @blicknix the built-in nats in the ocis helm chart cannot be scaled. You have to keep the replica count at 1. If you need a redundant deployment, use a dedicated nats cluster.

Running multiple nats instances from the ocis chart causes a split-brain situation where service lookups might return stale data. This is related to kubernetes scale up / down, but we tackled scale up and should pick up new pods properly.

This issue is tracking scale down problems, which we can address by retrying calls. Unfortunately, that is a longer path because we need to touch a lot of code.

kubectl apply vs kubectl patch vs argocd are a different issue.

@blicknix

We only have one nats pod in the environment as it is only a dev environment. So no split brain.
MICRO_REGISTRY is nats-js-kv

@butonic
Member Author

butonic commented Jun 27, 2024

I think I have found a way to allow using the native grpc-go thick client round robin load balancing with the dns:/// resolver and kubernetes headless services by taking the transport in the service metadata into account. It requires reading the transport from the service and registering services with a configurable transport.

This works without ripping out the go micro service registry, but we need to test these changes with helm charts that use headless services and configure the grpc protocol to be dns.

🤔

hm and we may have to register the service with its domain name ... not the external ip ... urgh ... needs more work.
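
A sketch of what the client side would look like with a headless service (service name, namespace and port are placeholders):

	// a headless service (clusterIP: None) makes the cluster DNS return one A
	// record per ready pod; dns:/// plus round_robin balances across all of them
	conn, err := grpc.Dial(
		"dns:///ocis-gateway-headless.ocis.svc.cluster.local:9142",
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)

Note that the grpc-go dns resolver only re-resolves when a connection breaks (with a minimum interval of roughly 30s), so new pods are only picked up once existing connections go away, which is where a server-side max connection age comes in.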

@butonic
Member Author

butonic commented Jun 28, 2024

@wkloucek @d7oc what were the problems when ocis was using the kubernetes service registry? AFAIK etcd was under heavy load.

@dragonchaser mentioned that it is possible to set up an etcd per namespace to shard the load.

when every school uses ~40 pods and every pod registers a watcher on the kubernetes api (provided by etcd) and re-registers itself every 30 sec, that does create some load. I don't know if the go micro kubernetes registry subscribes to ALL pod events or if it is even possible to only receive events for a single namespace. I can imagine that if every pod change needs to be propagated to every watcher, that might cause load problems.

So if you can shed some light on why the kubernetes registry was 'bad' I'd be delighted.

@butonic
Member Author

butonic commented Jun 28, 2024

Our current guess is that the go micro kubernetes registry was registering services in the default namespace because of a bug. When testing with a single instance in a cluster things would be fine ... deploying more than one would break the deployment because services from multiple instances would 'see' each other. Which would explain the high load on the kubernetes API where every ocis pod is watching every ocis pod in every school ... 😞

@wkloucek
Contributor

wkloucek commented Jul 1, 2024

I'd honestly refuse to use the "Kubernetes go-micro registry" in production even if you address some points that you described above.

I would not use it, since it introduces a tight coupling between the Kubernetes Control Plane and the workload (in this case oCIS). During Kubernetes Cluster operations (eg. updating Kubernetes itself or the infra below, especially with a setup like Gardener https://gardener.cloud), you may have situations where the Control Plane is "down" / the Kubernetes API is unreachable for some minutes. The workers / kubelets / containers in the CRI will keep running unchanged.

If you're using the "Kubernetes go-micro service registry" in this case, your workload will also be down once the cache TTL is reached, since no more communication to the Kubernetes API is possible.

If you use eg. NATS as a go-micro service registry, it'll continue running and a control plane / Kubernetes API downtime will have zero impact (as long as there are no node failures, load changes, ...)

EDIT, just as an addition: the cluster DNS will also keep working while the Kubernetes API is down. So using DNS for service discovery is a valid way to go from my point of view.

@wkloucek
Contributor

wkloucek commented Jul 1, 2024

Maybe @grischdian & @blicknix you could share your Kubernetes API availability / downtimes, too?

I guess you don't have 99.999% (26s downtime in a month) Kubernetes API availability, right?

@wkloucek
Contributor

wkloucek commented Jul 4, 2024

I don't know how #9535 may be related here

@tbsbdr tbsbdr removed the Priority:p1-urgent Consider a hotfix release with only that fix label Jul 8, 2024
@tbsbdr tbsbdr added Priority:p2-high Escalation, on top of current planning, release blocker Status:On-Hold labels Jul 8, 2024
@grischdian

Well, since we are only the "user" of the OpenShift cluster we have no numbers on the availability. I am working on this issue in parallel to figure out if Argo is the reason. But what I can confirm: we have not scaled nats in the environment. I will come back with an update later today.

@micbar
Contributor

micbar commented Jul 8, 2024

@butonic is still on vacation.

We will have no progress on this within this week.

@wkloucek
Contributor

wkloucek commented Jul 9, 2024

well since we are only the "user" of the openshift we have no numbers on the availability

Having no SLA for the control plane would be an argument for me to not use the "Kubernetes" service registry. If the control plane had a 5 minute downtime, this would create a roughly equal oCIS downtime. Especially if you have no control over WHEN the control plane maintenance is performed, this might be a blocker, since it might conflict with your SLAs for the oCIS workload. Having a service registry component like NATS for the "nats-js-kv" service registry running on the Kubernetes workers provides a good separation between workload and control plane.

@wkloucek
Contributor

wkloucek commented Jul 15, 2024

Looking at #9535 might explain some things:

when a new pod is added it does not seem to receive traffic

yeah, because only one pod might receive load ever, because the registry only holds one registered service instance

when a pod is shut down because kubernetes moves it to a different node or it is descheduled it still receives traffic

yeah, because only one service instance is known at all. So next() will always use the same one until the TTL expires

@butonic
Member Author

butonic commented Jul 22, 2024

ok, to double check we reproduced the broken nats-js-kv registry behaviour:

  • deploy ocis with a helm chart configured like this
releases:
  - name: ocis
    chart: ../../charts/ocis
    namespace: ocis
    values:
      - image:
          repository: owncloud/ocis
          tag: "5.0.6"
      - externalDomain: cloud.khal.localdomain
      - features:
          basicAuthentication: true
          demoUsers: true
      - ingress:
          enabled: true
          tls:
            - secretName: ocis-dev-tls
              hosts:
                - cloud.khal.localdomain

      - logging:
          level: debug

      - insecure:
          oidcIdpInsecure: true
          ocisHttpApiInsecure: true

      - services:
          idm:
            persistence:
              enabled: true
              accessModes:
                - ReadWriteOnce

          nats:
            persistence:
              enabled: true
              accessModes:
                - ReadWriteOnce

          search:
            persistence:
              enabled: true
              accessModes:
                - ReadWriteOnce

          storagesystem:
            persistence:
              enabled: true
              accessModes:
                - ReadWriteOnce

          storageusers:
            persistence:
              enabled: true
              accessModes:
                - ReadWriteOnce

          store:
            persistence:
              enabled: true
              accessModes:
                - ReadWriteOnce

          thumbnails:
            persistence:
              enabled: true
              accessModes:
                - ReadWriteOnce

          web:
            persistence:
              enabled: true
              accessModes:
                - ReadWriteOnce

continuously PROPFIND einstein's home like this (replace {storageid} with your storage id):

go run ./ocis/cmd/ocis benchmark client -u einstein:relativity -k  https://cloud.khal.localdomain/dav/spaces/{storageid}

watch gateway logs to see which pod receives the traffic:

kubectl -n ocis logs -l 'app=gateway' --prefix -f | grep '/Stat' | cut -d' ' -f1

increase the number of replicas for the gateway deployment to 3 and observe the above log output.

in 5.0.6 all requests remain on the same pod.

@butonic
Member Author

butonic commented Jul 22, 2024

with ocis-rolling@master upscaling picks up new pods and distributes the load to the gateway pods properly.

when scaling down we see intermittent 401 and some 500 responses to the PROPFIND for ~10 sec. then all requests return 207 again.

Note that during those ~10 seconds there is not a single 207, presumably because the auth services cannot reach the gateway either, which explains the short 4ms 401 responses.

we will verify and dig into the scale down tomorrow ...

@micbar micbar changed the title Scaling oCIS in kubernetes causes requests to fail [oCIS] Scaling oCIS in kubernetes causes requests to fail Jul 29, 2024
@micbar micbar changed the title [oCIS] Scaling oCIS in kubernetes causes requests to fail [oCIS] Scaling oCIS in kubernetes causes requests to fail Timebox 8PD Jul 29, 2024
@butonic
Member Author

butonic commented Jul 31, 2024

For reference: the server side connection management with GRPC_MAX_CONNECTION_AGE in oCIS and reva follows the upstream design: https://github.com/grpc/proposal/blob/master/A9-server-side-conn-mgt.md#implementation
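
On the server side this boils down to the grpc keepalive server parameters, roughly like this (values are placeholders):

	import (
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/keepalive"
	)

	func newGRPCServer() *grpc.Server {
		// close client connections after ~5 minutes (grpc-go adds jitter) so
		// clients reconnect, re-resolve and start hitting newly added pods too
		return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
			MaxConnectionAge:      5 * time.Minute,
			MaxConnectionAgeGrace: 10 * time.Second,
		}))
	}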

@butonic
Member Author

butonic commented Aug 1, 2024

so now I see the storageusers pods being OOM killed as described in #9656 (comment)

I currently think we are running into kubernetes/kubernetes#43916 (comment)

edit: @wkloucek pointed out that we have to disable multipart uploads because they allocate too much memory:

              driver: s3ng
              driverConfig:
                s3ng:
                  metadataBackend: messagepack
                  endpoint: ...
                  region: ...
                  bucket: ...
                  putObject:
                    # -- Disable multipart uploads when copying objects to S3
                    disableMultipart: true

now tests are more stable:

k6 run ~/cdperf/packages/k6-tests/artifacts/koko-platform-000-mixed-ramping-k6.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: /root/cdperf/packages/k6-tests/artifacts/koko-platform-000-mixed-ramping-k6.js
     output: -

  scenarios: (100.00%) 8 scenarios, 75 max VUs, 6m30s max duration (incl. graceful stop):
           * add_remove_tag_100: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: add_remove_tag_100, gracefulStop: 30s)
           * create_remove_group_share_090: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: create_remove_group_share_090, gracefulStop: 30s)
           * create_space_080: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: create_space_080, gracefulStop: 30s)
           * create_upload_rename_delete_folder_and_file_040: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: create_upload_rename_delete_folder_and_file_040, gracefulStop: 30s)
           * download_050: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: download_050, gracefulStop: 30s)
           * navigate_file_tree_020: Up to 10 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: navigate_file_tree_020, gracefulStop: 30s)
           * sync_client_110: Up to 20 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: sync_client_110, gracefulStop: 30s)
           * user_group_search_070: Up to 20 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: user_group_search_070, gracefulStop: 30s)


     ✓ authn -> loginPageResponse - status
     ✓ authn -> authorizationResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> search.searchForSharees - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.getResourceProperties - status
     ✓ client -> application.createDrive - status
     ✓ client -> resource.createResource - status
     ✓ client -> drive.deactivateDrive - status
     ✓ client -> tag.getTags - status -- (SKIPPED)
     ✓ client -> tag.createTag - status -- (SKIPPED)
     ✓ client -> resource.downloadResource - status
     ✓ client -> drive.deleteDrive - status
     ✗ client -> tag.addTagToResource - status
      ↳  0% — ✓ 0 / ✗ 47
     ✗ client -> share.createShare - status
      ↳  0% — ✓ 0 / ✗ 44
     ✗ client -> tag.removeTagToResource - status
      ↳  0% — ✓ 0 / ✗ 47
     ✗ client -> share.deleteShare - status
      ↳  0% — ✓ 0 / ✗ 44
     ✓ client -> resource.deleteResource - status
     ✓ client -> resource.uploadResource - status
     ✓ client -> resource.moveResource - status

     checks.........................: 95.97% ✓ 4343     ✗ 182 
     data_received..................: 1.7 GB 4.5 MB/s
     data_sent......................: 812 MB 2.1 MB/s
     http_req_blocked...............: avg=897.71µs min=201ns   med=271ns    max=50.13ms p(90)=581ns    p(95)=954ns   
     http_req_connecting............: avg=311.9µs  min=0s      med=0s       max=19.64ms p(90)=0s       p(95)=0s      
     http_req_duration..............: avg=319.32ms min=2.53ms  med=275.79ms max=9.91s   p(90)=483.04ms p(95)=689.99ms
       { expected_response:true }...: avg=326.99ms min=2.53ms  med=280.14ms max=9.91s   p(90)=495.38ms p(95)=699.66ms
     http_req_failed................: 3.90%  ✓ 182      ✗ 4474
     http_req_receiving.............: avg=7.25ms   min=25.73µs med=94.36µs  max=1.18s   p(90)=198.97µs p(95)=50.19ms 
     http_req_sending...............: avg=1.6ms    min=27.66µs med=92.49µs  max=1s      p(90)=157.35µs p(95)=189.69µs
     http_req_tls_handshaking.......: avg=569.49µs min=0s      med=0s       max=29.77ms p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=310.46ms min=2.45ms  med=269.36ms max=9.29s   p(90)=473.95ms p(95)=655.22ms
     http_reqs......................: 4656   12.11429/s
     iteration_duration.............: avg=6.74s    min=1.18s   med=1.31s    max=46.38s  p(90)=15.65s   p(95)=18.41s  
     iterations.....................: 3419   8.895781/s
     vus............................: 1      min=0      max=75
     vus_max........................: 75     min=75     max=75


running (6m24.3s), 00/75 VUs, 3419 complete and 2 interrupted iterations
add_remove_tag_100             ✓ [======================================] 0/5 VUs    6m0s
create_remove_group_share_090  ✓ [======================================] 0/5 VUs    6m0s
create_space_080               ✓ [======================================] 0/5 VUs    6m0s
create_upload_rename_delete... ✓ [======================================] 0/5 VUs    6m0s
download_050                   ✓ [======================================] 0/5 VUs    6m0s
navigate_file_tree_020         ✓ [======================================] 00/10 VUs  6m0s
sync_client_110                ✓ [======================================] 00/20 VUs  6m0s
user_group_search_070          ✓ [======================================] 00/20 VUs  6m0s

hmmm but I still got a kill:

Memory cgroup out of memory: Killed process 3030326 (ocis) total-vm:2363552kB, anon-rss:100068kB, file-rss:62856kB, shmem-rss:0kB, UID:1000 pgtables:576kB oom_score_adj:997

@butonic
Member Author

butonic commented Aug 1, 2024

setting concurrentStreamParts: false also does not fix this ...

@butonic
Member Author

butonic commented Aug 1, 2024

forcing a guaranteed memory limit by setting it to the same value as the request also does not stop kubernetes from OOMKilling things

          storageusers:
            resources:
              limits:
                memory: 600Mi
              requests:
                cpu: 100m
                memory: 600Mi

it might not be the storage users pod ... I need to better understand the events:

┌────────────────────────────────────────────────────────────────── Events(default)[70] ──────────────────────────────────────────────────────────────────┐
│ LAST SEEN↑       TYPE          REASON                                   OBJECT                                                             COUNT        │
│ 2m49s            Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-pv8pb        1            │
│ 4m16s            Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-j8xbn        1            │
│ 16m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-pv8pb        1            │
│ 20m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-ncxpg        1            │
│ 21m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-ncxpg        1            │
│ 22m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-j8xbn        1            │
│ 23m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-ncxpg        1            │
│ 24m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-7q2hn        1            │
│ 34m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-7q2hn        1            │
│ 35m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-2trtl        1            │
│ 35m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-2trtl        1            │
│ 35m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-2trtl        1            │
│ 35m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-7q2hn        1            │
│ 35m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-7q2hn        1            │
│ 36m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-2trtl        1            │
│ 36m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-2trtl        1            │
│ 36m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-7q2hn        1            │
│ 36m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-7q2hn        1            │
│ 36m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-hrhjd        1            │
│ 36m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-hrhjd        1            │
│ 38m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-2trtl        1            │
│ 38m              Warning       OOMKilling                               node/shoot--420505--de-lasttest-worker-icb8m-z1-74fcc-7q2hn        1            │

Also running the tests with more than 150 VUs fails ... I need to check if enough users are available ...

@butonic
Member Author

butonic commented Aug 2, 2024

The nats-js-kv registry still seems broken. We tried disabling the cache but still see old IPs show up ... 😞

@butonic
Member Author

butonic commented Aug 2, 2024

hm a micro Selector always uses a cache with a default TTL of 1 minute:

// NewSelector creates a new default selector.
func NewSelector(opts ...Option) Selector {
	sopts := Options{
		Strategy: Random,
	}

	for _, opt := range opts {
		opt(&sopts)
	}

	if sopts.Registry == nil {
		sopts.Registry = registry.DefaultRegistry
	}

	s := &registrySelector{
		so: sopts,
	}
	s.rc = s.newCache()

	return s
}

and we use that selector at least in our proxy/pkg/router/router.go:

	reg := registry.GetRegistry()
	sel := selector.NewSelector(selector.Registry(reg))

@butonic
Member Author

butonic commented Aug 6, 2024

we fixed more issues

  1. with the natsjskv registry: Nats registry fixes #9740
  2. with the proxy registering a nats watcher for every host in the configured routes: use less selectors #9741
  3. with the natsjskv store watcher implementation not sending deletes: do not try to unmarshal on deletes kobergj/plugins#1

@butonic
Member Author

butonic commented Aug 6, 2024

These fixes bring us to a more reasonable load test. Sharing and tagging seem broken, though. Tagging is a known issue AFAIK, but sharing used to work.

# k6 run ~/cdperf/packages/k6-tests/artifacts/koko-platform-000-mixed-ramping-k6.js

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: /root/cdperf/packages/k6-tests/artifacts/koko-platform-000-mixed-ramping-k6.js
     output: -

  scenarios: (100.00%) 8 scenarios, 75 max VUs, 6m30s max duration (incl. graceful stop):
           * add_remove_tag_100: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: add_remove_tag_100, gracefulStop: 30s)
           * create_remove_group_share_090: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: create_remove_group_share_090, gracefulStop: 30s)
           * create_space_080: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: create_space_080, gracefulStop: 30s)
           * create_upload_rename_delete_folder_and_file_040: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: create_upload_rename_delete_folder_and_file_040, gracefulStop: 30s)
           * download_050: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: download_050, gracefulStop: 30s)
           * navigate_file_tree_020: Up to 10 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: navigate_file_tree_020, gracefulStop: 30s)
           * sync_client_110: Up to 20 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: sync_client_110, gracefulStop: 30s)
           * user_group_search_070: Up to 20 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: user_group_search_070, gracefulStop: 30s)


     ✓ authn -> loginPageResponse - status
     ✓ authn -> authorizationResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> search.searchForSharees - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.getResourceProperties - status
     ✓ client -> application.createDrive - status
     ✓ client -> resource.createResource - status
     ✓ client -> drive.deactivateDrive - status
     ✓ client -> drive.deleteDrive - status
     ✗ client -> share.createShare - status
      ↳  0% — ✓ 0 / ✗ 43
     ✗ client -> share.deleteShare - status
      ↳  0% — ✓ 0 / ✗ 43
     ✓ client -> resource.deleteResource - status
     ✓ client -> tag.getTags - status -- (SKIPPED)
     ✓ client -> tag.createTag - status -- (SKIPPED)
     ✗ client -> tag.addTagToResource - status
      ↳  0% — ✓ 0 / ✗ 47
     ✗ client -> tag.removeTagToResource - status
      ↳  0% — ✓ 0 / ✗ 47
     ✓ client -> resource.uploadResource - status
     ✓ client -> resource.moveResource - status
     ✓ client -> resource.downloadResource - status

     checks.........................: 95.86% ✓ 4170      ✗ 180 
     data_received..................: 1.7 GB 4.4 MB/s
     data_sent......................: 812 MB 2.1 MB/s
     http_req_blocked...............: avg=1.02ms   min=210ns   med=271ns    max=301.35ms p(90)=581ns    p(95)=1.1µs   
     http_req_connecting............: avg=321.74µs min=0s      med=0s       max=19.94ms  p(90)=0s       p(95)=0s      
     http_req_duration..............: avg=415.27ms min=2.56ms  med=364.1ms  max=16.18s   p(90)=624.35ms p(95)=762.88ms
       { expected_response:true }...: avg=421.78ms min=2.56ms  med=367.96ms max=16.18s   p(90)=630.45ms p(95)=765.31ms
     http_req_failed................: 4.01%  ✓ 180       ✗ 4301
     http_req_receiving.............: avg=10.98ms  min=28.73µs med=92.38µs  max=13.61s   p(90)=212.02µs p(95)=49.96ms 
     http_req_sending...............: avg=1.6ms    min=26.89µs med=91.06µs  max=635.84ms p(90)=163.94µs p(95)=201.16µs
     http_req_tls_handshaking.......: avg=681.32µs min=0s      med=0s       max=284.48ms p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=402.68ms min=2.43ms  med=362.09ms max=8.7s     p(90)=601.63ms p(95)=744ms   
     http_reqs......................: 4481   11.489714/s
     iteration_duration.............: avg=7.11s    min=1.27s   med=1.4s     max=46.95s   p(90)=15.92s   p(95)=18.7s   
     iterations.....................: 3246   8.323056/s
     vus............................: 1      min=0       max=75
     vus_max........................: 75     min=73      max=75


running (6m30.0s), 00/75 VUs, 3246 complete and 3 interrupted iterations
add_remove_tag_100             ✓ [======================================] 0/5 VUs    6m0s
create_remove_group_share_090  ✓ [======================================] 0/5 VUs    6m0s
create_space_080               ✓ [======================================] 0/5 VUs    6m0s
create_upload_rename_delete... ✓ [======================================] 0/5 VUs    6m0s
download_050                   ✓ [======================================] 0/5 VUs    6m0s
navigate_file_tree_020         ✓ [======================================] 00/10 VUs  6m0s
sync_client_110                ✓ [======================================] 00/20 VUs  6m0s
user_group_search_070          ✓ [======================================] 00/20 VUs  6m0s

@butonic butonic removed their assignment Aug 6, 2024
@butonic
Member Author

butonic commented Aug 6, 2024

The next steps for this are:

  • try a bigger load test. the login of the environment does not seem to be prepared for 750 VUs. might be a scaling problem, might be users not being provisioned ...
  • why are the sharing tests not working? all of them fail, so it is not a scaling issue
  • why are the tags tests not working? all of them fail, so it is not a scaling issue

before we can close this issue we need to evaluate how the 1h load tests behave. before the login problems are fixed this is blocked.

@wkloucek
Contributor

wkloucek commented Aug 6, 2024

try a bigger load test. the login of the environment does not seem to be prepared for 750 VUs. might be a scaling problem, might be users not being provisioned ...

According to https://github.com/owncloud-koko/deployment-documentation/tree/main/development/loadtest/de-environment each loadtest school has 5000 users configured.

Also Keycloak should be scaled the same as the one on PROD (CPU / RAM).

The Realm settings regarding brokering, etc should differ though because we don't really have another IDM that we can broker.

@butonic
Member Author

butonic commented Aug 15, 2024

we need to backport all natsjskv registry fixes from #8589 (comment) to stable5.

@butonic butonic mentioned this issue Sep 2, 2024
16 tasks
@butonic
Member Author

butonic commented Sep 10, 2024

I backported the nats-js-kv registry fixes in #10019
quite a bit ...

Projects
Status: blocked