
net/http instrumentation (eBPF probe) is not working #45

Open
ocampeau opened this issue Nov 24, 2022 · 6 comments
ocampeau commented Nov 24, 2022

Expected Behavior

I deployed the emojivoto example in my local Kubernetes cluster. The emojivoto services, which use gRPC, are reporting traces to the Jaeger backend. This indicates, at least at first glance, that the gRPC instrumentation in opentelemetry-go-instrumentation is working correctly.

When I deploy my own demo web server, built on the standard net/http package, I expect opentelemetry-go-instrumentation to detect each request's path and method and to send a trace to the OTel collector.

Actual Behavior

None of that happens: I do not see any traces in Jaeger.

I can confirm that my demo web server, called hello, is working properly, since I am able to communicate with it via curl. The logs also show that the hello server is handling requests.

Here is a copy of my demo server's Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: emojivoto
  labels:
    app.kubernetes.io/name: hello
    app.kubernetes.io/part-of: emojivoto
    app.kubernetes.io/version: v11
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-svc
      version: v11
  template:
    metadata:
      labels:
        app: hello-svc
        version: v11
    spec:
      shareProcessNamespace: true
      terminationGracePeriodSeconds: 0
      initContainers:
        - name: copy-launcher
          image: keyval/launcher:v0.1
          command:
            - cp
            - -a
            - /kv-launcher/.
            - /odigos-launcher/
          volumeMounts:
            - name: launcherdir
              mountPath: /odigos-launcher
      containers:
        - image: MY_DEMO_IMAGE
          name: hello-svc
          command:
            - /odigos-launcher/launch
            - /hello
          volumeMounts:
            - mountPath: /odigos-launcher
              name: launcherdir
          ports:
            - containerPort: 4444
              name: http
          resources:
            requests:
              cpu: 100m
        - name: emojivoto-web-instrumentation
          image: keyval/otel-go-agent:v0.6.0
          env:
            - name: OTEL_TARGET_EXE
              value: /hello
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "jaeger:4317"
            - name: OTEL_SERVICE_NAME
              value: "emojivoto-web"
          securityContext:
            runAsUser: 0
            capabilities:
              add:
                - SYS_PTRACE
            privileged: true
          volumeMounts:
            - mountPath: /sys/kernel/debug
              name: kernel-debug
      volumes:
        - name: launcherdir
          emptyDir: {}
        - name: kernel-debug
          hostPath:
            path: /sys/kernel/debug
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  namespace: emojivoto
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: hello-svc

I can also confirm that the instrumentation sidecar discovers my hello server's process, because the logs say so:

{"level":"info","ts":1669306652.8174577,"caller":"cli/main.go:22","msg":"starting Go OpenTelemetry Agent ..."}
{"level":"info","ts":1669306652.817512,"caller":"opentelemetry/controller.go:92","msg":"Establishing connection to OpenTelemetry collector ..."}
{"level":"info","ts":1669306654.818036,"caller":"process/discover.go:42","msg":"found process","pid":13}
{"level":"info","ts":1669306654.818234,"caller":"process/analyze.go:73","msg":"found addr of keyval map","addr":140388705247232}
{"level":"info","ts":1669306654.8208957,"caller":"process/analyze.go:136","msg":"found relevant function for instrumentation","function":"net/http.(*ServeMux).ServeHTTP","returns":2}
{"level":"info","ts":1669306654.8209321,"caller":"cli/main.go:64","msg":"target process analysis completed","pid":13,"go_version":"1.18.1","dependencies":{},"total_functions_found":1}
{"level":"info","ts":1669306654.8209662,"caller":"cli/main.go:70","msg":"invoking instrumentors"}
{"level":"info","ts":1669306654.8252027,"logger":"allocator","caller":"allocator/allocator_linux.go:19","msg":"Loading allocator","start_addr":140388705247232,"end_addr":140388717830144}
{"level":"info","ts":1669306654.826382,"caller":"instrumentors/runner.go:68","msg":"loading instrumentor","name":"net/http"}
{"level":"info","ts":1669306654.826874,"caller":"inject/injector.go:67","msg":"Injecting variables","vars":{"ctx_ptr_pos":232,"is_registers_abi":true,"method_ptr_pos":0,"path_ptr_pos":56,"url_ptr_pos":16}}
{"level":"info","ts":1669306654.8707218,"caller":"instrumentors/runner.go:77","msg":"loaded instrumentors to memory","total_instrumentors":1}

Furthermore, every time I send a request to my hello server, opentelemetry-go-instrumentation detects the request and writes a log line to the console, but the request's path and method are empty, as we can see here:

{"level":"info","ts":1669306673.0694988,"caller":"opentelemetry/controller.go:44","msg":"got event","attrs":[{"Key":"http.method","Value":{"Type":"STRING","Value":""}},{"Key":"http.target","Value":{"Type":"STRING","Value":""}}]}
{"level":"info","ts":1669306673.9885561,"caller":"opentelemetry/controller.go:44","msg":"got event","attrs":[{"Key":"http.method","Value":{"Type":"STRING","Value":""}},{"Key":"http.target","Value":{"Type":"STRING","Value":""}}]}
{"level":"info","ts":1669306674.4386296,"caller":"opentelemetry/controller.go:44","msg":"got event","attrs":[{"Key":"http.method","Value":{"Type":"STRING","Value":""}},{"Key":"http.target","Value":{"Type":"STRING","Value":""}}]}
{"level":"info","ts":1669306674.734222,"caller":"opentelemetry/controller.go:44","msg":"got event","attrs":[{"Key":"http.method","Value":{"Type":"STRING","Value":""}},{"Key":"http.target","Value":{"Type":"STRING","Value":""}}]}
{"level":"info","ts":1669306674.9497232,"caller":"opentelemetry/controller.go:44","msg":"got event","attrs":[{"Key":"http.method","Value":{"Type":"STRING","Value":""}},{"Key":"http.target","Value":{"Type":"STRING","Value":""}}]}
{"level":"info","ts":1669306796.7762065,"caller":"opentelemetry/controller.go:44","msg":"got event","attrs":[{"Key":"http.method","Value":{"Type":"STRING","Value":""}},{"Key":"http.target","Value":{"Type":"STRING","Value":""}}]}

After some digging, I found that the eBPF probe for net/http seems broken. When debugging with the eBPF trace pipe, I can see that in the return uprobe (uprobe_ServerMux_ServeHTTP_Returns), reading the *http.Request pointer from CPU register rdi at the end of the function does not yield a valid pointer. That is plausible: argument registers are generally clobbered by the time a function returns.

In other words, using bpf_printk() I can print the request's method in the entry probe (uprobe_ServerMux_ServeHTTP) and it works:

u64 request_pos = 4;
struct http_request_t httpReq = {};
httpReq.start_time = bpf_ktime_get_boot_ns();

// Get request struct
void* req_ptr = get_argument(ctx, request_pos);

// Get method from request
void* method_ptr = 0;
bpf_probe_read(&method_ptr, sizeof(method_ptr), (void *)(req_ptr+method_ptr_pos));
u64 method_len = 0;
bpf_probe_read(&method_len, sizeof(method_len), (void *)(req_ptr+(method_ptr_pos+8)));
u64 method_size = sizeof(httpReq.method);
method_size = method_size < method_len ? method_size : method_len;
bpf_probe_read(&httpReq.method, method_size, method_ptr);
bpf_printk("method is %s\n", httpReq.method); // this prints the correct method to /sys/kernel/debug/tracing/trace_pipe

But when I add the same code to the return probe (uprobe_ServerMux_ServeHTTP_Returns), the method is an empty string, and the same goes for the request's path.

Also, there seems to be an issue with the switch to using context addresses as eBPF map keys instead of goroutine IDs: the context address differs between the entry and return probes. This is likely because the idiomatic way to attach values to a request's context in Go is to derive a new context (and a new request) rather than mutate the existing one, so the derived context lives at a different memory address. The current implementation does not seem to take that into account.

I would gladly work on this and submit a PR if I can get more details on why the net/http instrumentation is failing. Is it supposed to work? Is it not fully implemented yet? (It did work a few months ago, when the goroutine-based instrumentation was in place, if I remember correctly.)

Let me know and I can lend a hand.

Steps to Reproduce the Problem

  1. Create a simple web server using the net/http library
  2. Deploy the server with opentelemetry-go-instrumentation as explained in the docs/getting-started guide
  3. Look in the Jaeger UI and see that there are no traces

Additional Info

  • Kubernetes version: v1.24.0

    Output of kubectl version:

WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:38:19Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
  • Operating system / kernel version:

    Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release: Ubuntu 22.04 LTS

    Output of uname -r: 5.15.0-53-generic


edeNFed commented Nov 28, 2022

Thank you for the detailed explanation!
I think a good solution may be replacing the following line
https://github.com/keyval-dev/opentelemetry-go-instrumentation/blob/c4799f7d5fbedcd7533daf0e4103449c6ed156e5/pkg/instrumentors/bpf/net/http/server/bpf/probe.bpf.c#L84
with

void* httpReq_ptr = find_context_in_map(ctx_iface, &context_to_http_events);

The method find_context_in_map handles this case precisely: it walks up the context tree until it finds a parent context present in the provided map (you can see similar usage in the gRPC server instrumentor).
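Conceptually (sketched in Go rather than eBPF C, with made-up names — node, findInMap, events — standing in for the real context struct and BPF map), the lookup walks parent pointers until it hits a key that was recorded at entry time:

```go
package main

import "fmt"

// node stands in for a Go context: each derived context points at its parent.
type node struct {
	parent *node
}

// findInMap walks up the parent chain until it finds a node that was
// recorded in events at entry time; it returns nil if none is found.
func findInMap(n *node, events map[*node]string) (*node, string) {
	for ; n != nil; n = n.parent {
		if v, ok := events[n]; ok {
			return n, v
		}
	}
	return nil, ""
}

func main() {
	root := &node{}                // context seen by the entry probe
	child := &node{parent: root}   // context derived inside the handler
	events := map[*node]string{root: "entry-event"}

	if hit, v := findInMap(child, events); hit == root {
		fmt.Println("found parent entry:", v)
	}
}
```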

Please let me know if this fix works, I'll be happy to merge a fix.

ocampeau commented
Thanks for your reply. Sure, I'll look into that, and once I get something working I'll send a PR.


hnhbwlp commented Jan 3, 2023

@edeNFed Is there any new progress on this issue? I have encountered the same problem.


edeNFed commented Jan 4, 2023

Hi @hnhbwlp, this fix will be released in the next version (probably next week).


mariomac commented Feb 22, 2023

I tried the approach from PR #48 and it didn't succeed in my case. How is get_argument_by_stack supposed to work on Go 1.17+, where arguments are passed in registers?


mariomac commented Mar 2, 2023

This fix in the homonymous repo worked for me: open-telemetry/opentelemetry-go-instrumentation#34
