Draft: /messages investigation scratch pad1 #13440
Changes from 13 commits
I wonder why this is the case? I was seeing the same behavior with the Jaeger opentracing stuff. Is the UDP connection being oversaturated? Can the Jaeger agent in Docker not keep up? We see some spans come over, but never the main overarching servlet span, which is probably the last to be exported. Using the HTTP Jaeger collector endpoint instead seems to work fine for getting the whole trace.
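For reference, a minimal sketch (not from this PR) of what the workaround above looks like with jaeger-client-java: configuring the reporter to send spans to the HTTP collector endpoint rather than the default UDP agent sender. The service name, collector URL, and sampler settings here are assumptions and would need to match the actual deployment.

```java
import io.jaegertracing.Configuration;
import io.jaegertracing.Configuration.ReporterConfiguration;
import io.jaegertracing.Configuration.SamplerConfiguration;
import io.jaegertracing.Configuration.SenderConfiguration;
import io.opentracing.Tracer;

public final class TracerFactory {

    /** Builds a tracer that reports over HTTP to the collector instead of UDP to the agent. */
    public static Tracer httpCollectorTracer(String serviceName) {
        // Assumed collector endpoint; 14268 is the default Jaeger collector HTTP port.
        SenderConfiguration sender = new SenderConfiguration()
                .withEndpoint("http://localhost:14268/api/traces");

        ReporterConfiguration reporter = ReporterConfiguration.fromEnv()
                .withSender(sender)
                .withLogSpans(true);

        // Sample everything while investigating; tune for production.
        SamplerConfiguration sampler = SamplerConfiguration.fromEnv()
                .withType("const")
                .withParam(1);

        return new Configuration(serviceName)
                .withSampler(sampler)
                .withReporter(reporter)
                .getTracer();
    }
}
```

Equivalently, setting `JAEGER_ENDPOINT=http://<collector-host>:14268/api/traces` in the environment should make `Configuration.fromEnv()` pick the HTTP sender without code changes, assuming the standard jaeger-client-java environment variable handling.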