Support gRPC for query service #1307
Conversation
model/proto/api_v2.proto
Outdated
@@ -64,6 +64,22 @@ service CollectorService {
  }
}

message GetTraceRequest {
This proto file is getting very busy. Could you try to move the query service to api_v2_query.proto? Or, even better, api_v2/query.proto?
Sure, will move this out.
Thanks for the review! @yurishkuro
I first wanted to ensure all the necessary proto definitions are in place.
model/proto/api_v2.proto
Outdated
message GetTraceResponse {
  Trace trace = 1;
}
check the discussion in #1323. We have an internal grpc implementation of GetTrace, and our users have run into the issue of large traces exceeding grpc default message size. So I think we need to go with streaming APIs for methods that can return large data sets.
I am not sure what it would do to grpc_gateway, how does it map streaming calls to HTTP?
Also, it's been a while since the last time I checked, but if gRPC for JavaScript in the browser has advanced enough, then we won't even need grpc_gateway.
> I am not sure what it would do to grpc_gateway, how does it map streaming calls to HTTP?
Seems like they have an open issue for sending chunked JSON responses: grpc-ecosystem/grpc-gateway#562
> Also, it's been a while since the last time I checked, but if gRPC for JavaScript in the browser has advanced enough, then we won't even need grpc_gateway.
Apart from streaming-support (which is on their roadmap) what other features would we need in the grpc-web library?
Update: Went through the grpc-web readme again, and it mentions that server-side streaming is already supported. This makes it a candidate to be considered for the query UI.
Great! We can always put a blocking/sync facade in front of grpc service if streaming to web is not supported, but since it's there already, we should go with streaming for the main API.
Force-pushed from 287fb43 to 2d276fa (compare)
I would like a second review round, @yurishkuro. Changes in the last commit:

Todo:
model/proto/api_v2/query.proto
Outdated
// FIXME: Pass all TraceQueryParameters in SearchRequest?
message SearchRequest {
  string query = 1 [
Question: Should parameters be parsed from a string query, or should the SearchRequest have all TraceQueryParameters, as in jaeger/storage/spanstore/interface.go (lines 45 to 54 at 65a499a):
type TraceQueryParameters struct {
	ServiceName   string
	OperationName string
	Tags          map[string]string
	StartTimeMin  time.Time
	StartTimeMax  time.Time
	DurationMin   time.Duration
	DurationMax   time.Duration
	NumTraces     int
}
I guess the former, because protobuf supports just basic data types and not time/maps etc.?
it should be structured, as in #1323
Thanks! I've made the changes.
please try to match the naming & ordering with https://github.com/jaegertracing/jaeger/pull/1323/files#diff-ac4433b4ef3a403411d1d1b5141eb9ccR121
Signed-off-by: Annanay <annanay.a@media.net>
Thanks for the review @yurishkuro. I've addressed comments and updated.
Signed-off-by: Yuri Shkuro <ys@uber.com>
Codecov Report
@@ Coverage Diff @@
## master #1307 +/- ##
==========================================
- Coverage 99.83% 99.81% -0.03%
==========================================
Files 179 180 +1
Lines 8557 8631 +74
==========================================
+ Hits 8543 8615 +72
- Misses 7 8 +1
- Partials 7 8 +1
Continue to review full report at Codecov.
Thanks for the changes @yurishkuro! About tests, should I use a mock query service (as in the current commit) or an instance of query service with mock span readers / dependency readers?
mock QS is fine
Force-pushed from b3b446b to 86d770b (compare)
@yurishkuro could you please review this? I think the last remaining lines from coverage are stream sends, which I'm not sure how to test.
🎉 There is a bit of a clean-up I'd like to do in the test, I think it's leaking goroutines, but didn't want to block the PR on that.
@annanay25 post-merge comments / improvements
)

var (
	grpcServerPort = ":14251"
we should be using the :0 port
go func() {
	err := grpcServer.Serve(lis)
	require.NoError(t, err)
}()
goroutine leak? Or will defer server.Stop() in the tests stop it?
	return grpcServer, addr, readStorage, dependencyStorage
}

func initializeTestServerGRPCWithOptions(t *testing.T) (*grpc.Server, net.Addr, *spanstoremocks.Reader, *depsmocks.Reader, *spanstoremocks.Reader, *spanstoremocks.Writer) {
a) any reason not to merge this with the above function?
b) instead of returning many objects it's better to return a single struct
c) to avoid server life cycle management, we typically do this:

func withGRPCServer(t *testing.T, actualTest func(server *server)) {
	server := ... create ...
	defer server.close() // other life cycle mgmt
	actualTest(server)
}
Booked #1483
	Return(mockTrace, nil).Once()

client, conn := newGRPCClient(t, addr)
defer conn.Close()
similar to above: withServerAndClient(t *testing.T, actualTest func(server *server, client *client)); this way it would remove a lot of boilerplate code from the tests
grpcL := s.Match(
	cmux.HTTP2HeaderField("content-type", "application/grpc"),
	cmux.HTTP2HeaderField("content-type", "application/grpc+proto"))
httpL := s.Match(cmux.Any())
Booked #1482 to refactor this out of main
Thanks @yurishkuro! 🎊 I've been traveling but should get a chance to look at this soon :) Edit: We should now revisit the implementation roadmap in #773
Which problem is this PR solving?

- Support gRPC for the query-service.

Short description of the changes

- Adds the GetTrace() endpoint.

Signed-off-by: Annanay annanay.a@media.net