How to improve logging and tracing support in future release? #1061
Hello @jackc, thanks for such a feature. That will help a lot. |
Is anyone working on this? I'm considering switching to the pgx interface to take advantage of the binary protocol, but if it stops us from integrating with e.g. DataDog in the future (DataDog/dd-trace-go#697) that could be a problem. |
@jackc, any progress on this? |
I have not heard of anyone working on this. |
I'm wondering how practical it would be to provide some sort of hooks or middleware chains which could be triggered before or after an operation. |
I like the idea of hooks, but there would need to be a lot more than just before / after. I could see something like:

```go
type BeforeQueryHook interface {
	BeforeQuery(...)
}
```

Every hook point could have its own interface. The logger or tracer could implement as many or as few hooks as it wanted to. This would allow adding or even removing hooks in a backwards compatible way. |
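To make the shape of that proposal concrete, here is a rough sketch of what per-hook-point interfaces and an optional implementation could look like; all names and signatures below are hypothetical, not existing pgx API:

```go
package pgxhooks

import (
	"context"
	"time"
)

// Hypothetical hook interfaces: a logger or tracer implements only the ones it cares about.
type BeforeQueryHook interface {
	BeforeQuery(ctx context.Context, sql string, args []any) context.Context
}

type AfterQueryHook interface {
	AfterQuery(ctx context.Context, sql string, err error)
}

// timingLogger opts into both hooks and measures query duration via the context.
type timingLogger struct{}

type startKey struct{}

func (timingLogger) BeforeQuery(ctx context.Context, sql string, args []any) context.Context {
	return context.WithValue(ctx, startKey{}, time.Now())
}

func (timingLogger) AfterQuery(ctx context.Context, sql string, err error) {
	if start, ok := ctx.Value(startKey{}).(time.Time); ok {
		_ = time.Since(start) // log sql, err, and the duration here
	}
}

// The driver side would type-assert at each hook point, so every hook stays optional
// and new hook interfaces can be added without breaking existing implementations.
func callBeforeQuery(ctx context.Context, h any, sql string, args []any) context.Context {
	if hook, ok := h.(BeforeQueryHook); ok {
		return hook.BeforeQuery(ctx, sql, args)
	}
	return ctx
}
```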
A couple more questions, since I'm not very familiar with the inner workings of pgx:
- Would there be a separate interface for each hook, or even for each method?
- Would transactions and pools need their own hooks?

I could try looking into a PR (if you want) but would probably need some guidance along the way. |
It would be a separate interface for each hook. But I don't think there would need to be a separate one for each method.
I don't think so. Txs and pools ultimately call methods on Conn, so hooks at the Conn level should cover them. |
Well, I guess hooks would be the best approach, but wrapping the driver is a simpler one - kind of similar to the work they did on https://github.com/uptrace/opentelemetry-go-extra/tree/main/otelsql. Tracing does work if you use stdlib. I do think that this should not be done as part of this repo, as this is an extension (wrapper) on top of the pgx driver. I know this means that it's more "brittle", but it would make things simpler to maintain (as @jackc's goals are different from "tracing"). As for the tracing engine, I believe the community should focus on OpenTelemetry (and not OC/DD/Jaeger etc.) as it's the de-facto standard for tracing. As this is a major blocker for my team and me, I will try to find the time and develop this myself. My approach would be wrapping pgx.Conn, kind of like what otelsql did. For everyone else interested, my current setup (sqlx, pgx driver [stdlib] and otelsql) ->

```go
func (cfg *PostgresConfig) String() string {
	return fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=disable", cfg.Host, cfg.Port, cfg.User, cfg.Password, cfg.DB)
}

func NewPostgres(ctx context.Context, pgcfg ...PostgresConfig) *sqlx.DB {
	cfg := newPostgresConfigDefault()
	if len(pgcfg) > 0 {
		// TODO: merge pgcfg with default values
		cfg = &pgcfg[0]
	}

	dcfg, err := pgx.ParseConfig(cfg.String())
	if err != nil {
		log.S(ctx).Fatalf("unable to parse postgres connection string: %v", err)
	}

	// Wrap the pgx stdlib connector with otelsql so database/sql calls are traced,
	// then hand the resulting *sql.DB to sqlx.
	db := otelsql.OpenDB(stdlib.GetConnector(*dcfg))
	return sqlx.NewDb(db, "pgx")
}
```

(Note that I can't use otelsqlx [uptrace] because it doesn't support OpenDB with an existing connector; this is why I use sqlx.NewDb to create the sqlx connection.)

And in my repositories I do something like:

```go
func NewPgProjectsRepository(conn *sqlx.DB) *PgProjectsRepository {
	r := &PgProjectsRepository{
		conn: conn,
		db:   db.New(conn),
	}
	return r
}
```

P.S. I'm 100% on board with replacing sqlx with pgx, but I'm not sure it covers sqlx's functionality for scanning into structs, and I don't know how to convert a pgx.Conn to stdlib to "downgrade". In one of my attempts I tried using AcquireConn / conn.Raw on stdlib, but it was kind of a dead end, since I don't know when to release the connection and I want to use it like I do a database/sql conn. |
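On the struct-scanning part of that P.S.: pgx v5 provides generic row-collection helpers (pgx.CollectRows, pgx.RowToStructByName) that cover much of what sqlx's struct scanning is used for. A minimal sketch, with a made-up projects table and struct:

```go
package repo

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// Project is a made-up struct for illustration; columns are matched to fields
// by name (or by the db struct tag) when using RowToStructByName.
type Project struct {
	ID   int64  `db:"id"`
	Name string `db:"name"`
}

func ListProjects(ctx context.Context, conn *pgx.Conn) ([]Project, error) {
	rows, err := conn.Query(ctx, `SELECT id, name FROM projects`)
	if err != nil {
		return nil, err
	}
	// CollectRows drains and closes rows, mapping each one onto a Project.
	return pgx.CollectRows(rows, pgx.RowToStructByName[Project])
}
```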
The crucial thing about any hooks-based solution as I see it is ensuring consistency of the context sent through. OpenTelemetry, and I believe its predecessor frameworks, all work by stuffing the span and correlation information into the ctx. Pgx is already great about making sure a passed-in context is preserved, but for these hooks to be able to correlate calls within the path of a single request, the right context needs to be passed in and out of the hooks in the right order. If a bit more background on what information is needed is helpful, the OpenTelemetry spec provides a list of keys for traces on database calls. Obviously these would actually be populated by an otel integration library, but hopefully this is a helpful indication of what info the hooks will want to be able to collect, beyond the basics like timings and SQL content. |
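As a concrete illustration of that point, here is a sketch of how an OpenTelemetry integration could thread a span through paired start/end hooks; the BeforeQuery/AfterQuery signatures are the same hypothetical ones discussed above, not pgx API:

```go
package otelhooks

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/codes"
	"go.opentelemetry.io/otel/trace"
)

// otelHook sketches the context threading: the span started in BeforeQuery travels
// inside the returned ctx and is recovered in AfterQuery.
type otelHook struct {
	tracer trace.Tracer
}

func NewOtelHook() *otelHook {
	return &otelHook{tracer: otel.Tracer("pgx")}
}

// BeforeQuery starts a span as a child of whatever span is already in ctx (the one
// carrying the request's trace) and returns the enriched ctx for the rest of the call.
func (h *otelHook) BeforeQuery(ctx context.Context, sql string, args []any) context.Context {
	ctx, _ = h.tracer.Start(ctx, "pgx.query",
		trace.WithSpanKind(trace.SpanKindClient),
		trace.WithAttributes(
			attribute.String("db.system", "postgresql"),
			attribute.String("db.statement", sql),
		),
	)
	return ctx
}

// AfterQuery pulls the span back out of ctx, records the outcome, and ends it.
func (h *otelHook) AfterQuery(ctx context.Context, sql string, err error) {
	span := trace.SpanFromContext(ctx)
	if err != nil {
		span.RecordError(err)
		span.SetStatus(codes.Error, err.Error())
	}
	span.End()
}
```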
any update on this topic? |
Replaces existing logging support. Package tracelog provides adapter for old style logging. #1061
I just pushed a big change to the v5 development branch. There are various interfaces a tracer can implement, e.g.:

```go
// QueryTracer traces Query, QueryRow, and Exec.
type QueryTracer interface {
	// TraceQueryStart is called at the beginning of Query, QueryRow, and Exec calls. The returned context is used for the
	// rest of the call and will be passed to TraceQueryEnd.
	TraceQueryStart(ctx context.Context, conn *Conn, data TraceQueryStartData) context.Context

	TraceQueryEnd(ctx context.Context, conn *Conn, data TraceQueryEndData)
}
```

As I mentioned in my original post, the simple logging that pgx supports in v4 has been replaced; the tracelog package provides an adapter for the old-style logging. This is the last major change planned for v5. All that to say, I'd suggest that those who care about this feature try it soon. Once v5 is released, breaking changes to this interface will be much harder to make. |
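A minimal end-to-end sketch of plugging a tracer in, as I understand the v5 API at this point (the DSN is a placeholder and durationTracer is just an example implementation):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/jackc/pgx/v5"
)

type durationTracer struct{}

type startTimeKey struct{}

func (durationTracer) TraceQueryStart(ctx context.Context, _ *pgx.Conn, data pgx.TraceQueryStartData) context.Context {
	// Stash the start time in the context that pgx hands back to TraceQueryEnd.
	log.Printf("query start: %s", data.SQL)
	return context.WithValue(ctx, startTimeKey{}, time.Now())
}

func (durationTracer) TraceQueryEnd(ctx context.Context, _ *pgx.Conn, data pgx.TraceQueryEndData) {
	start, _ := ctx.Value(startTimeKey{}).(time.Time)
	log.Printf("query end: err=%v duration=%s", data.Err, time.Since(start))
}

func main() {
	cfg, err := pgx.ParseConfig("postgres://user:pass@localhost:5432/db") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	cfg.Tracer = durationTracer{}

	conn, err := pgx.ConnectConfig(context.Background(), cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(context.Background())

	if _, err := conn.Exec(context.Background(), "SELECT 1"); err != nil {
		log.Fatal(err)
	}
}
```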
I've tried this out locally and I have no complaints so far regarding the API. Thank you @jackc! Anyone who wants to give this a spin, I've created https://github.com/exaring/otelpgx based on the v5.0.0-alpha.5 tag. |
@jackc Is the intent that users copy and paste code from an existing tracer in order to combine several of them? Or, I think it would be better if pgx accepted a slice of tracers so that multiple tracers could be registered. |
@jnst I think the tracer implementation wrapping another tracer (middleware style) makes more sense than a slice of tracers. That gives application code the ability to control which tracers run and in what order. |
@jackc
The practice of executing operations one at a time over a slice is often seen in server application interceptors. I am using Datadog in my product and am trying to add pgx support to visualize database processing times in distributed tracing.

```go
func (t *pgxTracer) TraceQueryStart(ctx context.Context, conn *pgx.Conn, data pgx.TraceQueryStartData) context.Context {
	ctx = t.logTracer.TraceQueryStart(ctx, conn, data)

	var c *pgx.ConnConfig
	if conn != nil {
		c = conn.Config()
	}
	span := t.startSpan(ctx, "query", data.SQL, c)
	if span != nil {
		return tracer.ContextWithSpan(ctx, span)
	}
	return ctx
}
```
|
To come to @jackc's defense, I don't see how these are problems, to be honest.
This might be true in your case, but loads of people don't need logs.
Wrapping middlewares is a standard procedure, used in many popular router implementations in Go.
This is an implementation detail, and I don't see how a slice of tracers would simplify it. Also, you link to a blog post, but that is just one opinion. If you look at a hugely popular post on HTTP services, wrapping is encouraged. Also chi, one of the most popular HTTP routers in the Go ecosystem, uses wrapping middleware handlers. I don't have stakes in this as I'm just a (happy) user of pgx, but I understand why the proposed design was chosen. |
Multiple tracers can be implemented in several ways. I happen to prefer the wrapping style, like the HTTP middleware mentioned above. But a slice of tracers can itself be easily implemented as a QueryTracer:

```go
type MultiQueryTracer struct {
	Tracers []QueryTracer
}

func (m *MultiQueryTracer) TraceQueryStart(ctx context.Context, conn *Conn, data TraceQueryStartData) context.Context {
	for _, t := range m.Tracers {
		ctx = t.TraceQueryStart(ctx, conn, data)
	}
	return ctx
}

func (m *MultiQueryTracer) TraceQueryEnd(ctx context.Context, conn *Conn, data TraceQueryEndData) {
	for _, t := range m.Tracers {
		t.TraceQueryEnd(ctx, conn, data)
	}
}
```
|
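And a short usage sketch for wiring such a combined tracer into a connection; it assumes a MultiQueryTracer like the one above has been declared in the application package with pgx-qualified types:

```go
package db

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// connectWithTracers wires several tracers into one connection using a
// MultiQueryTracer like the one above, assumed to be declared in this package
// with pgx-qualified types (pgx.Conn, pgx.QueryTracer, and so on).
func connectWithTracers(ctx context.Context, dsn string, tracers ...pgx.QueryTracer) (*pgx.Conn, error) {
	cfg, err := pgx.ParseConfig(dsn)
	if err != nil {
		return nil, err
	}
	cfg.Tracer = &MultiQueryTracer{Tracers: tracers}
	return pgx.ConnectConfig(ctx, cfg)
}
```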
Is anyone working on a Datadog integration? If not, I'll try to (FYI - DataDog/dd-trace-go#697 (comment)). |
Hi @jackc, I'm trying to create a tracer for New Relic right now, and trying to unit test it. For `cmd, err := conn.Exec(ctx, query, args...)` there isn't any problem. But for `rows, err := conn.Query(ctx, query, args...)` I can't see where TraceQueryEnd is called (line 624 in 3e825ec), and it isn't reached in my test. |
After more reading of the code, I found out that TraceQueryEnd is called when the rows returned by Query are closed. |
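If that is indeed the behavior, a caller (or a unit test) has to drain and close the rows before expecting the end-of-query hook to have fired; a minimal sketch of the Query path under that assumption:

```go
package example

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// queryAndTrace shows the Query path: the end-of-query hook can only fire once the
// rows are drained or closed, so always close them (closing twice is harmless).
func queryAndTrace(ctx context.Context, conn *pgx.Conn) error {
	rows, err := conn.Query(ctx, "SELECT generate_series(1, 3)")
	if err != nil {
		return err
	}
	defer rows.Close() // TraceQueryEnd should be triggered here at the latest

	for rows.Next() {
		var n int
		if err := rows.Scan(&n); err != nil {
			return err
		}
	}
	return rows.Err()
}
```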
Hi @jackc, would it be possible to get your opinion (or anyone else's on this thread) on DataDog/dd-trace-go#1537 ? |
Latest update: as dd-trace-go supports Go versions three releases back (1.17 as of now), that integration has to wait. But still, could anyone please comment on the API as well as the data collected? |
@mrkagelui I haven't used DataDog, so my opinion probably isn't super valuable, but I took a quick look and nothing in the pgx parts looked out of the ordinary. |
I'm going to close this issue since the feature was merged into v5 and v5 has now been released. For any bugs or new features we can create new issues or discussions. |
Thank you so much! Yes, I wanted to see if the data collected makes sense and whether it's the correct use of the interface and data. |
If anyone's struggling with this, I threw this together based on a comment in this issue and another (which are in the code file). This uses the slog logger interface. https://gist.github.com/zaydek/91f27cdd35c6240701f81415c3ba7c07 |
The original design goal for logging was simply to log what queries are being run and how long they take.
There have been a few requests to support additional logging / tracing points or new ways of configuring what is currently logged (#853, #998). The current logging system works fine for me, but as I'm considering the possibility of a new major release I'd like to open the door to breaking changes to the logging / tracing interface.
As I do not have a first-hand need for this I do not have any opinion on what these changes might be, and I would probably not implement them myself, but I wanted to bring up the possible opportunity for those who may have the interest and ability while there is a window open for possible breaking changes.