WIP: start of config reloading #1
Conversation
```
@@ -61,6 +63,9 @@ func (hc *healthCheckExtension) Start(host extension.Host) error {
}

func (hc *healthCheckExtension) Shutdown() error {
	if !atomic.CompareAndSwapInt32(&instanceState, instanceCreated, instanceNotCreated) {
		return errors.New("only a single instance can be created per process")
	}
```
Why does this logic live in `Shutdown` vs `Start`?
I'm not a fan of this at all. I'm debating refactoring it a bit so the "only run one of me" logic isn't duplicated.
OK, so we want to clear the flag on shutdown though, not start. It's indicating whether the instance has been shut down.
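For reference, a compilable sketch of the flag handling being discussed. The names mirror the diff above, but the extension receiver and wiring are omitted, and the Start half is an assumption rather than the PR's actual code:

```go
package healthcheckextension

import (
	"errors"
	"sync/atomic"
)

const (
	instanceNotCreated int32 = 0
	instanceCreated    int32 = 1
)

// instanceState tracks whether a live instance exists in this process.
var instanceState = instanceNotCreated

// start claims the singleton slot; it fails if another live instance holds it.
func start() error {
	if !atomic.CompareAndSwapInt32(&instanceState, instanceNotCreated, instanceCreated) {
		return errors.New("only a single instance can be created per process")
	}
	return nil
}

// shutdown releases the slot so a future instance (e.g. after a config
// reload) can start again.
func shutdown() error {
	if !atomic.CompareAndSwapInt32(&instanceState, instanceCreated, instanceNotCreated) {
		return errors.New("no instance is currently running")
	}
	return nil
}
```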
OK, I'm not going to do OnConfigChange; I think that's a huge change and I'll probably need more guidance from the current maintainers. We'll find out when I open the main PR. I'm also not going to refactor the extensions: there's already an issue open for it and I'd prefer to do it there, as they want a health check rewrite as well:
service/service.go (Outdated)
```
if err != errReloadConfig {
	break
}
```
I'm not loving:
- the distance between initializing and checking `err`,
- the `for { ... if cond { break } }` pattern here.

🤔 Could we do `for err := errReloadConfig; err == errReloadConfig; { ... }`? I don't love that either 😞. I also considered `return` here instead of `break`, but that'd require sticking the stuff after the loop into a `defer` func.
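To make the two shapes concrete, here is a small self-contained sketch; `run` and the surrounding scaffolding are stand-ins for the PR's identifiers, not its actual code:

```go
package main

import "errors"

var errReloadConfig = errors.New("reload config")

// run stands in for the PR's app.run(ballast): it returns errReloadConfig when
// the collector should restart with fresh config, and nil (or a real error)
// when it is time to stop.
func run() error { return nil }

func main() {
	// Shape in the diff: infinite loop with an explicit break.
	for {
		if err := run(); err != errReloadConfig {
			break
		}
	}

	// Suggested shape: a three-clause for with an empty post statement, which
	// keeps the loop condition next to the initialization of err.
	for err := errReloadConfig; err == errReloadConfig; {
		err = run()
	}
}
```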
I could break setup and teardown into different functions, then it might be a bit cleaner.
is this cleaner? Kind of annoying as you have to pass the ballast around.
I ended up putting the for body into a function; not sure if I like it, tbh. I can try the splitting as well, though it has the same problem of passing the ballast around.
hrm I think I made it worse tbh. reworking...
service/service_test.go (Outdated)
```
}
defer srcFile.Close()

outFile, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY, 0666)
```
Nit: for consistency, this should be `dstFile`.
service/service.go (Outdated)
```
// Everything is ready, now run until an event requiring shutdown happens.
err := app.runAndWaitForShutdownEvent()
for err := app.run(ballast); err == errReloadConfig; err = app.run(ballast) {
```
what an interesting while loop
Why do you need to pass around the ballast? Can you just keep the `runtime.KeepAlive(ballast)` at the end of this function?
I assume it was there because you need to run the keep-alive during shutdown; otherwise it would have been at the end of the function.
Nope, you really just need it at the end of the function. The `runtime.KeepAlive` call is really a compiler directive that keeps the variable alive, so moving it to the end of this function is sufficient.
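A minimal sketch of the "just keep it at the end of the function" suggestion; the ballast size and surrounding structure are placeholders, not the PR's actual code:

```go
package main

import "runtime"

// execute sketches the idea: allocate the ballast once, do all the work, and
// keep a single runtime.KeepAlive at the very end of the function.
func execute() error {
	// The ballast exists only to influence GC pacing; nothing ever reads it.
	ballast := make([]byte, 64<<20) // 64 MiB, an arbitrary size for this sketch

	// ... setup, run-until-shutdown loop, and teardown would go here ...

	// runtime.KeepAlive is effectively a compiler/GC directive: it marks
	// ballast as reachable up to this point, so the collector cannot free it
	// earlier even though the code above never uses it. Because of that, the
	// ballast does not need to be threaded through run() or shutdown().
	runtime.KeepAlive(ballast)
	return nil
}

func main() { _ = execute() }
```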
ah ok. TIL
I rewrote it; I think this is pretty clear.
```
app.setupTelemetry(ballastSizeBytes)

err := errReloadConfig
for err == errReloadConfig {
```
I think this would be cleanest with a do-while(), but here's the Go version of that.
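Spelled out as a runnable sketch (again, `run` and `errReloadConfig` are stand-ins, not the PR's code), the "do-while in Go" shape looks like this:

```go
package main

import "errors"

var errReloadConfig = errors.New("reload config")

// run stands in for one full setup -> run -> teardown cycle of the collector.
func run() error { return nil }

func main() {
	// Go has no do-while, so seed err with the "go around again" sentinel and
	// loop while it still holds that value.
	err := errReloadConfig
	for err == errReloadConfig {
		err = run()
	}
	// Unlike the three-clause form, err is still in scope here, so the final
	// non-reload result can be inspected or returned after the loop.
	_ = err
}
```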
service/service.go (Outdated)
```
app.runAndWaitForShutdownEvent()

// Begin shutdown sequence.
func (app *Application) shutdown(ballast []byte) {
	runtime.KeepAlive(ballast)
```
I would stick this `runtime.KeepAlive(ballast)` call in the `execute` function, right before `AppTelemetry.shutdown()`.
ok then I don't have to pass it around.
Compare 9aa0385 to c5b3c21
Any more comments, @fbogsany? I'll push this up soon if we're good.
I moved the ballast around, though GH was being shitty with the branches so I had to squash to force a UI update.
This PR was marked stale due to lack of activity. It will be closed in 14 days.
Closed as inactive. Feel free to reopen if this PR is still being worked on.
To resolve the govulncheck reports:

```
Vulnerability #1: GO-2023-1987
    Large RSA keys can cause high CPU usage in crypto/tls
    More info: https://pkg.go.dev/vuln/GO-2023-1987
    Standard library
        Found in: crypto/tls@go1.19.11
        Fixed in: crypto/tls@go1.21rc4
        Example traces found:
Error: #1: service/internal/proctelemetry/config.go:299:27: proctelemetry.initOTLPgRPCExporter calls otlpmetricgrpc.New, which eventually calls tls.Conn.Handshake
Error: #2: service/internal/proctelemetry/config.go:156:39: proctelemetry.InitPrometheusServer calls http.Server.ListenAndServe, which eventually calls tls.Conn.HandshakeContext
Error: #3: service/service.go:251:36: service.buildResource calls uuid.NewRandom, which eventually calls tls.Conn.Read
Error: #4: service/config.go:35:13: service.Config.Validate calls fmt.Printf, which eventually calls tls.Conn.Write
Error: #5: service/telemetry/telemetry.go:32:28: telemetry.Telemetry.Shutdown calls trace.TracerProvider.Shutdown, which eventually calls tls.Dialer.DialContext
```

https://github.com/open-telemetry/opentelemetry-collector/actions/runs/5753675727/job/15597394973?pr=8144
I'm porting the old code over to the new one; I didn't want to rebase through all the errors.
Things
Once I'm done and we've reviewed, I'll push it up. Looks like someone else wants it as well:
open-telemetry#273
fyi @fbogsany @rdooley