
.../input/entityanalytics/provider/okta: Rate limiting fix, improvements #41977

Merged

Conversation


@chrisberkhout chrisberkhout commented Dec 10, 2024

Proposed commit message

.../input/entityanalytics/provider/okta: Rate limiting fix, improvements

- Fix a bug in the stopping of requests when `x-rate-limit-remaining: 0`.
- Add a deadline so long waits return immediately as errors.
- Add an option to set a fixed request rate.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding change to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Related issues

@chrisberkhout chrisberkhout self-assigned this Dec 10, 2024
@chrisberkhout chrisberkhout requested a review from a team as a code owner December 10, 2024 19:32
@elasticmachine (Collaborator)

Pinging @elastic/security-service-integrations (Team:Security-Service Integrations)

@botelastic botelastic bot added needs_team Indicates that the issue/PR needs a Team:* label and removed needs_team Indicates that the issue/PR needs a Team:* label labels Dec 10, 2024

mergify bot commented Dec 10, 2024

This pull request does not have a backport label.
If this is a bug or security fix, could you label this PR @chrisberkhout? 🙏.
For such, you'll need to label your PR with:

  • The upcoming major version of the Elastic Stack
  • The upcoming minor version of the Elastic Stack (if you're not pushing a breaking change)

To fixup this pull request, you need to add the backport labels for the needed
branches, such as:

  • backport-8./d is the label to automatically backport to the 8./d branch, where /d is the version digit


mergify bot commented Dec 10, 2024

backport-8.x has been added to help with the transition to the new branch 8.x.
If you don't need it please use backport-skip label and remove the backport-8.x label.

@mergify mergify bot added the backport-8.x Automated backport to the 8.x branch with mergify label Dec 10, 2024
@@ -27,6 +27,7 @@ func defaultConfig() conf {
SyncInterval: 24 * time.Hour,
UpdateInterval: 15 * time.Minute,
LimitWindow: time.Minute,
LimitFixed: nil,
Contributor

This isn't needed. Is it here for explicitness?

Contributor Author

I guess I was thinking out loud. Doesn't need to be there. Removed.

Comment on lines 52 to 53
ready := make(chan struct{})
close(ready)
Contributor

Suggest something like

// alwaysReady is a non-blocking chan.
var alwaysReady = make(chan struct{})

func init() { close(alwaysReady) }

func (r RateLimiter) endpoint(path string) endpointRateLimiter {
	if existing, ok := r.byEndpoint[path]; ok {
		return existing
	}
	limit := rate.Limit(1)
	if r.fixedLimit != nil {
		limit = rate.Limit(float64(*r.fixedLimit) / r.window.Seconds())
	}
	limiter := rate.NewLimiter(limit, 1) // Allow a single fetch operation to obtain limits from the API
	newEndpointRateLimiter := endpointRateLimiter{
		limiter: limiter,
		ready:   alwaysReady,
	}
	r.byEndpoint[path] = newEndpointRateLimiter
	return newEndpointRateLimiter
}

so that we have a single never-blocking chan.

Contributor Author

Done with the name immediatelyReady.
It's true that channel will always be ready, but the endpointRateLimiter's ready field won't always have that value...
I don't know. Is this better, or the same? (or less good?)

log.Debugw("rate limit", "limit", limiter.Limit(), "burst", limiter.Burst(), "url", url.String())
return limiter.Wait(ctx)
e := r.endpoint(endpoint)
<-e.ready
Contributor

Suggested change
-	<-e.ready
+	select {
+	case <-e.ready:
+	case <-ctx.Done():
+		return ctx.Err()
+	}

Contributor Author

👍

Contributor Author

Done.

e := r.endpoint(endpoint)
<-e.ready
log.Debugw("rate limit", "limit", e.limiter.Limit(), "burst", e.limiter.Burst(), "url", url.String())
ctxWithDeadline, cancel := context.WithDeadline(ctx, time.Now().Add(waitDeadline))
Contributor

Should the deadline be calculated before the e.ready recv?

Contributor Author

I intended to apply it only when using the rate limiter, not when requests are shut down, but I'll change it to apply to both.

Contributor Author

Done.

target := 30.0
buffer := 0.01

if tokens < target-buffer || tokens > target+buffer {
Contributor

Suggested change
-	if tokens < target-buffer || tokens > target+buffer {
+	if tokens < target-buffer || target+buffer < tokens {

Contributor Author

Done.

wait := time.Since(start)

if wait > 1010*time.Millisecond {
t.Errorf("doesn't allow requests to resume after reset. had to wait %d milliseconds", wait.Milliseconds())
Contributor

Suggested change
-	t.Errorf("doesn't allow requests to resume after reset. had to wait %d milliseconds", wait.Milliseconds())
+	t.Errorf("doesn't allow requests to resume after reset. had to wait %s", wait)

Contributor Author

Done.

const endpoint = "/foo"
limiter := r.limiter(endpoint)
url, _ := url.Parse(endpoint)
Contributor

Suggested change
-	url, _ := url.Parse(endpoint)
+	url, err := url.Parse(endpoint)
+	if err != nil {
+		t.Fatalf("failed to parse endpoint: %v", err)
+	}

Contributor Author

Added.


const endpoint = "/foo"

url, _ := url.Parse(endpoint)
Contributor

Check error.

Contributor Author

Done.

@chrisberkhout
Contributor Author

@efd6 Addressed comments. Also added documentation changes that I had forgotten to commit earlier. Now also has CHANGELOG.next.asciidoc entries (1 bugfix, 1 addition/improvement).

@chrisberkhout chrisberkhout requested a review from efd6 December 11, 2024 17:31

mergify bot commented Dec 11, 2024

This pull request is now in conflicts. Could you fix it? 🙏
To fixup this pull request, you can check it out locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/

git fetch upstream
git checkout -b ea-okta-rate-limiting-operability upstream/ea-okta-rate-limiting-operability
git merge upstream/main
git push upstream ea-okta-rate-limiting-operability


@efd6 efd6 left a comment


Thanks

@chrisberkhout chrisberkhout merged commit 08b7d84 into elastic:main Dec 12, 2024
20 of 22 checks passed
mergify bot pushed a commit that referenced this pull request Dec 12, 2024
…nts (#41977)

- Fix a bug in the stopping of requests when `x-rate-limit-remaining: 0`.
- Add a deadline so long waits return immediately as errors.
- Add an option to set a fixed request rate.

(cherry picked from commit 08b7d84)
chrisberkhout added a commit that referenced this pull request Dec 12, 2024
…limiting fix, improvements (#42008)

* .../input/entityanalytics/provider/okta: Rate limiting fix, improvements (#41977)

- Fix a bug in the stopping of requests when `x-rate-limit-remaining: 0`.
- Add a deadline so long waits return immediately as errors.
- Add an option to set a fixed request rate.

(cherry picked from commit 08b7d84)

---------

Co-authored-by: Chris Berkhout <chris.berkhout@elastic.co>
michalpristas pushed a commit to michalpristas/beats that referenced this pull request Dec 13, 2024
…nts (elastic#41977)

- Fix a bug in the stopping of requests when `x-rate-limit-remaining: 0`.
- Add a deadline so long waits return immediately as errors.
- Add an option to set a fixed request rate.
Labels
backport-8.x (Automated backport to the 8.x branch with mergify), bugfix, enhancement, Team:Security-Service Integrations (Security Service Integrations Team)