feat: Provide option to retry request on rateLimitExceeded errors #809
Comments
@tswast Are you good with this?
I think this would have to be done at the DB-API layer, and I'd actually like to retry queries by default there. Maybe it could also work on the query job, but it's trickier there: we can't retry if the user has set a job ID. Also, we'd need to track whether they've explicitly set a destination table or whether the destination table was set by the API (in which case we need to unset it before retrying the query).
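The retry shape described above (re-issue the query under a fresh job ID, since a created job's ID can't be reused) might be sketched as follows. This is a hypothetical illustration, not the google-cloud-bigquery API: `run_query`, `RateLimitExceeded`, and the job-ID prefix are all stand-ins.

```python
import itertools
import uuid

class RateLimitExceeded(Exception):
    """Stand-in for a rateLimitExceeded error surfaced as an exception."""

def run_query_with_retry(run_query, sql, max_attempts=3):
    """Call run_query(sql, job_id=...) up to max_attempts times.

    A new job ID is generated for every attempt, because BigQuery rejects
    a re-used job ID once the original job has been created.
    """
    last_exc = None
    for _ in range(max_attempts):
        job_id = f"retry_sketch_{uuid.uuid4().hex}"
        try:
            return run_query(sql, job_id=job_id)
        except RateLimitExceeded as exc:
            last_exc = exc  # try again with a fresh job ID
    raise last_exc

# Fake backend: rate-limit the first two attempts, then succeed.
attempts = itertools.count()

def fake_run_query(sql, job_id):
    if next(attempts) < 2:
        raise RateLimitExceeded("rateLimitExceeded")
    return {"sql": sql, "job_id": job_id, "rows": [42]}

result = run_query_with_retry(fake_run_query, "SELECT 42")
print(result["rows"])  # [42]
```

A real implementation would also drop any API-assigned destination table before re-submitting, as noted above.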
I'd actually expect to use a
As discussed offline, we can make the query job save the original request so that we don't have to worry about anything except resetting the job ID before retrying.
Closing in favor of #539 |
Reopened because the scope here is a little different and I'd like to discuss without spamming Jake Summers. |
I want to bring up a couple of issues. Let's start with: @tswast, is your thought that
Yes, I think that's the way it'd have to work for at least some jobs. Though as you found out, it seems we can do some detection of failed-but-retryable jobs even as early as the call to
With that, we want to retry at a higher level if we get an error from
Next, retry predicates. This makes predicates hard to express: you either need a new kind of Retry with a predicate that's applied to non-error results, or you need a flag that causes error responses (status code 200, but with an error status in the response data) to be turned into exceptions. As I discovered in our call today, we'd want to retry at the API level for

Maybe (thinking out loud):
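The snippet that originally followed this comment was lost in extraction, but the two options named above can be sketched generically. Everything here is hypothetical: `RetryableJobError`, `RETRYABLE_REASONS`, and the helper names are illustrations, not google-api-core or google-cloud-bigquery APIs.

```python
class RetryableJobError(Exception):
    """Raised when a 200 response carries a retryable error status."""

RETRYABLE_REASONS = {"rateLimitExceeded", "backendError"}

def raise_if_retryable(response):
    """Option 2: turn an error-bearing 200 response into an exception,
    so an ordinary exception-based retry predicate can see it."""
    error = response.get("status", {}).get("errorResult")
    if error and error.get("reason") in RETRYABLE_REASONS:
        raise RetryableJobError(error["reason"])
    return response

def retry_on_result(func, result_predicate, max_attempts=3):
    """Option 1: a retry wrapper whose predicate inspects non-error
    results instead of raised exceptions."""
    result = None
    for _ in range(max_attempts):
        result = func()
        if not result_predicate(result):
            return result
    return result

# Simulated responses: one 200-with-error, then a clean DONE response.
responses = iter([
    {"status": {"errorResult": {"reason": "rateLimitExceeded"}}},
    {"status": {"state": "DONE"}},
])
final = retry_on_result(
    lambda: next(responses),
    lambda r: "errorResult" in r.get("status", {}),
)
print(final)  # {'status': {'state': 'DONE'}}
```

Either shape would let the existing exception-driven retry machinery handle "successful" HTTP responses that nonetheless report a retryable job error.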
@tswast I'm confused about whether API requests should be retried after . In fact, maybe there's a deeper semantic I'm missing about these kinds of responses.
That's correct. The job was created, so retrying exactly the same request will fail. But retrying with a fresh job ID might succeed. |
That's unlikely to work because of re-used job IDs. |
This is mainly needed for sqlalchemy-bigquery compliance tests, but I can imagine it being useful for other use cases. :)

SQLAlchemy dialect compliance tests create tables at the beginning of a suite of tests and reuse them for the suite. If a test fails due to `rateLimitExceeded` errors, the tables are left in an unexpected state, and retrying the test may not work.

There's an apparent option to recreate tables for each test, but it's not implemented correctly. I could probably fix it for current SQLAlchemy releases (1.4), but I'd probably have to monkey-patch 1.3.
I propose 2 new query-job config settings:

- `retry_on_rate_limit_exceeded`: `int`, maximum number of retries when the client gets an "Exceeded rate limits" error, defaulting to 0
- `retry_delay_on_rate_limit_exceeded`: `int`, number of seconds to wait before retrying when the client gets an "Exceeded rate limits" error, defaulting to 60
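The proposed settings could behave roughly as below. The two setting names come from the proposal; the `QueryConfig` class, the `run` helper, and the injectable `sleep` hook are hypothetical stand-ins, not the real query-job config API.

```python
import time

class RateLimitExceeded(Exception):
    """Stand-in for the 'Exceeded rate limits' error."""

class QueryConfig:
    """Hypothetical config carrying the two proposed settings."""
    def __init__(self, retry_on_rate_limit_exceeded=0,
                 retry_delay_on_rate_limit_exceeded=60):
        self.retry_on_rate_limit_exceeded = retry_on_rate_limit_exceeded
        self.retry_delay_on_rate_limit_exceeded = retry_delay_on_rate_limit_exceeded

def run(execute, config, sleep=time.sleep):
    """Run execute(); on a rate-limit error, retry up to
    config.retry_on_rate_limit_exceeded times, waiting
    config.retry_delay_on_rate_limit_exceeded seconds between attempts."""
    for attempt in range(config.retry_on_rate_limit_exceeded + 1):
        try:
            return execute()
        except RateLimitExceeded:
            if attempt == config.retry_on_rate_limit_exceeded:
                raise  # retries exhausted
            sleep(config.retry_delay_on_rate_limit_exceeded)

# Fake query that rate-limits once, then succeeds; delays are recorded
# instead of actually sleeping.
calls = []
def flaky_query():
    calls.append("call")
    if len(calls) == 1:
        raise RateLimitExceeded("Exceeded rate limits")
    return "rows"

delays = []
config = QueryConfig(retry_on_rate_limit_exceeded=2,
                     retry_delay_on_rate_limit_exceeded=60)
result = run(flaky_query, config, sleep=delays.append)
print(result, delays)  # rows [60]
```

Note the defaults match the proposal: with `retry_on_rate_limit_exceeded=0` the error propagates on the first failure, preserving current behavior.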