[core] add provisional azure.core.rest #19502
Conversation
```python
except ImportError:
    pass
try:
    from .transport import AsyncioRequestsTransportResponse
```
Do we need to import all of them? Aren't they only needed in the except ImportError path?
I can add a check for each of these to only try the import if response_type is still None. But basically I put each one into its own try/except ImportError block because, for example, if you have a trio transport and no aiohttp installed, importing aiohttp would raise an error for you, so I wanted to accommodate all of the different combos.
Ok, I now stop importing once I get a response type. Thanks for the suggestion @xiangyan99!
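A minimal sketch of the resulting pattern, assuming hypothetical helper and rest-response names (the `azure.core.pipeline.transport` classes are real; the `Rest*` names and module paths are illustrative):

```python
def _get_rest_response_type(transport_response):
    """Pick the rest response type matching a transport response.

    Each import lives in its own try/except ImportError so a missing
    optional dependency (e.g. aiohttp when only trio is installed)
    never breaks users of a different transport, and we stop
    importing as soon as one matches.
    """
    response_type = None
    try:
        from azure.core.pipeline.transport import AioHttpTransportResponse
        from ._aiohttp import RestAioHttpTransportResponse  # hypothetical module path
        if isinstance(transport_response, AioHttpTransportResponse):
            response_type = RestAioHttpTransportResponse
    except ImportError:
        pass
    if response_type is None:
        try:
            from azure.core.pipeline.transport import TrioRequestsTransportResponse
            from ._trio import RestTrioRequestsTransportResponse  # hypothetical module path
            if isinstance(transport_response, TrioRequestsTransportResponse):
                response_type = RestTrioRequestsTransportResponse
        except ImportError:
            pass
    return response_type
```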
```python
if rest_request:
    response = _to_rest_response(response)
    if not kwargs.get("stream", False):
        response.read()
```
Will we leak a stream if read() fails?
Good point, I'm going to add a try/except when I read the response, and close the response before re-raising the error. Thanks!
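Something along these lines (a sketch; `_to_rest_response` is the conversion helper from the diff above):

```python
if rest_request:
    response = _to_rest_response(response)
    if not kwargs.get("stream", False):
        try:
            # Eagerly read the body so users get a fully-loaded response.
            response.read()
        except Exception:
            # Don't leak the underlying stream if reading fails.
            response.close()
            raise
```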
```python
ctype = self.headers.get(aiohttp.hdrs.CONTENT_TYPE, "").lower()
mimetype = aiohttp.helpers.parse_mimetype(ctype)

encoding = mimetype.parameters.get("charset")
```
Given we set the encoding here, why do we need L80?
I'm following the structure of the code in pipeline transport. But basically, it seems that if the chardet detection fails, encoding might still be None, so we fall back to "utf-8-sig".
I mean L84 will override L80, right?
I cleaned this up a bit, let me know your thoughts, thanks!
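The cleaned-up flow is roughly the following sketch (the `self._body` attribute name is illustrative, not necessarily the exact PR code):

```python
import aiohttp
import chardet

def _get_encoding(self):
    # 1. Honor an explicit charset in the Content-Type header.
    ctype = self.headers.get(aiohttp.hdrs.CONTENT_TYPE, "").lower()
    mimetype = aiohttp.helpers.parse_mimetype(ctype)
    encoding = mimetype.parameters.get("charset")
    # 2. Otherwise let chardet guess from the raw bytes.
    if encoding is None and self._body is not None:
        encoding = chardet.detect(self._body)["encoding"]
    # 3. chardet can itself return None, so keep a final fallback.
    return encoding or "utf-8-sig"
```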
""" | ||
raise NotImplementedError() | ||
|
||
def iter_text(self): |
Do you override iter_text for aiohttp?
If not, how can we make sure self.encoding works fine?
I ended up moving parts of the encoding logic into the base class. Basically: 1. if we have an application/json response, following RFC 7159 and RFC 7483, I default the encoding to "utf-8". 2. if the response's content has already been read in, I add an additional step of running chardet.detect on the content to detect its encoding. With this, self.encoding should cover more corner cases with iter_text.
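As a sketch of that fallback chain in the base class (attribute names such as `self._content` are illustrative):

```python
import chardet

@property
def encoding(self):
    content_type = self.headers.get("Content-Type", "")
    # 1. An explicit charset parameter wins.
    if "charset=" in content_type:
        return content_type.split("charset=")[-1].strip().strip('"')
    # 2. RFC 7159 / RFC 7483: JSON defaults to UTF-8.
    if "json" in content_type:
        return "utf-8"
    # 3. If the body has already been read in, let chardet guess.
    if self._content is not None:
        return chardet.detect(self._content)["encoding"]
    return None
```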
Is it possible iter_text is called without .read()? In that case, content is empty?
Yeah, definitely. Basically for iter_bytes, iter_text, and iter_lines, if content exists, we'll yield the content to users. Otherwise, we'll call the stream download generator and get the bytes from there. This is the same behavior as httpx.
Do you call iter_bytes or content? It seems to me iter_bytes does not generate content?
Here are the scenarios where you would use one vs. the other (see the sketch after this list):
- I want the whole body of the response, and I want to access it all at once: here you would call .content, similar to .body() on the old responses. You may need to call .read() first, to read the response's bytes into content.
- I want to iterate over the body: here, I call one of the iter_ functions. This doesn't read the body into the response, but outputs it as an iterator. As more of an implementation detail: if the response's bytes are already read into content, we will iterate over the content in all of the iter_ functions except iter_raw (consistent with httpx), and output them in chunks <= the block size we get from the transport.
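In code, the iteration behavior described above looks roughly like this (names like `_stream_download_generator` and `_block_size` are illustrative):

```python
def iter_bytes(self, chunk_size=None):
    chunk_size = chunk_size or self._block_size
    if self._content is not None:
        # Body already read: re-chunk it from memory (httpx-style).
        for i in range(0, len(self._content), chunk_size):
            yield self._content[i : i + chunk_size]
    else:
        # Body not read: stream chunks straight from the transport.
        for chunk in self._stream_download_generator():
            yield chunk
```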
I think we are on the same page. So if the content is not read into the body, iter_text will not be able to detect the encoding from content, right?
It will still be able to do all of the checks, except the final fallback I added, which is the chardet.detect of the content.
```python
    return self.content.decode(encoding)

@property
def num_bytes_downloaded(self):
```
Please hide it. There is no reliable way to return the bytes.
All I do in num_bytes_downloaded is get the length of the chunk before I send it to users; I feel like that's pretty reliable. Why are you thinking it's not reliable? Sorry if I'm missing something.
In a compression scenario, which number is correct: before decompression or after?
You're right, that seems fair. I can remove it for now.
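For reference, the (now removed) counter was roughly this simple, per the description above (a hypothetical sketch; the attribute names are illustrative). It counted whatever the transport yielded, which is exactly where the compressed-vs-decompressed ambiguity comes in:

```python
def iter_raw(self):
    for chunk in self._stream_download_generator():
        # Counts bytes as yielded by the transport; with compressed
        # responses it's ambiguous whether this should be the wire
        # size or the decompressed size.
        self._num_bytes_downloaded += len(chunk)
        yield chunk
```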
```python
    data=pipeline_transport_request.data
)

def to_rest_response(pipeline_transport_response):
```
What will this method do if users have created their client with a custom transport?
Does it matter here?
If they create their client with a custom transport, since I don't have a corresponding rest transport response to change it to, I change it to the default azure.core.rest.HttpResponse or azure.core.rest.AsyncHttpResponse.
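A rough sketch of that fallback (the helper names and constructor arguments are illustrative, not the exact PR code):

```python
from azure.core.rest import HttpResponse

def to_rest_response(pipeline_transport_response):
    """Convert a pipeline transport response to a rest response.

    Known transports map to their dedicated rest response types; a
    custom transport has no such counterpart, so it falls back to the
    generic azure.core.rest.HttpResponse.
    """
    response_type = _get_rest_response_type(pipeline_transport_response)  # hypothetical helper
    if response_type is None:
        response_type = HttpResponse
    return response_type(
        request=to_rest_request(pipeline_transport_response.request),  # hypothetical helper
        internal_response=pipeline_transport_response.internal_response,
    )
```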
LGTM - can't see anything that would break existing pipelines. Haven't dug too deep into the new rest implementation, but the API surface looks good and is well documented as provisional :)
/azp run python - core - ci
Azure Pipelines successfully started running 1 pipeline(s).
/azp run python - communication - ci
Azure Pipelines successfully started running 1 pipeline(s).
/azp run python - identity - ci
Azure Pipelines successfully started running 1 pipeline(s).
Did a short fast review, LGTM :)