
[core] add provisional azure.core.rest #19502

Merged
merged 69 commits into from
Jun 29, 2021

Conversation

iscai-msft
Contributor

No description provided.

…into get_testserver_working

* 'master' of https://github.com/Azure/azure-sdk-for-python: (55 commits)
  [AppConfig] mypy fixes (#18858)
  [translation] renames (#19285)
  [Key Vault] Update metadata for release (#19286)
  [AppConfig] enable samples (#18857)
  KeyVaultBackupOperation -> KeyVaultBackupResult (#19284)
  renaming (#19280)
  [Key Vault] Custom polling method for admin backup client (#19204)
  [textanalytics] add ARM template + run samples in CI (#19270)
  removed example from description (#18781)
  [Key Vault] Update READMEs with RBAC information (#19275)
  Sync eng/common directory with azure-sdk-tools for PR 1688 (#19272)
  update all ubuntu vmImage to 20.04 (#19227)
  add base class for feedback (#19265)
  Enable tests for integration samples (#18531)
  docs: fix a few simple typos (#19127)
  Increment package version after release of azure-eventgrid (#19197)
  Increment package version after release of azure-monitor-query (#19208)
  earliest versions of communication identity tests are set up improperly. skip 1.0.0 in regression testing (#19258)
  Run mypy in azure-keyvault-administration CI (#19246)
  [text analytics] change return type of analyze actions to list of list (#18994)
  ...
…into get_testserver_working

* 'main' of https://github.com/Azure/azure-sdk-for-python: (45 commits)
  ignore coretestserver readme (#19436)
  Add Ubuntu 20 to local dns bypass template (#19432)
  Sync eng/common directory with azure-sdk-tools for PR 1729 (#19415)
  Async/BearerTokenCredentialPolicy consistently calls on_exception (#19195)
  [EventHubs] Fix bug in sending stress test code and update default stress test settings (#19429)
  [EventHubs] Get IoT Hub Name from Redirect Address in sample (#19314)
  [textanalytics] regen on v3.1 (#19193)
  Adapt EG to arm template (#19262)
  [Key Vault] Extend pipeline test timeout (#19404)
  Update platform matrix to ubuntu 20 (#19296)
  [AppConfig] Add lock to SyncTokenPolicy (#19395)
  Regenerate monitor code (#19375)
  Increment version for keyvault releases (#19402)
  Aggregation should be a list (#19381)
  [azure-mgmt-monitor] skip test to unblock ci (#19390)
  Cloud event should parse smaller ms precisions (#19259)
  Update release date (#19399)
  [Communication]: use x-ms-date for hmac (#19396)
  [Key Vault] Performance tests for certificates, keys, and secrets (#19002)
  Deprecate azure-monitor (#19384)
  ...
except ImportError:
    pass
try:
    from .transport import AsyncioRequestsTransportResponse
Member

Do we need to import all of them?

Aren't they only needed in the except ImportError branch?

Contributor Author

I can add a check for each of these to only try the import if response_type is still None. But basically, I put each of them into its own try/except ImportError block because, for example, if you have a trio transport and no aiohttp installed, importing aiohttp would raise an error, so I wanted to accommodate all of the different combinations.

Contributor Author

Ok, I now stop importing once I get a response type. Thanks for the suggestion @xiangyan99!
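The pattern described above can be sketched roughly as follows. This is an illustrative stand-in, not the exact PR code: each optional dependency is probed in its own try/except ImportError block, and probing stops as soon as a type is resolved, so a missing optional transport (e.g. aiohttp in a trio-only environment) never raises for users who don't need it.

```python
import importlib


def identify_response_type(module_candidates):
    """Return the first importable module from the candidates, or None.

    Stand-in for resolving a transport response type: each candidate is
    tried in its own try/except so one missing dependency doesn't break
    the others, and we stop importing once we have a match.
    """
    response_type = None
    for module_name in module_candidates:
        if response_type is not None:
            break  # stop importing once we have a match
        try:
            response_type = importlib.import_module(module_name)
        except ImportError:
            pass  # this transport's dependency isn't installed; try the next
    return response_type
```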

if rest_request:
    response = _to_rest_response(response)
    if not kwargs.get("stream", False):
        response.read()
Member

Will we leak a stream if read fails?

Contributor Author

Good point. I'm going to add a try/except when I read the response, and close the response before re-raising the error. Thanks!
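The fix described above can be sketched like this. The class is a hypothetical stand-in, not the PR's actual response type; the point is only the close-before-reraise pattern that prevents leaking the stream when the read fails:

```python
class StreamedResponse:
    """Minimal stand-in for a streamed transport response."""

    def __init__(self, get_chunks):
        self._get_chunks = get_chunks  # callable yielding body chunks
        self.closed = False
        self.content = None

    def close(self):
        # Release the underlying connection.
        self.closed = True

    def read(self):
        try:
            self.content = b"".join(self._get_chunks())
        except Exception:
            # Don't leak the stream: close the response, then re-raise.
            self.close()
            raise
        return self.content
```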

ctype = self.headers.get(aiohttp.hdrs.CONTENT_TYPE, "").lower()
mimetype = aiohttp.helpers.parse_mimetype(ctype)

encoding = mimetype.parameters.get("charset")
Member

Given we set encoding here, why do we need L80?

Contributor Author

I'm following the structure of the code in pipeline transport. But basically, it seems that if chardet detection fails, encoding might still be None, so we fall back to "utf-8-sig".

Member

I mean L84 will override L80, right?

Contributor Author

I cleaned this up a bit, let me know your thoughts, thanks!

"""
raise NotImplementedError()

def iter_text(self):
Member

Do you override iter_text for aiohttp?

If not, how can we make sure self.encoding works fine?

Contributor Author

I ended up moving parts of the encoding logic into the base class. Basically:

1. If we have an application/json response, following RFC 7159 and RFC 7483, I default the encoding to "utf-8".
2. If the response's content has already been read in, I add an additional fallback step of running chardet.detect on the content.

With this, self.encoding should cover more corner cases with iter_text.

Member

Is it possible iter_text is called without .read()?

In that case, is the content empty?

Contributor Author

Yeah, definitely. Basically, for iter_bytes, iter_text, and iter_lines, if content exists, we yield the content to users. Otherwise, we call the stream download generator and get the bytes from there.

This is the same behavior as httpx.

Member

Do you call iter_bytes or content?

It seems to me iter_bytes does not generate content?

Contributor Author

Here are the scenarios where you would use one vs. the other:

  1. I want the whole body of the response, and I want to access it all at once: here you would call .content, similar to .body() on the old responses. You may need to call .read() first, to read the response's bytes into content.
  2. I want to iterate over the body: here, I call one of the iter_ functions. This doesn't read the body into the response, but outputs it as an iterator. As more of an implementation detail: if the response's bytes are already read into content, we will iterate over the content in all of the iter_ functions except iter_raw (consistent with httpx), and output them in chunks <= the block size we get from the transport.

Member

I think we are on same page. So if the content is not read into the body, iter_text will not be able to detect the encoding from content, right?

Contributor Author

It will still be able to do all of the checks, except the final fallback I added, which is the chardet.detect of the content.

return self.content.decode(encoding)

@property
def num_bytes_downloaded(self):
Member

Please hide it. There is no reliable way to return the bytes.

Contributor Author

All I do in num_bytes_downloaded is get the length of the chunk before I send it to users. I feel like that's pretty reliable; why do you think it's not? Sorry if I'm missing something.

Member

@xiangyan99 xiangyan99 Jun 29, 2021

In a compression scenario, which number is correct: before decompression or after?

Contributor Author

You're right, that seems fair. I can remove it for now.

data=pipeline_transport_request.data
)

def to_rest_response(pipeline_transport_response):
Member

What will this method do if users have created their client with a custom transport?
Does it matter here?

Contributor Author

If they create their client with a custom transport, since I don't have a corresponding rest transport response to convert it to, I convert it to the default azure.core.rest.HttpResponse or azure.core.rest.AsyncHttpResponse.
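The fallback described above can be sketched as a simple lookup. The mapping and names here are assumptions for illustration, not the PR's actual table; the point is that unknown (custom) transports fall back to the generic rest response type:

```python
# Hypothetical mapping from known transport response type names to their
# rest-layer counterparts.
_TRANSPORT_TO_REST = {
    "RequestsTransportResponse": "RestRequestsTransportResponse",
    "AioHttpTransportResponse": "RestAioHttpTransportResponse",
}


def to_rest_response_type(transport_response_name):
    """Pick the rest response type for a given transport response.

    Custom transports have no dedicated rest counterpart, so they fall
    back to the generic azure.core.rest.HttpResponse equivalent.
    """
    return _TRANSPORT_TO_REST.get(transport_response_name, "HttpResponse")
```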

Member

@annatisch annatisch left a comment

LGTM - can't see anything that would break existing pipelines.
Haven't dug too deep into the new rest implementation, but the API surface looks good and it's well documented as provisional :)

@iscai-msft
Contributor Author

/azp run python - core - ci

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@iscai-msft
Contributor Author

/azp run python - communication - ci

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@iscai-msft
Contributor Author

/azp run python - identity - ci

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

Member

@lmazuel lmazuel left a comment

Did a short, fast review, LGTM :)

@iscai-msft iscai-msft merged commit 662dedf into main Jun 29, 2021
@iscai-msft iscai-msft deleted the rest_layer branch June 29, 2021 18:29
rakshith91 pushed a commit to rakshith91/azure-sdk-for-python that referenced this pull request Jul 16, 2021