missing client field endpoint_ruleset_resolver in botocore 1.28.0 #976
Comments
I hope the solution will be to update botocore to the newest version.
Thanks for reporting, and thanks for all the work, team! I can verify the same issue.
Sorry for not providing more details, but that's all I can share for now.
You cannot mix and match botocore and aiobotocore versions; that is why we have strict version requirements.
Actually, you can; whether you should is a different question ^^ We fixed this by pinning the botocore version; until now we just hadn't.
Yes, you are right. It is more of an issue for the future that needs to be taken care of in the next version of aiobotocore.
Well, if you obey the rules you cannot, per the setup.py requirements; if you want to be a rule breaker you have to deal with the consequences :) Each botocore bump is very time-consuming and requires detailed work to catch issues like these. In fact, the last attempt at a minor patch bump surfaced a new auth codepath that is going to require lots of new async plumbing :(
Ah, not a rule breaker by far ^^ We didn't specifically request a botocore upgrade, but probably some package higher up the dependency tree did and overrode aiobotocore's requirements. As mentioned, we did not have it pinned before.
The absence of this feature from botocore means aiobotocore cannot be used with an on-prem MinIO service.
This is why it's critical to use something like pipenv/poetry.
Yep! See #36 and boto/botocore#458
Another option is for this project to swap to the C++ lib... another huge task.
Is there anything else needed for this issue, or just a version bump?
"Just" a version bump please.
We tried Pipenv and it didn't work well for us. Though that was in 2020, the last release at the time was from 2018, and there was even an issue asking if the project was dead. Now I see releases are frequent and the project is alive. Maybe it's worth another try, as the concept was nice (npm-like).
In this PR, I'm adding `aiobotocore` as an additional dependency until aio-libs/aiobotocore#976 has been resolved. I'm adding `AwsBaseAsyncHook`, a basic async AWS hook. At present it supports the default `botocore` auth, i.e. if no Airflow connection is provided it authenticates via environment variables, and if an Airflow connection is provided it uses basic auth with secret key/access key ID/profile/token and the ARN method. Maybe we can support the other auth methods incrementally depending on community interest. Because the dependency makes things a little complicated, I have created a new test module `deferrable` inside the AWS provider tests and am keeping the async-related tests in that module. I have also added a new CI job to run the AWS deferrable operator tests, and I'm ignoring the deferrable tests in the other CI test runs. This PR also adds a trigger class which waits until the Redshift pause request reaches a terminal state, i.e. paused or failed, with retry logic like the sync operator. It donates the `RedshiftPauseClusterOperatorAsync` developed in the [astronomer-providers](https://github.com/astronomer/astronomer-providers) repo to Apache Airflow.
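For context, a rough sketch of what such an async hook can look like on top of aiobotocore. The class and method names here (`AwsAsyncHookSketch`, `get_client_async`) are illustrative only and are not the actual `AwsBaseAsyncHook` API added in that PR; it assumes the default botocore credential chain (environment variables, shared config, etc.):

```python
import asyncio

from aiobotocore.session import get_session


class AwsAsyncHookSketch:
    """Illustrative async AWS hook: hands out aiobotocore clients."""

    def __init__(self, region_name=None):
        self.region_name = region_name

    def get_client_async(self, service_name):
        # Returns an async context manager; entering it creates the client.
        # With no explicit credentials, botocore's default chain is used.
        session = get_session()
        return session.create_client(service_name, region_name=self.region_name)


async def main():
    hook = AwsAsyncHookSketch(region_name="us-east-1")
    async with hook.get_client_async("redshift") as client:
        # e.g. poll cluster state, as a deferrable Redshift trigger would do
        resp = await client.describe_clusters()
        print([c["ClusterStatus"] for c in resp["Clusters"]])


asyncio.run(main())
```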
Describe the bug
With botocore 1.28.0 (released 14 hours ago), aiobotocore 2.4.0 fails when creating a client, because botocore 1.28.0 introduces a new required positional argument, `endpoint_ruleset_resolver`. Tested locally on Windows and on Lambda (Linux, Python 3.8). The same code works with botocore 1.27.95.
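A minimal reproduction sketch based on the report above (the service, region, and call are arbitrary placeholders; the exact exception text may differ):

```python
import asyncio

from aiobotocore.session import get_session


async def main():
    session = get_session()
    # With botocore==1.28.0 and aiobotocore==2.4.0 installed together, client
    # creation fails because aiobotocore does not supply the new
    # endpoint_ruleset_resolver argument that botocore 1.28.0 expects (per the
    # report above). With botocore==1.27.95 the same code works.
    async with session.create_client("s3", region_name="us-east-1") as client:
        # Credentials come from the default chain (env vars, shared config, ...).
        await client.list_buckets()


asyncio.run(main())
```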
Checklist
- pip check passes without errors
- pip freeze results (included below)
Environment:
Additional context
aioboto3==10.1.0
aiobotocore==2.4.0
aiofiles==0.8.0
aiohttp==3.8.1
aioitertools==0.8.0
aiosignal==1.2.0
altgraph==0.17.2
ansicon==1.89.0
async-timeout==4.0.2
asyncmock==0.4.2
atomicwrites==1.4.0
attrs==21.2.0
aws-cdk.assets==1.137.0
aws-cdk.aws-acmpca==1.137.0
aws-cdk.aws-apigateway==1.137.0
aws-cdk.aws-applicationautoscaling==1.137.0
aws-cdk.aws-athena==1.137.0
aws-cdk.aws-autoscaling==1.137.0
aws-cdk.aws-autoscaling-common==1.137.0
aws-cdk.aws-autoscaling-hooktargets==1.137.0
aws-cdk.aws-certificatemanager==1.137.0
aws-cdk.aws-cloudformation==1.137.0
aws-cdk.aws-cloudfront==1.137.0
aws-cdk.aws-cloudfront-origins==1.137.0
aws-cdk.aws-cloudwatch==1.137.0
aws-cdk.aws-cloudwatch-actions==1.137.0
aws-cdk.aws-codebuild==1.137.0
aws-cdk.aws-codecommit==1.137.0
aws-cdk.aws-codeguruprofiler==1.137.0
aws-cdk.aws-codepipeline==1.137.0
aws-cdk.aws-codestarnotifications==1.137.0
aws-cdk.aws-cognito==1.137.0
aws-cdk.aws-dynamodb==1.137.0
aws-cdk.aws-ec2==1.137.0
aws-cdk.aws-ecr==1.137.0
aws-cdk.aws-ecr-assets==1.137.0
aws-cdk.aws-ecs==1.137.0
aws-cdk.aws-efs==1.137.0
aws-cdk.aws-eks==1.137.0
aws-cdk.aws-elasticloadbalancing==1.137.0
aws-cdk.aws-elasticloadbalancingv2==1.137.0
aws-cdk.aws-events==1.137.0
aws-cdk.aws-events-targets==1.137.0
aws-cdk.aws-globalaccelerator==1.137.0
aws-cdk.aws-glue==1.137.0
aws-cdk.aws-iam==1.137.0
aws-cdk.aws-kinesis==1.137.0
aws-cdk.aws-kinesisfirehose==1.137.0
aws-cdk.aws-kms==1.137.0
aws-cdk.aws-lambda==1.137.0
aws-cdk.aws-lambda-event-sources==1.137.0
aws-cdk.aws-logs==1.137.0
aws-cdk.aws-route53==1.137.0
aws-cdk.aws-route53-targets==1.137.0
aws-cdk.aws-s3==1.137.0
aws-cdk.aws-s3-assets==1.137.0
aws-cdk.aws-s3-notifications==1.137.0
aws-cdk.aws-sam==1.137.0
aws-cdk.aws-secretsmanager==1.137.0
aws-cdk.aws-servicediscovery==1.137.0
aws-cdk.aws-signer==1.137.0
aws-cdk.aws-sns==1.137.0
aws-cdk.aws-sns-subscriptions==1.137.0
aws-cdk.aws-sqs==1.137.0
aws-cdk.aws-ssm==1.137.0
aws-cdk.aws-stepfunctions==1.137.0
aws-cdk.aws-stepfunctions-tasks==1.137.0
aws-cdk.cloud-assembly-schema==1.137.0
aws-cdk.core==1.137.0
aws-cdk.custom-resources==1.137.0
aws-cdk.cx-api==1.137.0
aws-cdk.lambda-layer-awscli==1.137.0
aws-cdk.lambda-layer-kubectl==1.137.0
aws-cdk.lambda-layer-node-proxy-agent==1.137.0
aws-cdk.region-info==1.137.0
aws-xray-sdk==2.10.0
blessed==1.19.0
boto==2.49.0
boto3==1.25.0
botocore==1.28.0
cattrs==1.8.0
certifi==2021.10.8
chalice==1.26.3
chardet==3.0.4
charset-normalizer==2.0.9
click==8.0.3
colorama==0.4.4
constructs==3.3.168
coverage==5.5
demjson==2.2.4
dirtyjson==1.0.7
ecdsa==0.17.0
frozenlist==1.2.0
future==0.18.2
idna==2.10
iniconfig==1.1.1
inquirer==2.8.0
jinxed==1.1.0
jmespath==0.10.0
jsii==1.56.0
mock==4.0.3
multidict==5.2.0
mypy-extensions==0.4.3
packaging==21.3
pefile==2021.9.3
Pillow==8.2.0
pluggy==1.0.0
psutil==5.8.0
publication==0.0.3
py==1.11.0
pyasn1==0.4.8
pyinstaller==4.10
pyinstaller-hooks-contrib==2022.2
pyparsing==3.0.7
pytest==6.2.5
pytest-asyncio==0.17.2
pytest-cov==3.0.0
pytest-html==3.1.1
pytest-metadata==2.0.1
python-dateutil==2.8.2
python-editor==1.0.4
python-jose==3.3.0
pywin32-ctypes==0.2.0
PyYAML==6.0
qrcode==7.3.1
readchar==2.0.1
requests==2.25.1
rsa==4.8
s3transfer==0.6.0
six==1.16.0
toml==0.10.2
typing-extensions==3.10.0.2
unittest-xml-reporting==3.0.4
urllib3==1.25.11
watchtower==3.0.0
wcwidth==0.2.5
wrapt==1.13.3
yarl==1.7.2