
Test cases written in moto using the latest version of boto3 fail #1793

Closed
krishnamee2004 opened this issue Aug 25, 2018 · 84 comments

Comments

@krishnamee2004 commented Aug 25, 2018

Test cases written with moto make actual AWS API calls through botocore instead of mocking them. This happens with the latest version of boto3 (1.8.x). It used to work fine without issues with the 1.7.x versions.

Sample code to reproduce the error

import boto3
import json
from moto import mock_s3

@mock_s3
def test_mock_s3():
    client = boto3.client('s3', region_name='us-east-1')
    client.create_bucket(Bucket='testbucket')
    response = client.list_buckets()
    print(json.dumps(response, default=str))

if __name__ == "__main__":
    test_mock_s3()

Expected result

The method should return the ListBuckets response. It should look something like:

{"Owner": {"DisplayName": "webfile", "ID": "bcaf1ffd86f41161ca5fb16fd081034f"}, "Buckets": [{"CreationDate": "2006-02-03 16:45:09+00:00", "Name": "testbucket"}], "ResponseMetadata": {"RetryAttempts": 0, "HTTPStatusCode": 200, "HTTPHeaders": {"Content-Type": "text/plain"}}}

Actual error

botocore.errorfactory.BucketAlreadyExists: An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

Full stack trace

Traceback (most recent call last):
  File "testcases.py", line 14, in <module>
    test_mock_s3()
  File "/private/tmp/virtualenv2/lib/python2.7/site-packages/moto/core/models.py", line 71, in wrapper
    result = func(*args, **kwargs)
  File "testcases.py", line 8, in test_mock_s3
    client.create_bucket(Bucket='testbucket')
  File "/private/tmp/virtualenv2/lib/python2.7/site-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/private/tmp/virtualenv2/lib/python2.7/site-packages/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.BucketAlreadyExists: An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

Library versions

moto : 1.3.4
boto3 : 1.8.1 - fails
boto3 : 1.7.84 - succeeds
@yingyi-nauto

Similar problem with mock_sqs. Got the following error:
ClientError: An error occurred (AccessDenied) when calling the CreateQueue operation: Access to the resource https://queue.amazonaws.com/ is denied.

@rouge8 (Contributor) commented Aug 26, 2018

It looks like the issue is actually with botocore >= 1.11.0, which no longer uses requests and instead uses urllib3 directly: boto/botocore#1495. This means moto probably can't use responses anymore...
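
For context, a rough sketch of why that matters (the URL below is just a placeholder): responses works by patching the transport adapter inside requests, so anything routed through requests is intercepted, while a direct urllib3 call never passes through that code path.

import requests
import responses


@responses.activate
def demo():
    # Registered with `responses`, so the requests call below never hits the network.
    responses.add(responses.GET, "https://example.com/", body="intercepted")
    print(requests.get("https://example.com/").text)  # prints "intercepted"

    # botocore >= 1.11 talks to urllib3 directly (e.g. via urllib3.PoolManager()),
    # which bypasses requests' adapter layer entirely -- so the patch above
    # would never see those calls.


demo()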

mateiz pushed a commit to mlflow/mlflow that referenced this issue Aug 27, 2018
As described in getmoto/moto#1793, moto doesn't seem to be properly mocking S3 calls with boto3 1.8 releases, so this PR pins the version of boto3 in test-requirements.txt to the latest 1.7 boto3 release

Also: this PR removes the `awscli` dependency, which transitively depends on a botocore version (1.11) that causes the s3 artifact store tests to fail.
@garyd203 (Contributor) commented Aug 27, 2018

Obviously we want to update moto so that it can work with the new implementation of botocore.

Whilst we figure out what this looks like, is it worthwhile restricting moto's install_requires in setup.py to require only compatible versions of botocore? Clearly this isn't a perfect solution, but if it saves some pain for some of our users then it could be worthwhile.

EDIT: I have added a PR for this as a discussion starter
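
For illustration, the kind of constraint being discussed might look roughly like this in setup.py (the version bounds here are made up for the example, not taken from the PR):

from setuptools import setup, find_packages

setup(
    name="moto",
    packages=find_packages(),
    install_requires=[
        # Hypothetical bounds: keep users on the requests-based botocore line
        # until moto supports the urllib3-based implementation.
        "boto3>=1.6,<1.8",
        "botocore>=1.9,<1.11",
    ],
)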

garyd203 added a commit to garyd203/moto that referenced this issue Aug 27, 2018
botocore v1.11.0 changed its internal implementation so that it now
uses a different library for HTTP requests. This means that moto's
mocking will not work, and test code will inadvertently call the live
AWS service.

As an interim solution to reduce the impact of this breakage, we
restrict the "required" (ie. recommended) version of botocore so that
users will be less likely to use an incompatible version, and will
receive a pip warning when they do.
gchiesa pushed a commit to gchiesa/s3vaultlib that referenced this issue Aug 27, 2018
gerddoliwa added a commit to gerddoliwa/ultimate-source-of-aws-accounts that referenced this issue Aug 27, 2018
manthey added a commit to girder/girder that referenced this issue Aug 27, 2018
Boto3 1.8 breaks our tests using moto.  See
getmoto/moto#1793 .  We can unpin when moto is
updated.
manthey added a commit to girder/girder that referenced this issue Aug 27, 2018
Boto3 1.8 breaks our tests using moto.  See
getmoto/moto#1793 .  We can unpin when moto is
updated.

This also has to add boto-core as a requirement to allow moto to
install.
MartinThoma added a commit to MartinThoma/mpu that referenced this issue Aug 27, 2018
@grudelsud

Pinning to boto3<1.8 is not really a solution: it would leave the library outdated pretty soon, and it doesn't help when installing dependencies that rely on newer versions either, as that will cause incompatibilities.
I appreciate this comment doesn't bring much to the table, but can we get an idea of how big an effort it would be to update moto to reflect the current state of these libraries?

@garyd203 (Contributor)

@grudelsud I agree that pinning boto3 is not a viable long-term solution, and introduces problems of its own. But given that users are currently being affected by tests silently falling back to making real boto3 calls (which can have significant unexpected side-effects if you happen to have valid AWS credentials available), it seems worthwhile to consider a quick solution as well as a permanent solution. I feel that, for now, the problems from pinning boto3 are lower severity than doing nothing.

That said, I would be a million times happier if someone put forward a genuine fix. But I lack the time and knowledge to do that myself at this point, or even to guess how much work would be involved.

phobologic added a commit to cloudtools/stacker that referenced this issue Aug 28, 2018
getmoto/moto#1793

Until we move entirely off moto to botocore.Stubber, this is the easiest workaround.
phobologic added a commit to cloudtools/stacker that referenced this issue Aug 28, 2018
getmoto/moto#1793

Until we move entirely off moto to botocore.Stubber, this is the easiest workaround.
@joguSD (Contributor) commented Aug 28, 2018

@garyd203 I've been driving a lot of the changes to get botocore tracking upstream dependencies instead of using our vendored versions. What does moto require so that you guys wouldn't need to monkey patch our dependencies?

@garyd203 (Contributor)

Hi @joguSD, thanks for asking. I'm just an occasional contributor to moto myself, and I'm not very familiar with the internals of moto. That said, I will try to provide some comments...

The goal of moto is to provide a fake implementation of specified AWS services which are accessed via boto3, as if they were the live services. FWIW, my understanding of the current implementation is that we achieve this by mocking out HTTPAdapter.send in botocore's vendored version of requests, so that we can inspect each request and either pass it off to our internal handler for the fake service, or pass it through to the original send. You can see this in moto.core.models, with botocore_mock and ResponsesMockAWS.

Moving forward, there seem to be a few choices for how we could do this better. I suspect we are constrained to work with the HTTP request rather than some other part of the boto3/botocore stack. So just to start the discussion, here are a couple of options:

  1. Use a pluggable HTTP client backend for botocore, so that moto can wrap the standard HTTP backend with its own interceptor functionality
  2. Add a filter/interceptor chain in botocore's HTTP request handling, where moto can inject its own filter early on and modify the behaviour based on what request is being made.

Do you have any opinion on these (or other) implementation choices, from botocore's perspective? My personal preference would be a pluggable backend with a standard implementation that is extensible and/or wrappable.
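
To make the second option concrete, here is a rough, hypothetical sketch (not moto's actual implementation) of intercepting at the HTTP layer using botocore's existing before-send event, assuming a botocore version recent enough to emit it. A handler that returns a response object short-circuits the real HTTP call:

import io

import boto3
from botocore.awsrequest import AWSResponse


class FakeRawResponse(io.BytesIO):
    # Minimal raw body exposing the stream() interface botocore reads from.
    def stream(self, **kwargs):
        contents = self.read()
        while contents:
            yield contents
            contents = self.read()


def fake_send(request, **kwargs):
    # Returning a non-None value from a "before-send" handler makes botocore use
    # it as the HTTP response instead of sending the request over the network.
    # A real mock would dispatch on request.method / request.url to a fake backend.
    body = (b"<ListAllMyBucketsResult>"
            b"<Owner><ID>fake</ID><DisplayName>fake</DisplayName></Owner>"
            b"<Buckets/></ListAllMyBucketsResult>")
    return AWSResponse(request.url, 200, {"Content-Type": "application/xml"},
                       FakeRawResponse(body))


client = boto3.client("s3", region_name="us-east-1",
                      aws_access_key_id="testing", aws_secret_access_key="testing")
client.meta.events.register("before-send.s3", fake_send)
print(client.list_buckets()["Buckets"])  # [] -- answered locally, no network call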

Again, don't take any of this discussion as definitive (I can't speak for the moto project maintainers), but I hope it helps.

@Freyja-Folkvangr

I think we should add some documentation to the readme to help clarify how to avoid issues similar to @monkut's. It's not great to hear that your unit tests manipulated your real environment, and that is a major problem.

I'm a big fan of pytest and pytest fixtures. For all of my moto tests, I have a conftest.py file where I define the following fixtures:

@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'

@pytest.fixture(scope='function')
def s3(aws_credentials):
    with mock_s3():
        yield boto3.client('s3', region_name='us-east-1')


@pytest.fixture(scope='function')
def sts(aws_credentials):
    with mock_sts():
        yield boto3.client('sts', region_name='us-east-1')


@pytest.fixture(scope='function')
def cloudwatch(aws_credentials):
    with mock_cloudwatch():
        yield boto3.client('cloudwatch', region_name='us-east-1')

... etc.

All of the AWS/mocked fixtures take the aws_credentials fixture as a parameter, which sets the required fake environment variables. Then, whenever I need to do anything with the mocked AWS environment, I do something like:

def test_create_bucket(s3):
    # s3 is a fixture defined above that yields a boto3 s3 client.
    # Feel free to instantiate another boto3 S3 client -- Keep note of the region though.
    s3.create_bucket(Bucket="somebucket")
   
    result = s3.list_buckets()
    assert len(result['Buckets']) == 1
    assert result['Buckets'][0]['Name'] == 'somebucket'

Taking this approach works for all of my tests. I have had some issues with Tox and Travis CI occasionally -- and typically I need to do something along the lines of `touch ~/.aws/credentials` for those to work. However, using the latest moto, boto3, and the fixtures I have above seems to always work for me without issues.

Finally!

@pskowronek

I would say that people shouldn't have to be afraid that running unit tests which use moto risks modifying their real AWS environments. I doubt there are folks who would like to run tests that use both real AWS and still mock some bits of AWS.
What I'm trying to say is that using moto currently requires a lot of attention, and even that may go in vain with a newer version of the AWS libraries or of moto. Maybe we should all together put some pressure on Amazon so they expose some mocking points to make everyone's life easier? Amazon has their feedback page, IIRC.

@yitzikc commented Jul 18, 2019

I can still reproduce the issue with S3 and SQS tests on boto3 == 1.9.189, botocore == 1.12.189, moto == 1.3.13. Same with moto == 1.3.10.

@mikegrima (Collaborator)

@yitzikc Are your mocks running before your code is executed?

@mikegrima (Collaborator)

I just added a Protip to my post above, which should assist users when designing their tests with moto.

@EdwynPeignon

I got the same problem and my solution was to import the moto libraries before the boto3 library.
There are certainly some conflicts between the libraries.
Hope it'll help some people :)

@yitzikc commented Aug 1, 2019

I got the same problem and my solution was to import the moto libraries before the boto3 library.
There are certainly some conflicts between the libraries.
Hope it'll help some people :)

Indeed, when care is taken to import Moto before any imports of Boto3 or Botocore, the mocking works properly. I had to watch for imported modules which were importing Boto. Also, when running Pytest on multiple files, imports for tests in one file would interfere with the ones run subsequently. I had to import Moto in any test files that might import modules which ultimately import Boto.

@mikegrima (Collaborator)

I just made a PR to introduce more AWS Config features (#2363), and I updated the readme with the wording in my post above. Please review and let me know if there is anything else I should add:
https://github.com/spulec/moto/blob/1c268e3580b0976c9867b50124f665320c188148/README.md#very-important----recommended-usage

@jayai2014 commented Sep 26, 2019

I got the same problem and my solution was to import the moto libraries before the boto3 library.
There are certainly some conflicts between the libraries.
Hope it'll help some people :)

Indeed, when care is taken to import Moto before any imports of Boto3 or Botocore, the mocking works properly. I had to watch for imported modules which were importing Boto. Also, when running Pytest on multiple files, imports for tests in one file would interfere with the ones run subsequently. I had to import Moto in any test files that might import modules which ultimately import Boto.

Changing the import order is still not working for me 😢. Using the same versions: boto3 == 1.9.189, botocore == 1.12.189, moto == 1.3.13 / 1.3.10.

@mikegrima (Collaborator)

The problem is not the import order -- it's the order in which a boto client is instantiated. If it is instantiated BEFORE a mock is established, it won't work.

Please review: https://github.com/spulec/moto#very-important----recommended-usage
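
A minimal illustration of that point (the module and function names here are hypothetical):

# problematic_module.py (hypothetical): the client is created at import time,
# before any mock is established, so it talks to real AWS even when the test
# that imports this module is wrapped in a moto mock.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # instantiated too early


def bucket_names():
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]


# fixed_module.py (hypothetical): the client is created lazily inside the
# function, so by the time it is built the mock is already in place.
import boto3


def bucket_names():
    s3 = boto3.client("s3", region_name="us-east-1")  # instantiated inside the mock
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]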

@pskowronek

Hi guys, I'm not sure if I understand everything correctly, but I took the liberty of reporting an issue against boto3 to make mocking easier - can anyone from the moto core team comment on boto/boto3#2123? Maybe there's something the boto team could do to avoid such problems. This bug is not the only one reported against moto - some of those issues are more than a year old and people still have problems with tests hitting AWS servers.

@mikegrima (Collaborator)

I'm actually curious if I permanently fixed this issue with #2578

mikegrima pushed a commit to mikegrima/moto that referenced this issue Nov 18, 2019
- Fixes getmoto#2575
- Also upgraded Travis CI to make use of Bionic instead of Xenial
- This may also address concerns raised in getmoto#1793
@mikegrima (Collaborator)

Can you all verify whether the latest master fixes this issue? The changes in #2578 seem like they should fix this issue once and for all.

gruebel pushed a commit to gruebel/moto that referenced this issue Dec 17, 2019
- Fixes getmoto#2575
- Also upgraded Travis CI to make use of Bionic instead of Xenial
- This may also address concerns raised in getmoto#1793
@karthikvadla commented Mar 6, 2020

@mikegrima, thank you very much for your responses.
I get the exception below even after mocking my DynamoDB with pytest fixtures.

[CPython37:setup:stdout] >           raise error_class(parsed_response, operation_name)
[CPython37:setup:stdout] E           botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the DescribeTable operation: The security token included in the request is invalid.
[CPython37:setup:stdout] 

This is my source code file, named db_utils.py:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import json
from dynamodb_json import json_util as db_json

import boto3
from botocore.exceptions import ClientError

from .converters import DecimalEncoder


class DynamoDbOperations:
    """
    Class to perform dynamo_db common operations.
    By default connects to us-west-2 (Oregon) region.
    """
    def __init__(self, table_name: str, region_name: str = 'us-west-2'):
        dynamo_db_resource = boto3.resource('dynamodb', region_name=region_name)

        # Use client to handle exceptions and resolve the error.
        dynamo_db_client = boto3.client('dynamodb', region_name=region_name)
        try:
            self._table = dynamo_db_resource.Table(table_name)
            print("{} table created on {}".format(table_name, self._table.creation_date_time))
        except dynamo_db_client.exceptions.ResourceNotFoundException as e:
            # Log error
            print("Error: {}".format(e))

    def insert_item(self, json_item):
        """
        Inserts item into dynamo_db table

        :param json_item: Item as Json object

        :return:
        """
        if type(json_item) is not dict:
            raise ValueError("Insert Item: {} must be json object".format(json_item))

        try:
            response = self._table.put_item(Item=json_item)
        except ClientError as ce:
            print(ce.response['Error']['Message'])
        else:
            print("PutItem succeeded")
            clean_response = db_json.loads(response)
            print(json.dumps(clean_response, indent=4, cls=DecimalEncoder))
            return clean_response

    def get_item(self, primary_key: dict):
        """
        Get item from table based on primary_key

        :param primary_key: Dictionary of partition_key and sort_key(optional)

        :return: Returns json object with primary key
        """
        if type(primary_key) is not dict:
            raise ValueError("primary_key: {} must be dictionary \
            of partition_key and sort_key(optional)".format(primary_key))

        try:
            response = self._table.get_item(Key=primary_key)
        except ClientError as ce:
            print(ce.response['Error']['Message'])
        else:
            item = response['Item']
            print("GetItem succeeded")
            clean_response = db_json.loads(item)
            print(json.dumps(clean_response, indent=4, cls=DecimalEncoder))
            return clean_response

    def modify_item(self, primary_key: dict,
                    update_expression: str,
                    expression_attribute_values: dict,
                    condition_expression: str = None,
                    return_values: str = "UPDATED_NEW"
                    ):
        """
        Update/Modify item based on primary_key.
        You can update values of existing attributes, add new attributes, or remove attributes,

        More info:
        Update Expression: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html
        Condition Expression: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html#Expressions.ConditionExpressions.SimpleComparisons

        :param primary_key: Dictionary of partition_key and sort_key(optional)
        :param update_expression: An update expression consists of one or more clauses.
        Each clause begins with a SET, REMOVE, ADD, or DELETE keyword. You can include any of these clauses
        in an update expression, in any order. However, each action keyword can appear only once
        :param condition_expression: To perform a conditional update
        :param expression_attribute_values: Attribute values to be updated/add/delete
        :param return_values: Return type after performing update

        :return: Returns updated json attributes
        """

        if type(primary_key) is not dict:
            raise ValueError("primary_key: {} must be dictionary \
            of partition_key and sort_key(optional)".format(primary_key))

        if 0 < len(primary_key) > 2:
            raise Exception("primary_key: {} must contain \
            partition_key and sort_key(optional) only".format(primary_key))

        if type(expression_attribute_values) is not dict:
            raise ValueError("expression_attribute_values: {} must be dictionary".format(expression_attribute_values))

        try:
            if condition_expression:
                response = self._table.update_item(
                    Key=primary_key,
                    UpdateExpression=update_expression,
                    ConditionExpression=condition_expression,
                    ExpressionAttributeValues=expression_attribute_values,
                    ReturnValues=return_values
                )
            else:
                response = self._table.update_item(
                    Key=primary_key,
                    UpdateExpression=update_expression,
                    ExpressionAttributeValues=expression_attribute_values,
                    ReturnValues=return_values
                )
        except ClientError as e:
            if e.response['Error']['Code'] == "ConditionalCheckFailedException":
                print(e.response['Error']['Message'])
            else:
                raise
        else:
            print("UpdateItem succeeded:")
            clean_response = db_json.loads(response)
            print(json.dumps(clean_response, indent=4, cls=DecimalEncoder))
            return clean_response

    def delete_item(self, primary_key:dict,
                    expression_attribute_values: dict = None,
                    condition_expression: str = None):
        """
        Deletes an item from table

        :param primary_key: Dictionary of partition_key and sort_key(optional)
        :param expression_attribute_values: Items with matching Attribute values to be deleted
        :param condition_expression: To perform a conditional delete
        :return:
        """
        if type(primary_key) is not dict:
            raise ValueError("primary_key: {} must be dictionary \
            of partition_key and sort_key(optional)".format(primary_key))

        if condition_expression:
            if expression_attribute_values is None:
                raise ValueError("expression_attribute_values: {} \
                must be provided".format(expression_attribute_values))
            elif type(expression_attribute_values) is not dict:
                raise ValueError("expression_attribute_values: {} must be a dictionary")

        try:
            if condition_expression:
                response = self._table.delete_item(
                    Key=primary_key,
                    ConditionExpression=condition_expression,
                    ExpressionAttributeValues=expression_attribute_values
                )
            else:
                response = self._table.delete_item(Key=primary_key)

        except ClientError as ce:
            if ce.response['Error']['Code'] == "ConditionalCheckFailedException":
                print(ce.response['Error']['Message'])
            else:
                raise
        else:
            print("DeleteItem succeeded:")
            clean_response = db_json.loads(response)
            print(json.dumps(clean_response, indent=4, cls=DecimalEncoder))
            return clean_response

Here is my test file:

from db_utils import DynamoDbOperations
from moto import mock_dynamodb2
import pytest
import boto3
import os

TEST_DYNAMO_TABLE_NAME = 'test'


@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'
    os.environ['AWS_DEFAULT_REGION'] = 'us-west-2'


@pytest.fixture
def dynamo_db_table(aws_credentials):
    def _table(table_name):
        with mock_dynamodb2():
            boto3.client('dynamodb').create_table(
                AttributeDefinitions=[
                    {'AttributeName': 'id', 'AttributeType': 'S'}
                ],
                TableName=f'{table_name}',
                KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
                ProvisionedThroughput={
                    'ReadCapacityUnits': 5,
                    'WriteCapacityUnits': 5,
                },
            )
            yield boto3.resource('dynamodb').Table(f'{table_name}')
    yield _table

def test_dynamo_db_utils_init(dynamo_db_table):
    DynamoDbOperations(TEST_DYNAMO_TABLE_NAME)

def test_dynamo_db_utils_insert_item(dynamo_db_table):
    json_item = {
        'id': '123',
        'name': 'karthik'
    }
    dynamo_db_table(TEST_DYNAMO_TABLE_NAME)
    db = DynamoDbOperations(TEST_DYNAMO_TABLE_NAME)
    response = db.insert_item(json_item)
    assert 200 in response

Is there anything I'm missing here? I appreciate your help.

@karthikvadla commented Mar 6, 2020

I added the decorator around the test:

@mock_dynamodb2
def test_dynamo_db_utils_init(dynamo_db_table):
    DynamoDbOperations(TEST_DYNAMO_TABLE_NAME)

The invalid security token error went away, but now I see:

test_dynamo_db_utils.py Error: An error occurred (ResourceNotFoundException) when calling the DescribeTable operation: Requested resource not found

Any idea how to mock this line in the fixtures?
print("{} table created on {}".format(table_name, self._table.creation_date_time))

Can we mock self._table.creation_date_time???

@bblommers (Collaborator)

Hi @karthikvadla, where in your test are you calling the 'DescribeTable' operation, i.e. on which line is it failing? I can't see it in the code you provided.

The creation time can be accessed like this:

table_description = conn.describe_table(TableName=name)
created_on = table_description["Table"]["CreationDateTime"]
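
For completeness, a self-contained version of that snippet could look like this (the table and key names are just placeholders):

import boto3
from moto import mock_dynamodb2


@mock_dynamodb2
def show_creation_time():
    conn = boto3.client("dynamodb", region_name="us-west-2")
    conn.create_table(
        TableName="test",
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )
    table_description = conn.describe_table(TableName="test")
    print(table_description["Table"]["CreationDateTime"])


show_creation_time()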

@drewmullen commented Mar 18, 2020

Hi all, I'm getting the same issue reported above. Any advice would be greatly appreciated:

from moto import mock_s3
import boto3
import pytest
import os

@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'


@pytest.fixture(scope='function')
def s3(aws_credentials):
    with mock_s3():
        yield boto3.client('s3', region_name='us-east-1')


def test_create_bucket(s3):
    # s3 is a fixture defined above that yields a boto3 s3 client.
    # Feel free to instantiate another boto3 S3 client -- Keep note of the region though.
    s3.create_bucket(Bucket="somebucket")

    result = s3.list_buckets()
    assert len(result['Buckets']) == 1
    assert result['Buckets'][0]['Name'] == 'somebucket'

$ pytest moto.py

output:

______ERROR collecting moto.py _________________
ImportError while importing test module '/Users/dmullen/scratch/py-serverless/fixture-tests/moto.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
moto.py:1: in <module>
    from moto import mock_s3
E   ImportError: cannot import name 'mock_s3' from partially initialized module 'moto' (most likely due to a circular import) (/Users/dmullen/scratch/py-serverless/fixture-tests/moto.py)

versions:

  • moto 1.3.14
  • boto3 1.11.16
  • botocore 1.14.16
  • pytest 5.4.1

@msmolens

______ERROR collecting moto.py _________________
ImportError while importing test module '/Users/dmullen/scratch/py-serverless/fixture-tests/moto.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
moto.py:1: in <module>
    from moto import mock_s3
E   ImportError: cannot import name 'mock_s3' from partially initialized module 'moto' (most likely due to a circular import) (/Users/dmullen/scratch/py-serverless/fixture-tests/moto.py)

@drewmullen The name of your test file conflicts with the moto package. Rename your file to something other than moto.py.

@bjmc commented Jun 25, 2020

Edit: I think I just need to be more careful with import order

Is this meant to be fixed? I'm seeing what looks to be a similar issue trying to mock out DynamoDB calls.

moto==1.3.14
boto==2.49.0
boto3==1.14.7
botocore==1.17.7

Partial example:

def setup_table():
    ddb_client = boto3.client('dynamodb')
    ddb_client.create_table(
        AttributeDefinitions=[
            {'AttributeName': 'email', 'AttributeType': 'S'},
            {'AttributeName': 'timestamp', 'AttributeType': 'S'},
        ],
        TableName='contact-form-submissions',
        KeySchema=[
            {'AttributeName': 'email', 'KeyType': 'HASH'},
            {'AttributeName': 'timestamp', 'KeyType': 'RANGE'},
        ],
        ProvisionedThroughput={'ReadCapacityUnits': 1, 'WriteCapacityUnits': 1},
    )

@mock_dynamodb2
def test_save_to_db():
    setup_table()
    result = save_to_db(DATA)
    assert result is True

I'm getting an error:

botocore.errorfactory.ResourceInUseException: An error occurred (ResourceInUseException) when calling the CreateTable operation: Table already exists: contact-form-submissions

And it looks like that table is getting created on real AWS, not mocked as I'd expected. Am I doing something wrong or is there a regression?

@mickog commented Sep 9, 2020

This is happening for me with moto 1.3.16 - botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the Publish operation: No account found for the given parameters

Rolling back and using 1.13.10 seems to resolve the issue.

@bblommers (Collaborator)

@mickog commented Sep 9, 2020

Cheers @bblommers, I'll take a look at refactoring my tests.

@gene1wood

The link from @bblommers in the comment above is now contained in the documentation here: https://docs.getmoto.org/en/latest/docs/getting_started.html#recommended-usage
