
When using async uploads, S3Mock incorrectly stores unsigned, chunked uploads without removing meta (chunk, checksum) information. #1662

Closed
vlbaluk opened this issue Feb 21, 2024 · 14 comments

@vlbaluk

vlbaluk commented Feb 21, 2024

S3Mock is not functioning as expected with S3TransferManager.

  1. File content stored in the S3Mock Docker container contains both data and metadata, leading to incorrect downloads.
  2. S3Mock also returns an incorrect content length for multipart downloads.

Experiment: the file content is "BBBBB"

  • The async CRT client with multipart uploads and S3TransferManager stores the following file content in the S3Mock Docker container:

BBBBBAsyncAttachment

  • The synchronous client stores the correct content:

ffe7b8b720bd49a98902fdb950d5bc97

Code snippet:

store.write("test data".getBytes(UTF_8));
// under the hood, this calls s3TransferManager.uploadFile(uploadFileRequest) with the CRT async client

s3TransferManager.download(downloadFileRequest).completionFuture().join().result().response().contentLength();
// the download returns incorrect content, including additional lines at the beginning and at the end, as in the screenshot above

Checked with Docker image versions 3.4.0 and 2.11.0.

Originally posted by @vlbaluk in #1613 (comment)

@afranken afranken self-assigned this Feb 21, 2024
@afranken afranken added the bug label Feb 21, 2024
@afranken
Member

@vlbaluk I added an integration test using TransferManager that is working just fine, see linked PR.

@vlbaluk
Author

vlbaluk commented Feb 23, 2024

@afranken, big thanks for looking into it. I see you explicitly set .checksumAlgorithm(ChecksumAlgorithm.SHA256).
By default, the CRT client uses CRC32 (visible in the screenshot above). I did the same trick for my upload configuration, and it fixed the problem. 🙂
But I didn't see any recommendation in the AWS docs to set the checksum algorithm explicitly.

Is it safe to change it for uploads to S3, or could it potentially disrupt any processes?
Can you explain why the default CRC32 algorithm is causing the files to break?
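The "trick" described here, written out as a sketch of the upload configuration (bucket, key, and file path are placeholders; this assumes the AWS SDK for Java 2.x TransferManager API):

```java
import java.nio.file.Paths;

import software.amazon.awssdk.services.s3.model.ChecksumAlgorithm;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;

// Pin the checksum algorithm explicitly instead of relying on the
// CRT client's CRC32 default.
UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
    .putObjectRequest(req -> req
        .bucket("my-bucket")          // placeholder
        .key("test.txt")              // placeholder
        .checksumAlgorithm(ChecksumAlgorithm.SHA256))
    .source(Paths.get("test.txt"))    // placeholder
    .build();

s3TransferManager.uploadFile(uploadFileRequest).completionFuture().join();
```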

@afranken
Member

@vlbaluk when I change the line to .checksumAlgorithm(ChecksumAlgorithm.CRC32) or remove it altogether, the test still runs without problems.

@afranken
Member

I added checksum support in version 2.17.0:
https://github.com/adobe/S3Mock/releases/tag/2.17.0
maybe you tested with an older version?

Older versions would ignore the additional bytes the AWS SDK adds to the payload when asking for checksum validation of (multipart) uploads.
The old behaviour would match your screenshot.

@afranken
Member

see also #1123

@afranken afranken removed the bug label Feb 24, 2024
@glennvdv

glennvdv commented Mar 1, 2024

Got the same problem using 3.5.1.
Sample code:

@Test
void testPutAndGetObject() throws Exception {
    URL resource = Thread.currentThread().getContextClassLoader().getResource("jon.png");
    var uploadFile = new File(resource.toURI());
    s3AsyncClient.putObject(
            PutObjectRequest.builder().bucket("eojt").key(uploadFile.getName()).build(),
            AsyncRequestBody.fromFile(uploadFile))
        .get();
    var response =
        s3Client.getObject(
            GetObjectRequest.builder().bucket("eojt").key(uploadFile.getName()).build());

    var uploadFileIs = Files.newInputStream(uploadFile.toPath());
    var uploadDigest = hexDigest(uploadFileIs);
    var downloadedDigest = hexDigest(response);
    uploadFileIs.close();
    response.close();

    Assertions.assertThat(uploadDigest).isEqualTo(downloadedDigest);
}

Test output:
Expected: "dcd37a339ac2f037a7498b9fc63048bb" Actual: "930c76274807e15e0873cb30a9d0d012"
The following content is added to the file:
0 x-amz-checksum-crc32:ntdN8g==
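The hexDigest helper called in the test above is not shown; a minimal JDK-only version, assuming it computes an MD5 hex digest (matching the 32-character digests in the test output), might look like this. The class name Digests is illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

public class Digests {

  // Stream the input through an MD5 digest and return it as lowercase hex.
  static String hexDigest(InputStream in) throws Exception {
    MessageDigest md = MessageDigest.getInstance("MD5");
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1) {
      md.update(buf, 0, n);
    }
    StringBuilder sb = new StringBuilder();
    for (byte b : md.digest()) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }

  public static void main(String[] args) throws Exception {
    // Digest of the upload payload from the earlier experiment.
    System.out.println(hexDigest(new ByteArrayInputStream("BBBBB".getBytes())));
  }
}
```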

@afranken
Member

afranken commented Mar 5, 2024

@glennvdv how do you construct the async client?
I can't reproduce these errors locally.

@glennvdv

glennvdv commented Mar 5, 2024

Using auto-configuration from Spring Boot and Spring Cloud for Amazon Web Services.

@vlbaluk
Author

vlbaluk commented Mar 5, 2024

@afranken We used crtBuilder() for constructing the CRT client:

final S3CrtAsyncClientBuilder s3AsyncClientBuilder = S3AsyncClient.crtBuilder()
        .maxConcurrency(100)
        .minimumPartSizeInBytes(10 * MB)
        .thresholdInBytes(100 * MB)
        .region(Region.of(...))
        .credentialsProvider(...);

You may be using a different HTTP client, which could explain the difference in our setups.

@afranken
Member

afranken commented Mar 6, 2024

@glennvdv I meant the actual code you are using to construct the client. As I said, I can't reproduce the problem locally.
I'm using several different clients in the integration tests:
https://github.com/adobe/S3Mock/blob/main/integration-tests/src/test/kotlin/com/adobe/testing/s3mock/its/S3TestBase.kt#L115

@vlbaluk that looks almost exactly the same as the client I'm using in the integration tests:
https://github.com/adobe/S3Mock/blob/main/integration-tests/src/test/kotlin/com/adobe/testing/s3mock/its/S3TestBase.kt#L231

@glennvdv

glennvdv commented Mar 7, 2024

@afranken I let Spring Boot create the async client. From their source, they do something like this:

	@Bean
	@ConditionalOnMissingBean
	S3AsyncClient s3AsyncClient(AwsCredentialsProvider credentialsProvider) {
		S3CrtAsyncClientBuilder builder = S3AsyncClient.crtBuilder().credentialsProvider(credentialsProvider)
				.region(this.awsClientBuilderConfigurer.resolveRegion(this.properties));
		Optional.ofNullable(this.awsProperties.getEndpoint()).ifPresent(builder::endpointOverride);
		Optional.ofNullable(this.properties.getEndpoint()).ifPresent(builder::endpointOverride);
		Optional.ofNullable(this.properties.getCrossRegionEnabled()).ifPresent(builder::crossRegionAccessEnabled);
		Optional.ofNullable(this.properties.getPathStyleAccessEnabled()).ifPresent(builder::forcePathStyle);

		if (this.properties.getCrt() != null) {
			S3CrtClientProperties crt = this.properties.getCrt();
			PropertyMapper propertyMapper = PropertyMapper.get();
			propertyMapper.from(crt::getMaxConcurrency).whenNonNull().to(builder::maxConcurrency);
			propertyMapper.from(crt::getTargetThroughputInGbps).whenNonNull().to(builder::targetThroughputInGbps);
			propertyMapper.from(crt::getMinimumPartSizeInBytes).whenNonNull().to(builder::minimumPartSizeInBytes);
			propertyMapper.from(crt::getInitialReadBufferSizeInBytes).whenNonNull()
					.to(builder::initialReadBufferSizeInBytes);
		}

		return builder.build();
	}

afranken added a commit that referenced this issue Apr 12, 2024
Many hard-coded paths, sizes etc. make it hard to test with files other
than the default "sampleFile.txt", which is 36 bytes in size.
Using even slightly larger payloads leads to errors during uploads.

#1662
afranken added a commit that referenced this issue Apr 12, 2024
Some http clients (like the AWS SDKs) do not cope well if we return
errors from APIs before consuming the incoming stream.
Make sure we always consume all bytes before processing the streams.

#1662
@afranken afranken added the bug label Apr 17, 2024
@afranken
Member

after testing with different client configurations and upload files of different sizes, I may have found the problem:
Some clients decide dynamically whether to use chunked uploads, unless explicitly configured.
Whether the payload is signed is also decided dynamically, unless explicitly configured.

We currently do not handle chunked, unsigned uploads correctly: we either cut off some of the chunks before persisting the bytes to disk, or we write the chunks together with their chunk boundaries to disk.
Either way, we persist the wrong data to disk and later return the wrong data.
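To illustrate what "chunk boundaries" means here: an unsigned aws-chunked body frames each chunk as a hex size line, CRLF, the chunk bytes, CRLF, terminated by a zero-length chunk followed by trailer headers such as x-amz-checksum-crc32. A minimal decoding sketch (class and method names are illustrative, not S3Mock internals); storing the body without such a step is what leaves the "0 x-amz-checksum-crc32:..." residue reported above:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class AwsChunkedDecoder {

  // Strip aws-chunked framing from an unsigned payload, keeping only the data bytes.
  static byte[] decode(byte[] body) {
    var out = new ByteArrayOutputStream();
    int pos = 0;
    while (pos < body.length) {
      int lineEnd = indexOfCrlf(body, pos);
      String sizeLine = new String(body, pos, lineEnd - pos, StandardCharsets.US_ASCII);
      // Signed uploads append ";chunk-signature=..." to the size line.
      int size = Integer.parseInt(sizeLine.split(";")[0].trim(), 16);
      if (size == 0) {
        break; // zero-length chunk: trailer headers (e.g. checksums) follow, drop them
      }
      int dataStart = lineEnd + 2;       // skip CRLF after the size line
      out.write(body, dataStart, size);  // keep only the actual data bytes
      pos = dataStart + size + 2;        // skip the chunk's trailing CRLF
    }
    return out.toByteArray();
  }

  static int indexOfCrlf(byte[] b, int from) {
    for (int i = from; i < b.length - 1; i++) {
      if (b[i] == '\r' && b[i + 1] == '\n') {
        return i;
      }
    }
    return b.length;
  }

  public static void main(String[] args) {
    // Wire format matching the "BBBBB" experiment from this thread.
    String wire = "5\r\nBBBBB\r\n0\r\nx-amz-checksum-crc32:ntdN8g==\r\n\r\n";
    byte[] decoded = decode(wire.getBytes(StandardCharsets.US_ASCII));
    System.out.println(new String(decoded, StandardCharsets.UTF_8)); // prints "BBBBB"
  }
}
```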

@afranken
Member

uploading chunked, signed data works without issues, BTW.

@afranken afranken changed the title S3TransferManager and multiPart upload store content incorrectly When using async uploads, S3Mock incorrectly stores unsigned, chunked uploads without removing meta (chunk, checksum) information. Apr 26, 2024
@afranken
Member

@glennvdv / @vlbaluk I just released 3.7.1, which now correctly handles unsigned, chunked uploads when using async HTTP clients, as long as the uploaded files are below 16 KB.

See #1818
