From d0c7f912d66a499516f497710e52a64034dbecb6 Mon Sep 17 00:00:00 2001
From: Scott Gerring
Date: Fri, 4 Aug 2023 13:43:56 +0100
Subject: [PATCH] feat: Add Batch Processor module (#1317)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Starting to sketch out shape of API for batch processor
* Variant 1
* Some more examples
* Add extra bit for handling message-specific mutation
* Make clear what's not public
* test with interfaces
* move tests
* refactoring a bit
* refactoring and adding FIFO
* refactoring and adding FIFO
* adding FIFO management
* cleanup
* add javadoc
* Flesh out builder option a bit
* Flesh out a bit more
* more changes
* Leaning into the builder style. needs some more thought
* The shape of it is rightish
* Working working
* Work
* Work on kinesis batch handler
* More tests
* More tests and starting to add an example
* Working on batch
* feat(batch): initial DdbBatchMessageHandler implementation
* more
* fix pom.xml for powertools-examples-batch
* Add dynamodb example
* Move template into subdir
* Better structure
* tidy up
* Trying to get kinesis going
* Kinesis demo working
* Updated readme
* Deprecated everywhere
* Address initial review comments
* Add success tests for Kinesis/S3
* Increase DDB coverage
* Tell sonar to ignore dupes in examples
* Add docs
* Add warning
* More doco
* Format
* Docs good
* Disabling formatting check for now as its breaking the build and I can't work out how to autoapply it from intellij properly
* Make checkstyle happy
* Add docs from heitor
* More docs changes
* move ddb template in the right folder
* Changes
* add items updates and deletions to ddb example
* Will it blend?
* More changes
* e2e test handler
* Try work for SQS only
* More greatness
* Almost good
* SQS works
* Also kinesis e2e
* Lets try doing it with streams
* Try make it work with streams
* Streams?
* Make SQS test work
* SQS and Kinesis work
* DynamoDB E2E works
* Formatting
* Try exclude e2e-tests from dupe checking
* Rename sonar file
* Formatting
* Update docs/utilities/batch.md

Co-authored-by: Jérôme Van Der Linden <117538+jeromevdl@users.noreply.github.com>

* Update docs/utilities/batch.md

Co-authored-by: Jérôme Van Der Linden <117538+jeromevdl@users.noreply.github.com>

* Address review comments
* Missed one
* Formatting
* Cleanup doc linking
* More doco
* Update docs/utilities/batch.md

Co-authored-by: Jérôme Van Der Linden <117538+jeromevdl@users.noreply.github.com>

* Update batch.md

Address review comments

* Skip aspectj run

---------

Co-authored-by: Scott Gerring
Co-authored-by: Jerome Van Der Linden
Co-authored-by: Michele Ricciardi
Co-authored-by: Jérôme Van Der Linden <117538+jeromevdl@users.noreply.github.com>
---
 .sonarcloud.properties                        |   2 +
 docs/utilities/batch.md                       | 805 ++++++++++--------
 docs/utilities/sqs_batch.md                   | 489 +++++++++++
 docs/utilities/sqs_large_message_handling.md  |   8 +-
 examples/README.md                            |   3 +-
 examples/pom.xml                              |   1 +
 examples/powertools-examples-batch/README.md  |  35 +
 .../deploy/ddb-streams/template.yaml          |  74 ++
 .../deploy/kinesis/template.yml               |  83 ++
 .../deploy/sqs/template.yml                   | 147 ++++
 examples/powertools-examples-batch/pom.xml    | 198 +++++
 .../dynamo/DynamoDBStreamBatchHandler.java    |  32 +
 .../org/demo/batch/dynamo/DynamoDBWriter.java | 108 +++
 .../batch/kinesis/KinesisBatchHandler.java    |  33 +
 .../batch/kinesis/KinesisBatchSender.java     |  78 ++
 .../java/org/demo/batch/model/DdbProduct.java |  88 ++
 .../java/org/demo/batch/model/Product.java    |  84 ++
 .../org/demo/batch/sqs/SqsBatchHandler.java   |  33 +
 .../org/demo/batch/sqs/SqsBatchSender.java    |  77 ++
 .../src/main/resources/log4j2.xml             |  16 +
 mkdocs.yml                                    |   3 +
 pom.xml                                       |   1 +
 powertools-batch/pom.xml                      |  68 ++
 .../batch/BatchMessageHandlerBuilder.java     |  58 ++
 .../AbstractBatchMessageHandlerBuilder.java   | 142 +++
 .../DynamoDbBatchMessageHandlerBuilder.java   |  55 ++
 .../KinesisBatchMessageHandlerBuilder.java    |  58 ++
 .../SqsBatchMessageHandlerBuilder.java        |  64 ++
 .../DeserializationNotSupportedException.java |  28 +
 .../batch/handler/BatchMessageHandler.java    |  38 +
 .../handler/DynamoDbBatchMessageHandler.java  |  80 ++
 .../KinesisStreamsBatchMessageHandler.java    |  98 +++
 .../batch/handler/SqsBatchMessageHandler.java | 124 +++
 .../batch/DdbBatchProcessorTest.java          | 127 +++
 .../batch/KinesisBatchProcessorTest.java      | 156 ++++
 .../batch/SQSBatchProcessorTest.java          | 171 ++++
 .../lambda/powertools/batch/model/Basket.java |  67 ++
 .../powertools/batch/model/Product.java       |  84 ++
 .../src/test/resources/dynamo_event.json      |  97 +++
 .../src/test/resources/kinesis_event.json     |  38 +
 .../src/test/resources/sqs_event.json         |  55 ++
 .../src/test/resources/sqs_fifo_event.json    |  58 ++
 powertools-e2e-tests/handlers/batch/pom.xml   |  72 ++
 .../lambda/powertools/e2e/Function.java       | 172 ++++
 .../lambda/powertools/e2e/model/Product.java  |  56 ++
 .../batch/src/main/resources/log4j2.xml       |  16 +
 powertools-e2e-tests/handlers/pom.xml         |  11 +
 powertools-e2e-tests/pom.xml                  |   7 +-
 .../amazon/lambda/powertools/BatchE2ET.java   | 277 ++++++
 .../powertools/testutils/Infrastructure.java  |  74 +-
 .../utilities/EventDeserializer.java          |  17 +
 .../lambda/powertools/sqs/SqsBatch.java       |   5 +
 .../lambda/powertools/sqs/SqsUtils.java       |  47 +
 53 files changed, 4460 insertions(+), 358 deletions(-)
 create mode 100644 .sonarcloud.properties
 create mode 100644 docs/utilities/sqs_batch.md
 create mode 100644 examples/powertools-examples-batch/README.md
 create mode 100644 examples/powertools-examples-batch/deploy/ddb-streams/template.yaml
 create mode 100644 examples/powertools-examples-batch/deploy/kinesis/template.yml
 create mode 100644 examples/powertools-examples-batch/deploy/sqs/template.yml
 create mode 100644 examples/powertools-examples-batch/pom.xml
 create mode 100644 examples/powertools-examples-batch/src/main/java/org/demo/batch/dynamo/DynamoDBStreamBatchHandler.java
 create mode 100644 examples/powertools-examples-batch/src/main/java/org/demo/batch/dynamo/DynamoDBWriter.java
 create mode 100644 examples/powertools-examples-batch/src/main/java/org/demo/batch/kinesis/KinesisBatchHandler.java
 create mode 100644 examples/powertools-examples-batch/src/main/java/org/demo/batch/kinesis/KinesisBatchSender.java
 create mode 100644 examples/powertools-examples-batch/src/main/java/org/demo/batch/model/DdbProduct.java
 create mode 100644 examples/powertools-examples-batch/src/main/java/org/demo/batch/model/Product.java
 create mode 100644 examples/powertools-examples-batch/src/main/java/org/demo/batch/sqs/SqsBatchHandler.java
 create mode 100644 examples/powertools-examples-batch/src/main/java/org/demo/batch/sqs/SqsBatchSender.java
 create mode 100644 examples/powertools-examples-batch/src/main/resources/log4j2.xml
 create mode 100644 powertools-batch/pom.xml
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/BatchMessageHandlerBuilder.java
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/AbstractBatchMessageHandlerBuilder.java
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/DynamoDbBatchMessageHandlerBuilder.java
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/KinesisBatchMessageHandlerBuilder.java
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/SqsBatchMessageHandlerBuilder.java
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/exception/DeserializationNotSupportedException.java
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/BatchMessageHandler.java
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/DynamoDbBatchMessageHandler.java
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/KinesisStreamsBatchMessageHandler.java
 create mode 100644 powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/SqsBatchMessageHandler.java
 create mode 100644 powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/DdbBatchProcessorTest.java
 create mode 100644 powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/KinesisBatchProcessorTest.java
 create mode 100644 powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/SQSBatchProcessorTest.java
 create mode 100644 powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/model/Basket.java
 create mode 100644 powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/model/Product.java
 create mode 100644 powertools-batch/src/test/resources/dynamo_event.json
 create mode 100644 powertools-batch/src/test/resources/kinesis_event.json
 create mode 100644 powertools-batch/src/test/resources/sqs_event.json
 create mode 100644 powertools-batch/src/test/resources/sqs_fifo_event.json
 create mode 100644 powertools-e2e-tests/handlers/batch/pom.xml
 create mode 100644 powertools-e2e-tests/handlers/batch/src/main/java/software/amazon/lambda/powertools/e2e/Function.java
 create mode 100644 powertools-e2e-tests/handlers/batch/src/main/java/software/amazon/lambda/powertools/e2e/model/Product.java
 create mode 100644 powertools-e2e-tests/handlers/batch/src/main/resources/log4j2.xml
 create mode 100644 powertools-e2e-tests/src/test/java/software/amazon/lambda/powertools/BatchE2ET.java

diff --git a/.sonarcloud.properties b/.sonarcloud.properties
new file mode 100644
index 000000000..1bd93ed9e
--- /dev/null
+++ b/.sonarcloud.properties
@@ -0,0 +1,2 @@
+# Ignore code duplicates in the examples
+sonar.cpd.exclusions=examples/**/*,powertools-e2e-tests/**/*
\ No newline at end of file
diff --git a/docs/utilities/batch.md b/docs/utilities/batch.md
index 95704b8a0..6b38b438c
100644 --- a/docs/utilities/batch.md +++ b/docs/utilities/batch.md @@ -1,460 +1,571 @@ --- -title: SQS Batch Processing +title: Batch Processing description: Utility --- -The SQS batch processing utility provides a way to handle partial failures when processing batches of messages from SQS. -The utility handles batch processing for both -[standard](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html) and -[FIFO](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html) SQS queues. +The batch processing utility provides a way to handle partial failures when processing batches of messages from SQS queues, +SQS FIFO queues, Kinesis Streams, or DynamoDB Streams. + +```mermaid +stateDiagram-v2 + direction LR + BatchSource: Amazon SQS

<br/><br/> Amazon Kinesis Data Streams <br/><br/> Amazon DynamoDB Streams
+ LambdaInit: Lambda invocation + BatchProcessor: Batch Processor + RecordHandler: Record Handler function + YourLogic: Your logic to process each batch item + LambdaResponse: Lambda response + BatchSource --> LambdaInit + LambdaInit --> BatchProcessor + BatchProcessor --> RecordHandler + state BatchProcessor { + [*] --> RecordHandler: Your function + RecordHandler --> YourLogic + } + RecordHandler --> BatchProcessor: Collect results + BatchProcessor --> LambdaResponse: Report items that failed processing +``` **Key Features** -* Prevent successfully processed messages from being returned to SQS -* A simple interface for individually processing messages from a batch +* Reports batch item failures to reduce the number of retries for a record upon errors +* Simple interface to process each batch record +* Integrates with Java Events library and the deserialization module +* Build your own batch processor by extending primitives **Background** -When using SQS as a Lambda event source mapping, Lambda functions can be triggered with a batch of messages from SQS. -If your function fails to process any message from the batch, the entire batch returns to your SQS queue, and your -Lambda function will be triggered with the same batch again. With this utility, messages within a batch will be handled individually - only messages that were not successfully processed -are returned to the queue. +When using SQS, Kinesis Data Streams, or DynamoDB Streams as a Lambda event source, your Lambda functions are +triggered with a batch of messages. +If your function fails to process any message from the batch, the entire batch returns to your queue or stream. +This same batch is then retried until one of the following conditions happens first: +**a)** your Lambda function returns a successful response, +**b)** the record reaches maximum retry attempts, or +**c)** records expire. 
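For the stream sources, the retry position after a partial failure is derived from the failed records' sequence numbers: a single failure checkpoints at its own sequence number, multiple failures at the lowest one. A minimal, self-contained sketch of that selection rule (illustrative only, not the library's implementation; `checkpointFor` is an invented name):

```java
import java.math.BigInteger;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class StreamCheckpoint {

    // Given the sequence numbers of the records that failed processing,
    // the checkpoint is the lowest failed sequence number: records before
    // it are treated as processed, and the shard is retried from that
    // record onward. An empty result means the whole batch succeeded.
    static Optional<String> checkpointFor(List<String> failedSequenceNumbers) {
        return failedSequenceNumbers.stream()
                .min(Comparator.comparing(BigInteger::new));
    }
}
```

Sequence numbers are compared numerically (via `BigInteger`) rather than lexically, since they are arbitrary-precision decimal strings.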
+ +```mermaid +journey + section Conditions + Successful response: 5: Success + Maximum retries: 3: Failure + Records expired: 1: Failure +``` + +This behavior changes when you enable Report Batch Item Failures feature in your Lambda function event source configuration: + + +* [**SQS queues**](#sqs-standard). Only messages reported as failure will return to the queue for a retry, while successful ones will be deleted. +* [**Kinesis data streams**](#kinesis-and-dynamodb-streams) and [**DynamoDB streams**](#kinesis-and-dynamodb-streams). +Single reported failure will use its sequence number as the stream checkpoint. +Multiple reported failures will use the lowest sequence number as checkpoint. + +With this utility, batch records are processed individually – only messages that failed to be processed +return to the queue or stream for a further retry. You simply build a `BatchProcessor` in your handler, +and return its response from the handler's `processMessage` implementation. Exceptions are handled +internally and an appropriate partial response for the message source is returned to Lambda for you. !!! warning - While this utility lowers the chance of processing messages more than once, it is not guaranteed. We recommend implementing processing logic in an idempotent manner wherever possible. + While this utility lowers the chance of processing messages more than once, it is still not guaranteed. + We recommend implementing processing logic in an idempotent manner wherever possible, for instance, + by taking advantage of [the idempotency module](idempotency.md). More details on how Lambda works with SQS can be found in the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html) ## Install -Depending on your version of Java (either Java 1.8 or 11+), the configuration slightly changes. +We simply add `powertools-batch` to our build dependencies. 
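In Maven terms, the dependency to add is simply the following block (shown with the `{{ powertools.version }}` placeholder used throughout these docs):

```xml
<dependency>
    <groupId>software.amazon.lambda</groupId>
    <artifactId>powertools-batch</artifactId>
    <version>{{ powertools.version }}</version>
</dependency>
```

Gradle users add the equivalent `implementation 'software.amazon.lambda:powertools-batch:{{ powertools.version }}'` line instead.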
Note - if you are using other Powertools +modules that require code-weaving, such as `powertools-core`, you will need to configure that also. -=== "Maven Java 11+" +=== "Maven" - ```xml hl_lines="3-7 16 18 24-27"" + ```xml ... software.amazon.lambda - powertools-sqs + powertools-batch {{ powertools.version }} ... - ... - - - - ... - - dev.aspectj - aspectj-maven-plugin - 1.13.1 - - 11 - 11 - 11 - - - software.amazon.lambda - powertools-sqs - - - - - - - compile - - - - - ... - - - ``` - -=== "Maven Java 1.8" - - ```xml hl_lines="3-7 16 18 24-27" - - ... - - software.amazon.lambda - powertools-sqs - {{ powertools.version }} - - ... - - ... - - - - ... - - org.codehaus.mojo - aspectj-maven-plugin - 1.14.0 - - 1.8 - 1.8 - 1.8 - - - software.amazon.lambda - powertools-sqs - - - - - - - compile - - - - - ... - - - ``` - -=== "Gradle Java 11+" - - ```groovy hl_lines="3 11" - plugins { - id 'java' - id 'io.freefair.aspectj.post-compile-weaving' version '8.1.0' - } - - repositories { - mavenCentral() - } - - dependencies { - aspect 'software.amazon.lambda:powertools-sqs:{{ powertools.version }}' - } - - sourceCompatibility = 11 // or higher - targetCompatibility = 11 // or higher ``` -=== "Gradle Java 1.8" +=== "Gradle" - ```groovy hl_lines="3 11" - plugins { - id 'java' - id 'io.freefair.aspectj.post-compile-weaving' version '6.6.3' - } + ```groovy repositories { mavenCentral() } dependencies { - aspect 'software.amazon.lambda:powertools-sqs:{{ powertools.version }}' + implementation 'software.amazon.lambda:powertools-batch:{{ powertools.version }}' } - - sourceCompatibility = 1.8 - targetCompatibility = 1.8 ``` +## Getting Started -## IAM Permissions - -This utility requires additional permissions to work as expected. Lambda functions using this utility require the `sqs:DeleteMessageBatch` permission. 
- -If you are also using [nonRetryableExceptions](#move-non-retryable-messages-to-a-dead-letter-queue) attribute, utility will need additional permission of `sqs:GetQueueAttributes` on source SQS. -It also needs `sqs:SendMessage` and `sqs:SendMessageBatch` on configured dead letter queue. - -If source or dead letter queue is configured to use encryption at rest using [AWS Key Management Service (KMS)](https://aws.amazon.com/kms/), function will need additional permissions of -`kms:GenerateDataKey` and `kms:Decrypt` on the KMS key being used for encryption. Refer [docs](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html#compatibility-with-aws-services) for more details. - -Refer [example project](https://github.com/aws-samples/aws-lambda-powertools-examples/blob/main/java/SqsBatchProcessing/template.yaml#L105) for policy details example. - - -## Processing messages from SQS - -You can use either **[SqsBatch annotation](#sqsbatch-annotation)**, or **[SqsUtils Utility API](#sqsutils-utility-api)** as a fluent API. - -Both have nearly the same behaviour when it comes to processing messages from the batch: +For this feature to work, you need to **(1)** configure your Lambda function event source to use `ReportBatchItemFailures`, +and **(2)** return a specific response to report which records failed to be processed. 
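The specific response in question is Lambda's partial-batch response: a list of `batchItemFailures` entries, each identifying one failed record by its `messageId` (SQS) or `sequenceNumber` (Kinesis and DynamoDB streams). For example:

```json
{
  "batchItemFailures": [
    { "itemIdentifier": "d9144555-9a4f-4ec3-99a0-34ce359b4b54" }
  ]
}
```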
- -* **Entire batch has been successfully processed**, where your Lambda handler returned successfully, we will let SQS delete the batch to optimize your cost -* **Entire Batch has been partially processed successfully**, where exceptions were raised within your `SqsMessageHandler` interface implementation, we will: - - **1)** Delete successfully processed messages from the queue by directly calling `sqs:DeleteMessageBatch` - - **2)** If a message with a [message group ID](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagegroupid-property.html) fails, - the processing of the batch will be stopped and the remainder of the messages will be returned to SQS. - This behaviour [is required to handle SQS FIFO queues](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting). - - **3)** if non retryable exceptions occur, messages resulting in configured exceptions during processing will be immediately moved to the dead letter queue associated to the source SQS queue or deleted from the source SQS queue if `deleteNonRetryableMessageFromQueue` is set to `true`. - - **4)** Raise `SQSBatchProcessingException` to ensure failed messages return to your SQS queue +You can use your preferred deployment framework to set the correct configuration, +while the `powertools-batch` module handles generating the response, which simply needs to be returned as the result of +your Lambda handler. -The only difference is that **SqsUtils Utility API** will give you access to return from the processed messages if you need. Exception `SQSBatchProcessingException` thrown from the 
+A complete [Serverless Application Model](https://aws.amazon.com/serverless/sam/) example can be found +[here](https://github.com/aws-powertools/powertools-lambda-java/tree/main/examples/powertools-examples-batch) covering +all of the batch sources. -## Functional Interface SqsMessageHandler +For more information on configuring `ReportBatchItemFailures`, +see the details for [SQS](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting), +[Kinesis](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html#services-kinesis-batchfailurereporting),and +[DynamoDB Streams](https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html#services-ddb-batchfailurereporting). -Both [annotation](#sqsbatch-annotation) and [SqsUtils Utility API](#sqsutils-utility-api) requires an implementation of functional interface `SqsMessageHandler`. -This implementation is responsible for processing each individual message from the batch, and to raise an exception if unable to process any of the messages sent. -**Any non-exception/successful return from your record handler function** will instruct utility to queue up each individual message for deletion. -### SqsBatch annotation +!!! note "You do not need any additional IAM permissions to use this utility, except for what each event source requires." -When using this annotation, you need provide a class implementation of `SqsMessageHandler` that will process individual messages from the batch - It should raise an exception if it is unable to process the record. +### Processing messages from SQS -All records in the batch will be passed to this handler for processing, even if exceptions are thrown - Here's the behaviour after completing the batch: - -* **Any successfully processed messages**, we will delete them from the queue via `sqs:DeleteMessageBatch`. 
-* **if, nonRetryableExceptions attribute is used**, messages resulting in configured exceptions during processing will be immediately moved to the dead letter queue associated to the source SQS queue or deleted from the source SQS queue if `deleteNonRetryableMessageFromQueue` is set to `true`. -* **Any unprocessed messages detected**, we will raise `SQSBatchProcessingException` to ensure failed messages return to your SQS queue. - -!!! warning - You will not have access to the **processed messages** within the Lambda Handler - all processing logic will and should be performed by the implemented `#!java SqsMessageHandler#process()` function. - -=== "AppSqsEvent.java" +=== "SQSBatchHandler" + + ```java hl_lines="10 13-15 20 25" + import com.amazonaws.services.lambda.runtime.Context; + import com.amazonaws.services.lambda.runtime.RequestHandler; + import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse; + import com.amazonaws.services.lambda.runtime.events.SQSEvent; + import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder; + import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; + + public class SqsBatchHandler implements RequestHandler<SQSEvent, SQSBatchResponse> { - ```java hl_lines="7" - import software.amazon.lambda.powertools.sqs.SqsBatch; - import software.amazon.lambda.powertools.sqs.SqsMessageHandler; - import software.amazon.lambda.powertools.sqs.SqsUtils; + private final BatchMessageHandler<SQSEvent, SQSBatchResponse> handler; + + public SqsBatchHandler() { + handler = new BatchMessageHandlerBuilder() + .withSqsBatchHandler() + .buildWithMessageHandler(this::processMessage, Product.class); + } - public class AppSqsEvent implements RequestHandler<SQSEvent, String> { @Override - @SqsBatch(SampleMessageHandler.class) - public String handleRequest(SQSEvent input, Context context) { - return "{\"statusCode\": 200}"; + public SQSBatchResponse handleRequest(SQSEvent sqsEvent, Context context) { + return handler.processBatch(sqsEvent, context); } - public class SampleMessageHandler 
implements SqsMessageHandler { - @Override - public String process(SQSMessage message) { - // This will be called for each individual message from a batch - // It should raise an exception if the message was not processed successfully - String returnVal = doSomething(message.getBody()); - return returnVal; - } + private void processMessage(Product p, Context c) { + // Process the product } + } ``` -=== "AppSqsEventWithNonRetryableExceptions.java" - - ```java hl_lines="7 21" - import software.amazon.lambda.powertools.sqs.SqsBatch; - import software.amazon.lambda.powertools.sqs.SqsMessageHandler; - import software.amazon.lambda.powertools.sqs.SqsUtils; +=== "SQS Product" - public class AppSqsEvent implements RequestHandler { - @Override - @SqsBatch(value = SampleMessageHandler.class, nonRetryableExceptions = {IllegalArgumentException.class}) - public String handleRequest(SQSEvent input, Context context) { - return "{\"statusCode\": 200}"; + ```java + public class Product { + private long id; + + private String name; + + private double price; + + public Product() { } - public class SampleMessageHandler implements SqsMessageHandler { + public Product(long id, String name, double price) { + this.id = id; + this.name = name; + this.price = price; + } - @Override - public String process(SQSMessage message) { - // This will be called for each individual message from a batch - // It should raise an exception if the message was not processed successfully - String returnVal = doSomething(message.getBody()); - - if(/**Business validation failure**/) { - throw new IllegalArgumentException("Failed business validation. No point of retrying. Move me to DLQ." + message.getMessageId()); - } - - return returnVal; - } + public long getId() { + return id; } - } - ``` - - -### SqsUtils Utility API - -If you require access to the result of processed messages, you can use this utility. 
The result from calling **`#!java SqsUtils#batchProcessor()`** on the context manager will be a list of all the return values -from your **`#!java SqsMessageHandler#process()`** function. - -You can also use the utility in functional way by providing inline implementation of functional interface **`#!java SqsMessageHandler#process()`** - - -=== "Utility API" - ```java hl_lines="4" - public class AppSqsEvent implements RequestHandler> { - @Override - public List handleRequest(SQSEvent input, Context context) { - List returnValues = SqsUtils.batchProcessor(input, SampleMessageHandler.class); + public void setId(long id) { + this.id = id; + } + + public String getName() { + return name; + } - return returnValues; + public void setName(String name) { + this.name = name; } - public class SampleMessageHandler implements SqsMessageHandler { + public double getPrice() { + return price; + } - @Override - public String process(SQSMessage message) { - // This will be called for each individual message from a batch - // It should raise an exception if the message was not processed successfully - String returnVal = doSomething(message.getBody()); - return returnVal; - } + public void setPrice(double price) { + this.price = price; } } + ``` + +=== "SQS Example Event" + + ```json + { + "Records": [ + { + "messageId": "d9144555-9a4f-4ec3-99a0-34ce359b4b54", + "receiptHandle": "13e7f7851d2eaa5c01f208ebadbf1e72==", + "body": "{\n \"id\": 1234,\n \"name\": \"product\",\n \"price\": 42\n}", + "attributes": { + "ApproximateReceiveCount": "1", + "SentTimestamp": "1601975706495", + "SenderId": "AROAIFU437PVZ5L2J53F5", + "ApproximateFirstReceiveTimestamp": "1601975706499" + }, + "messageAttributes": { + }, + "md5OfBody": "13e7f7851d2eaa5c01f208ebadbf1e72", + "eventSource": "aws:sqs", + "eventSourceARN": "arn:aws:sqs:eu-central-1:123456789012:TestLambda", + "awsRegion": "eu-central-1" + }, + { + "messageId": "e9144555-9a4f-4ec3-99a0-34ce359b4b54", + "receiptHandle": 
"13e7f7851d2eaa5c01f208ebadbf1e72==", + "body": "{\n \"id\": 12345,\n \"name\": \"product5\",\n \"price\": 45\n}", + "attributes": { + "ApproximateReceiveCount": "1", + "SentTimestamp": "1601975706495", + "SenderId": "AROAIFU437PVZ5L2J53F5", + "ApproximateFirstReceiveTimestamp": "1601975706499" + }, + "messageAttributes": { + }, + "md5OfBody": "13e7f7851d2eaa5c01f208ebadbf1e72", + "eventSource": "aws:sqs", + "eventSourceARN": "arn:aws:sqs:eu-central-1:123456789012:TestLambda", + "awsRegion": "eu-central-1" + }] + } ``` -=== "Function implementation" +### Processing messages from Kinesis Streams - ```java hl_lines="5 6 7 8 9 10" - public class AppSqsEvent implements RequestHandler> { +=== "KinesisBatchHandler" + + ```java hl_lines="10 13-15 20 24" + import com.amazonaws.services.lambda.runtime.Context; + import com.amazonaws.services.lambda.runtime.RequestHandler; + import com.amazonaws.services.lambda.runtime.events.KinesisEvent; + import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; + import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder; + import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; + + public class KinesisBatchHandler implements RequestHandler { + + private final BatchMessageHandler handler; + + public KinesisBatchHandler() { + handler = new BatchMessageHandlerBuilder() + .withKinesisBatchHandler() + .buildWithMessageHandler(this::processMessage, Product.class); + } @Override - public List handleRequest(SQSEvent input, Context context) { - List returnValues = SqsUtils.batchProcessor(input, (message) -> { - // This will be called for each individual message from a batch - // It should raise an exception if the message was not processed successfully - String returnVal = doSomething(message.getBody()); - return returnVal; - }); + public StreamsEventResponse handleRequest(KinesisEvent kinesisEvent, Context context) { + return handler.processBatch(kinesisEvent, context); + } - return 
returnValues; + private void processMessage(Product p, Context c) { + // process the product } + } ``` -## Passing custom SqsClient - -If you need to pass custom SqsClient such as region to the SDK, you can pass your own `SqsClient` to be used by utility either for -**[SqsBatch annotation](#sqsbatch-annotation)**, or **[SqsUtils Utility API](#sqsutils-utility-api)**. - -=== "App.java" - - ```java hl_lines="3 4" - public class AppSqsEvent implements RequestHandler> { - static { - SqsUtils.overrideSqsClient(SqsClient.builder() - .build()); +=== "Kinesis Product" + + ```java + public class Product { + private long id; + + private String name; + + private double price; + + public Product() { } - @Override - public List handleRequest(SQSEvent input, Context context) { - List returnValues = SqsUtils.batchProcessor(input, SampleMessageHandler.class); + public Product(long id, String name, double price) { + this.id = id; + this.name = name; + this.price = price; + } + + public long getId() { + return id; + } - return returnValues; + public void setId(long id) { + this.id = id; } - public class SampleMessageHandler implements SqsMessageHandler { + public String getName() { + return name; + } - @Override - public String process(SQSMessage message) { - // This will be called for each individual message from a batch - // It should raise an exception if the message was not processed successfully - String returnVal = doSomething(message.getBody()); - return returnVal; - } + public void setName(String name) { + this.name = name; + } + + public double getPrice() { + return price; + } + + public void setPrice(double price) { + this.price = price; } } + ``` + +=== "Kinesis Example Event" + + ```json + { + "Records": [ + { + "kinesis": { + "partitionKey": "partitionKey-03", + "kinesisSchemaVersion": "1.0", + "data": "eyJpZCI6MTIzNCwgIm5hbWUiOiJwcm9kdWN0IiwgInByaWNlIjo0Mn0=", + "sequenceNumber": "49545115243490985018280067714973144582180062593244200961", + 
"approximateArrivalTimestamp": 1428537600, + "encryptionType": "NONE" + }, + "eventSource": "aws:kinesis", + "eventID": "shardId-000000000000:49545115243490985018280067714973144582180062593244200961", + "invokeIdentityArn": "arn:aws:iam::EXAMPLE", + "eventVersion": "1.0", + "eventName": "aws:kinesis:record", + "eventSourceARN": "arn:aws:kinesis:EXAMPLE", + "awsRegion": "eu-central-1" + }, + { + "kinesis": { + "partitionKey": "partitionKey-03", + "kinesisSchemaVersion": "1.0", + "data": "eyJpZCI6MTIzNDUsICJuYW1lIjoicHJvZHVjdDUiLCAicHJpY2UiOjQ1fQ==", + "sequenceNumber": "49545115243490985018280067714973144582180062593244200962", + "approximateArrivalTimestamp": 1428537600, + "encryptionType": "NONE" + }, + "eventSource": "aws:kinesis", + "eventID": "shardId-000000000000:49545115243490985018280067714973144582180062593244200961", + "invokeIdentityArn": "arn:aws:iam::EXAMPLE", + "eventVersion": "1.0", + "eventName": "aws:kinesis:record", + "eventSourceARN": "arn:aws:kinesis:EXAMPLE", + "awsRegion": "eu-central-1" + } + ] + } ``` +### Processing messages from DynamoDB Streams -## Suppressing exceptions - -If you want to disable the default behavior where `SQSBatchProcessingException` is raised if there are any exception, you can pass the `suppressException` boolean argument. 
-
-=== "Within SqsBatch annotation"
-
-    ```java hl_lines="2"
+=== "DynamoDBStreamBatchHandler"
+
+    ```java hl_lines="10 13-15 20 24"
+    import com.amazonaws.services.lambda.runtime.Context;
+    import com.amazonaws.services.lambda.runtime.RequestHandler;
+    import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
+    import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse;
+    import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder;
+    import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler;
+
+    public class DynamoDBStreamBatchHandler implements RequestHandler<DynamodbEvent, StreamsEventResponse> {
+
+        private final BatchMessageHandler<DynamodbEvent, StreamsEventResponse> handler;
+
+        public DynamoDBStreamBatchHandler() {
+            handler = new BatchMessageHandlerBuilder()
+                    .withDynamoDbBatchHandler()
+                    .buildWithRawMessageHandler(this::processMessage);
+        }
+
    @Override
-    @SqsBatch(value = SampleMessageHandler.class, suppressException = true)
-    public String handleRequest(SQSEvent input, Context context) {
-        return "{\"statusCode\": 200}";
+        public StreamsEventResponse handleRequest(DynamodbEvent ddbEvent, Context context) {
+            return handler.processBatch(ddbEvent, context);
    }
-    ```

-=== "Within SqsUtils Utility API"
+        private void processMessage(DynamodbEvent.DynamodbStreamRecord dynamodbStreamRecord, Context context) {
+            // Process the change record
+        }
+    }
+    ```
-
-    ```java hl_lines="3"
-    @Override
-    public List<String> handleRequest(SQSEvent input, Context context) {
-        List<String> returnValues = SqsUtils.batchProcessor(input, true, SampleMessageHandler.class);
-
-        return returnValues;
+=== "DynamoDB Example Event"
+
+    ```json
+    {
+      "Records": [
+        {
+          "eventID": "c4ca4238a0b923820dcc509a6f75849b",
+          "eventName": "INSERT",
+          "eventVersion": "1.1",
+          "eventSource": "aws:dynamodb",
+          "awsRegion": "eu-central-1",
+          "dynamodb": {
+            "Keys": {
+              "Id": {
+                "N": "101"
+              }
+            },
+            "NewImage": {
+              "Message": {
+                "S": "New item!"
+ }, + "Id": { + "N": "101" + } + }, + "ApproximateCreationDateTime": 1428537600, + "SequenceNumber": "4421584500000000017450439091", + "SizeBytes": 26, + "StreamViewType": "NEW_AND_OLD_IMAGES" + }, + "eventSourceARN": "arn:aws:dynamodb:eu-central-1:123456789012:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899", + "userIdentity": { + "principalId": "dynamodb.amazonaws.com", + "type": "Service" + } + }, + { + "eventID": "c81e728d9d4c2f636f067f89cc14862c", + "eventName": "MODIFY", + "eventVersion": "1.1", + "eventSource": "aws:dynamodb", + "awsRegion": "eu-central-1", + "dynamodb": { + "Keys": { + "Id": { + "N": "101" + } + }, + "NewImage": { + "Message": { + "S": "This item has changed" + }, + "Id": { + "N": "101" + } + }, + "OldImage": { + "Message": { + "S": "New item!" + }, + "Id": { + "N": "101" + } + }, + "ApproximateCreationDateTime": 1428537600, + "SequenceNumber": "4421584500000000017450439092", + "SizeBytes": 59, + "StreamViewType": "NEW_AND_OLD_IMAGES" + }, + "eventSourceARN": "arn:aws:dynamodb:eu-central-1:123456789012:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899" + } + ] } ``` -## Move non retryable messages to a dead letter queue -If you want certain exceptions to be treated as permanent failures during batch processing, i.e. exceptions where the result of retrying will -always be a failure and want these can be immediately moved to the dead letter queue associated to the source SQS queue, you can use `SqsBatch#nonRetryableExceptions()` -to configure such exceptions. +## Handling Messages -If you want such messages to be deleted instead, set `SqsBatch#deleteNonRetryableMessageFromQueue()` to `true`. By default, its value is `false`. +### Raw message and deserialized message handlers +You must provide either a raw message handler, or a deserialized message handler. 
The raw message handler receives
+the envelope record type relevant for the particular event source - for instance, the SQS event source provides
+[SQSMessage](https://javadoc.io/doc/com.amazonaws/aws-lambda-java-events/2.2.2/com/amazonaws/services/lambda/runtime/events/SQSEvent.html)
+instances. The deserialized message handler extracts the body from this envelope, and deserializes it to a user-defined
+type. Note that deserialized message handlers are not relevant for the DynamoDB provider, as the format of the inner
+message is fixed by DynamoDB.
+
+In general, the deserialized message handler should be used unless you need access to information on the envelope.
+
+=== "Raw Message Handler"
+
+    ```java
+    public void setup() {
+        BatchMessageHandler<SQSEvent, SQSBatchResponse> handler = new BatchMessageHandlerBuilder()
+                .withSqsBatchHandler()
+                .buildWithRawMessageHandler(this::processRawMessage);
+    }
+
+    private void processRawMessage(SQSEvent.SQSMessage sqsMessage) {
+        // Do something with the raw message
+    }
returnVal = doSomething(message.getBody());
+    ```
-        if(/**Business validation failure**/) {
-            throw new IllegalArgumentException("Failed business validation. No point of retrying. Move me to DLQ." + message.getMessageId());
+    }
+
+=== "Deserialized Message Handler"
-            return returnVal;
-        }
-    }
+    ```java
+    public void setup() {
+        BatchMessageHandler<SQSEvent, SQSBatchResponse> handler = new BatchMessageHandlerBuilder()
+                .withSqsBatchHandler()
+                .buildWithMessageHandler(this::processMessage, Product.class);
+    }
+
+    private void processMessage(Product product) {
+        // Do something with the deserialized message
+    }
+    ```
-=== "SqsBatch API"
+### Success and failure handlers
+
+You can register a success or failure handler which will be invoked as each message is processed by the batch
+module. This may be useful for reporting - for instance, writing metrics or logging failures.
+
+These handlers are optional. Batch failures are handled by the module regardless of whether or not you
+provide a custom failure handler.
+
+Handlers can be provided when building the batch processor and are available for all event sources.
+For instance, for DynamoDB:
+
+```java
+BatchMessageHandler<DynamodbEvent, StreamsEventResponse> handler = new BatchMessageHandlerBuilder()
+        .withDynamoDbBatchHandler()
+        .withSuccessHandler((m) -> {
+            // Success handler receives the raw message
+            LOGGER.info("Message with sequenceNumber {} was successfully processed",
+                    m.getDynamodb().getSequenceNumber());
+        })
+        .withFailureHandler((m, e) -> {
+            // Failure handler receives the raw message and the exception thrown.
+            LOGGER.info("Message with sequenceNumber {} failed to be processed: {}",
+                    m.getDynamodb().getSequenceNumber(), e);
+        })
+        .buildWithMessageHandler(this::processMessage);
+```
-
-    ```java hl_lines="9 23"
-    import software.amazon.lambda.powertools.sqs.SqsBatch;
-    import software.amazon.lambda.powertools.sqs.SqsMessageHandler;
-    import software.amazon.lambda.powertools.sqs.SqsUtils;
-
-    public class AppSqsEvent implements RequestHandler<SQSEvent, String> {
-        @Override
-        public String handleRequest(SQSEvent input, Context context) {
-
-            SqsUtils.batchProcessor(input, BatchProcessor.class, IllegalArgumentException.class);
-
-            return "{\"statusCode\": 200}";
-        }
-
-        public class SampleMessageHandler implements SqsMessageHandler<String> {
-
-            @Override
-            public String process(SQSMessage message) {
-                // This will be called for each individual message from a batch
-                // It should raise an exception if the message was not processed successfully
-                String returnVal = doSomething(message.getBody());
+!!! info
+    If the success handler throws an exception, the item it is processing will be marked as failed by the
+    batch processor.
+    If the failure handler throws, the batch processing will continue; the item it is processing has
+    already been marked as failed.
-
-                if(/**Business validation failure**/) {
-                    throw new IllegalArgumentException("Failed business validation. No point of retrying. Move me to DLQ."
+ message.getMessageId());
-        }
-        return returnVal;
-    }
+
+### Lambda Context
+
+Both raw and deserialized message handlers may optionally take the Lambda context as an argument:
+
+```java
+    public class ClassWithHandlers {
+
+        private void processMessage(Product product) {
+            // Do something with the deserialized message
+        }
+
+        private void processMessageWithContext(Product product, Context context) {
+            // Do something with the deserialized message and the Lambda context
+        }
+    }
+```
diff --git a/docs/utilities/sqs_batch.md b/docs/utilities/sqs_batch.md
new file mode 100644
index 000000000..658f7b085
--- /dev/null
+++ b/docs/utilities/sqs_batch.md
@@ -0,0 +1,489 @@
+---
+title: SQS Batch Processing (Deprecated)
+description: Utility
+---
+
+!!! warning
+    The SQS batch module is now deprecated and will be removed in v2 of the library. Use the [batch module](batch.md),
+    and check out **[migrating to the batch library](#migrating-to-the-batch-library)** for migration instructions.
+
+The SQS batch processing utility provides a way to handle partial failures when processing batches of messages from SQS.
+The utility handles batch processing for both
+[standard](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html) and
+[FIFO](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html) SQS queues.
+
+**Key Features**
+
+* Prevent successfully processed messages from being returned to SQS
+* A simple interface for individually processing messages from a batch
+
+**Background**
+
+When using SQS as a Lambda event source mapping, Lambda functions can be triggered with a batch of messages from SQS.
+If your function fails to process any message from the batch, the entire batch returns to your SQS queue, and your
+Lambda function will be triggered with the same batch again.
With this utility, messages within a batch will be handled individually - only messages that were not successfully processed
+are returned to the queue.
+
+!!! warning
+    While this utility lowers the chance of processing messages more than once, it is not guaranteed. We recommend implementing processing logic in an idempotent manner wherever possible.
+    More details on how Lambda works with SQS can be found in the [AWS documentation](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html).
+
+## Install
+
+Depending on your version of Java (either Java 1.8 or 11+), the configuration differs slightly.
+
+=== "Maven Java 11+"
+
+    ```xml hl_lines="3-7 16 18 24-27"
+    <dependencies>
+        ...
+        <dependency>
+            <groupId>software.amazon.lambda</groupId>
+            <artifactId>powertools-sqs</artifactId>
+            <version>{{ powertools.version }}</version>
+        </dependency>
+        ...
+    </dependencies>
+    ...
+    <build>
+        <plugins>
+            ...
+            <plugin>
+                <groupId>dev.aspectj</groupId>
+                <artifactId>aspectj-maven-plugin</artifactId>
+                <version>1.13.1</version>
+                <configuration>
+                    <source>11</source>
+                    <target>11</target>
+                    <complianceLevel>11</complianceLevel>
+                    <aspectLibraries>
+                        <aspectLibrary>
+                            <groupId>software.amazon.lambda</groupId>
+                            <artifactId>powertools-sqs</artifactId>
+                        </aspectLibrary>
+                    </aspectLibraries>
+                </configuration>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>compile</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+            ...
+        </plugins>
+    </build>
+    ```
+
+=== "Maven Java 1.8"
+
+    ```xml hl_lines="3-7 16 18 24-27"
+    <dependencies>
+        ...
+        <dependency>
+            <groupId>software.amazon.lambda</groupId>
+            <artifactId>powertools-sqs</artifactId>
+            <version>{{ powertools.version }}</version>
+        </dependency>
+        ...
+    </dependencies>
+    ...
+    <build>
+        <plugins>
+            ...
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>aspectj-maven-plugin</artifactId>
+                <version>1.14.0</version>
+                <configuration>
+                    <source>1.8</source>
+                    <target>1.8</target>
+                    <complianceLevel>1.8</complianceLevel>
+                    <aspectLibraries>
+                        <aspectLibrary>
+                            <groupId>software.amazon.lambda</groupId>
+                            <artifactId>powertools-sqs</artifactId>
+                        </aspectLibrary>
+                    </aspectLibraries>
+                </configuration>
+                <executions>
+                    <execution>
+                        <goals>
+                            <goal>compile</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
+            ...
+        </plugins>
+    </build>
+    ```
+
+=== "Gradle Java 11+"
+
+    ```groovy hl_lines="3 11"
+    plugins {
+        id 'java'
+        id 'io.freefair.aspectj.post-compile-weaving' version '8.1.0'
+    }
+
+    repositories {
+        mavenCentral()
+    }
+
+    dependencies {
+        aspect 'software.amazon.lambda:powertools-sqs:{{ powertools.version }}'
+    }
+
+    sourceCompatibility = 11 // or higher
+    targetCompatibility = 11 // or higher
+    ```
+
+=== "Gradle Java 1.8"
+
+    ```groovy hl_lines="3 11"
+    plugins {
+        id 'java'
+        id 'io.freefair.aspectj.post-compile-weaving' version '6.6.3'
+    }
+
+    repositories {
+        mavenCentral()
+    }
+
+    dependencies {
+        aspect 'software.amazon.lambda:powertools-sqs:{{ powertools.version }}'
+    }
+
+    sourceCompatibility = 1.8
+    targetCompatibility = 1.8
+    ```
+
+## IAM Permissions
+
+This utility requires additional permissions to work as expected. Lambda functions using this utility require the `sqs:DeleteMessageBatch` permission.
+
+If you are also using the [nonRetryableExceptions](#move-non-retryable-messages-to-a-dead-letter-queue) attribute, the utility will need the additional permission `sqs:GetQueueAttributes` on the source SQS queue.
+It also needs `sqs:SendMessage` and `sqs:SendMessageBatch` on the configured dead letter queue.
+
+If the source or dead letter queue is configured to use encryption at rest with [AWS Key Management Service (KMS)](https://aws.amazon.com/kms/), the function will need the additional permissions
+`kms:GenerateDataKey` and `kms:Decrypt` on the KMS key used for encryption. Refer to the [docs](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html#compatibility-with-aws-services) for more details.
+
+Refer to the [example project](https://github.com/aws-samples/aws-lambda-powertools-examples/blob/main/java/SqsBatchProcessing/template.yaml#L105) for an example policy.
+
+
+## Processing messages from SQS
+
+You can use either the **[SqsBatch annotation](#sqsbatch-annotation)**, or the **[SqsUtils Utility API](#sqsutils-utility-api)** as a fluent API.
+
+Both have nearly the same behaviour when it comes to processing messages from the batch:
+
+* **Entire batch has been successfully processed**, where your Lambda handler returned successfully, we will let SQS delete the batch to optimize your cost
+* **Entire batch has been partially processed successfully**, where exceptions were raised within your `SqsMessageHandler` interface implementation, we will:
+    - **1)** Delete successfully processed messages from the queue by directly calling `sqs:DeleteMessageBatch`
+    - **2)** If a message with a [message group ID](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagegroupid-property.html) fails,
+      the processing of the batch will be stopped and the remainder of the messages will be returned to SQS.
+      This behaviour [is required to handle SQS FIFO queues](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting).
+    - **3)** If non-retryable exceptions occur, messages resulting in configured exceptions during processing will be immediately moved to the dead letter queue associated with the source SQS queue, or deleted from the source SQS queue if `deleteNonRetryableMessageFromQueue` is set to `true`.
+    - **4)** Raise `SQSBatchProcessingException` to ensure failed messages return to your SQS queue
+
+The only difference is that the **SqsUtils Utility API** will give you access to the return values of the processed messages if you need them. The `SQSBatchProcessingException` thrown from the
+utility will have access to both successful and failed messages along with the failure exceptions.
+
+## Functional Interface SqsMessageHandler
+
+Both the [annotation](#sqsbatch-annotation) and the [SqsUtils Utility API](#sqsutils-utility-api) require an implementation of the functional interface `SqsMessageHandler`.
+
+This implementation is responsible for processing each individual message from the batch, and must raise an exception if it is unable to process any of the messages sent.
+
+**Any non-exception/successful return from your record handler function** will instruct the utility to queue up each individual message for deletion.
+
+### SqsBatch annotation
+
+When using this annotation, you need to provide a class implementation of `SqsMessageHandler` that will process individual messages from the batch - it should raise an exception if it is unable to process the record.
+
+All records in the batch will be passed to this handler for processing, even if exceptions are thrown - here's the behaviour after completing the batch:
+
+* **Any successfully processed messages**, we will delete them from the queue via `sqs:DeleteMessageBatch`.
+* **If the nonRetryableExceptions attribute is used**, messages resulting in configured exceptions during processing will be immediately moved to the dead letter queue associated with the source SQS queue, or deleted from the source SQS queue if `deleteNonRetryableMessageFromQueue` is set to `true`.
+* **Any unprocessed messages detected**, we will raise `SQSBatchProcessingException` to ensure failed messages return to your SQS queue.
+
+!!! warning
+    You will not have access to the **processed messages** within the Lambda Handler - all processing logic should be performed by the implemented `#!java SqsMessageHandler#process()` function.
+
+=== "AppSqsEvent.java"
+
+    ```java hl_lines="7"
+    import software.amazon.lambda.powertools.sqs.SqsBatch;
+    import software.amazon.lambda.powertools.sqs.SqsMessageHandler;
+    import software.amazon.lambda.powertools.sqs.SqsUtils;
+
+    public class AppSqsEvent implements RequestHandler<SQSEvent, String> {
+        @Override
+        @SqsBatch(SampleMessageHandler.class)
+        public String handleRequest(SQSEvent input, Context context) {
+            return "{\"statusCode\": 200}";
+        }
+
+        public class SampleMessageHandler implements SqsMessageHandler<String> {
+
+            @Override
+            public String process(SQSMessage message) {
+                // This will be called for each individual message from a batch
+                // It should raise an exception if the message was not processed successfully
+                String returnVal = doSomething(message.getBody());
+                return returnVal;
+            }
+        }
+    }
+    ```
+
+=== "AppSqsEventWithNonRetryableExceptions.java"
+
+    ```java hl_lines="7 21"
+    import software.amazon.lambda.powertools.sqs.SqsBatch;
+    import software.amazon.lambda.powertools.sqs.SqsMessageHandler;
+    import software.amazon.lambda.powertools.sqs.SqsUtils;
+
+    public class AppSqsEvent implements RequestHandler<SQSEvent, String> {
+        @Override
+        @SqsBatch(value = SampleMessageHandler.class, nonRetryableExceptions = {IllegalArgumentException.class})
+        public String handleRequest(SQSEvent input, Context context) {
+            return "{\"statusCode\": 200}";
+        }
+
+        public class SampleMessageHandler implements SqsMessageHandler<String> {
+
+            @Override
+            public String process(SQSMessage message) {
+                // This will be called for each individual message from a batch
+                // It should raise an exception if the message was not processed successfully
+                String returnVal = doSomething(message.getBody());
+
+                if(/**Business validation failure**/) {
+                    throw new IllegalArgumentException("Failed business validation. No point of retrying. Move me to DLQ."
+ message.getMessageId());
+                }
+
+                return returnVal;
+            }
+        }
+    }
+    ```
+
+
+### SqsUtils Utility API
+
+If you require access to the result of processed messages, you can use this utility. The result from calling **`#!java SqsUtils#batchProcessor()`** on the context manager will be a list of all the return values
+from your **`#!java SqsMessageHandler#process()`** function.
+
+You can also use the utility in a functional way by providing an inline implementation of the functional interface **`#!java SqsMessageHandler#process()`**.
+
+
+=== "Utility API"
+
+    ```java hl_lines="4"
+    public class AppSqsEvent implements RequestHandler<SQSEvent, List<String>> {
+        @Override
+        public List<String> handleRequest(SQSEvent input, Context context) {
+            List<String> returnValues = SqsUtils.batchProcessor(input, SampleMessageHandler.class);
+
+            return returnValues;
+        }
+
+        public class SampleMessageHandler implements SqsMessageHandler<String> {
+
+            @Override
+            public String process(SQSMessage message) {
+                // This will be called for each individual message from a batch
+                // It should raise an exception if the message was not processed successfully
+                String returnVal = doSomething(message.getBody());
+                return returnVal;
+            }
+        }
+    }
+    ```
+
+=== "Function implementation"
+
+    ```java hl_lines="5 6 7 8 9 10"
+    public class AppSqsEvent implements RequestHandler<SQSEvent, List<String>> {
+
+        @Override
+        public List<String> handleRequest(SQSEvent input, Context context) {
+            List<String> returnValues = SqsUtils.batchProcessor(input, (message) -> {
+                // This will be called for each individual message from a batch
+                // It should raise an exception if the message was not processed successfully
+                String returnVal = doSomething(message.getBody());
+                return returnVal;
+            });
+
+            return returnValues;
+        }
+    }
+    ```
+
+## Passing custom SqsClient
+
+If you need to customize the `SqsClient` used by the utility, for example to configure a specific region, you can pass your own `SqsClient`, either for the
+**[SqsBatch annotation](#sqsbatch-annotation)**, or the **[SqsUtils Utility API](#sqsutils-utility-api)**.
+
+=== "App.java"
+
+    ```java hl_lines="3 4"
+    public class AppSqsEvent implements RequestHandler<SQSEvent, List<String>> {
+        static {
+            SqsUtils.overrideSqsClient(SqsClient.builder()
+                    .build());
+        }
+
+        @Override
+        public List<String> handleRequest(SQSEvent input, Context context) {
+            List<String> returnValues = SqsUtils.batchProcessor(input, SampleMessageHandler.class);
+
+            return returnValues;
+        }
+
+        public class SampleMessageHandler implements SqsMessageHandler<String> {
+
+            @Override
+            public String process(SQSMessage message) {
+                // This will be called for each individual message from a batch
+                // It should raise an exception if the message was not processed successfully
+                String returnVal = doSomething(message.getBody());
+                return returnVal;
+            }
+        }
+    }
+    ```
+
+## Suppressing exceptions
+
+If you want to disable the default behavior, where `SQSBatchProcessingException` is raised if there are any exceptions, you can pass the `suppressException` boolean argument.
+
+=== "Within SqsBatch annotation"
+
+    ```java hl_lines="2"
+    @Override
+    @SqsBatch(value = SampleMessageHandler.class, suppressException = true)
+    public String handleRequest(SQSEvent input, Context context) {
+        return "{\"statusCode\": 200}";
+    }
+    ```
+
+=== "Within SqsUtils Utility API"
+
+    ```java hl_lines="3"
+    @Override
+    public List<String> handleRequest(SQSEvent input, Context context) {
+        List<String> returnValues = SqsUtils.batchProcessor(input, true, SampleMessageHandler.class);
+
+        return returnValues;
+    }
+    ```
+
+## Move non-retryable messages to a dead letter queue
+
+If you want certain exceptions to be treated as permanent failures during batch processing, i.e. exceptions where the result of retrying will
+always be a failure, and you want these messages to be immediately moved to the dead letter queue associated with the source SQS queue, you can use `SqsBatch#nonRetryableExceptions()`
+to configure such exceptions.
+
+If you want such messages to be deleted instead, set `SqsBatch#deleteNonRetryableMessageFromQueue()` to `true`.
By default, its value is `false`.
+
+The same capability is also provided by the [SqsUtils Utility API](#sqsutils-utility-api).
+
+!!! info
+    Make sure the Lambda function has the permissions required by the utility. Refer to [this section](#iam-permissions).
+
+=== "SqsBatch annotation"
+
+    ```java hl_lines="7 21"
+    import software.amazon.lambda.powertools.sqs.SqsBatch;
+    import software.amazon.lambda.powertools.sqs.SqsMessageHandler;
+    import software.amazon.lambda.powertools.sqs.SqsUtils;
+
+    public class AppSqsEvent implements RequestHandler<SQSEvent, String> {
+        @Override
+        @SqsBatch(value = SampleMessageHandler.class, nonRetryableExceptions = {IllegalArgumentException.class})
+        public String handleRequest(SQSEvent input, Context context) {
+            return "{\"statusCode\": 200}";
+        }
+
+        public class SampleMessageHandler implements SqsMessageHandler<String> {
+
+            @Override
+            public String process(SQSMessage message) {
+                // This will be called for each individual message from a batch
+                // It should raise an exception if the message was not processed successfully
+                String returnVal = doSomething(message.getBody());
+
+                if(/**Business validation failure**/) {
+                    throw new IllegalArgumentException("Failed business validation. No point of retrying. Move me to DLQ."
+ message.getMessageId());
+                }
+
+                return returnVal;
+            }
+        }
+    }
+    ```
+
+=== "SqsBatch API"
+
+    ```java hl_lines="9 23"
+    import software.amazon.lambda.powertools.sqs.SqsBatch;
+    import software.amazon.lambda.powertools.sqs.SqsMessageHandler;
+    import software.amazon.lambda.powertools.sqs.SqsUtils;
+
+    public class AppSqsEvent implements RequestHandler<SQSEvent, String> {
+        @Override
+        public String handleRequest(SQSEvent input, Context context) {
+
+            SqsUtils.batchProcessor(input, SampleMessageHandler.class, IllegalArgumentException.class);
+
+            return "{\"statusCode\": 200}";
+        }
+
+        public class SampleMessageHandler implements SqsMessageHandler<String> {
+
+            @Override
+            public String process(SQSMessage message) {
+                // This will be called for each individual message from a batch
+                // It should raise an exception if the message was not processed successfully
+                String returnVal = doSomething(message.getBody());
+
+                if(/**Business validation failure**/) {
+                    throw new IllegalArgumentException("Failed business validation. No point of retrying. Move me to DLQ." + message.getMessageId());
+                }
+
+                return returnVal;
+            }
+        }
+    }
+    ```
+
+## Migrating to the Batch Library
+
+The [batch processing library](batch.md) provides a way to process messages and gracefully handle partial failures for
+SQS, Kinesis Streams, and DynamoDB Streams batch sources. In comparison to the legacy SQS batch library, it relies on
+[Lambda partial batch responses](https://aws.amazon.com/about-aws/whats-new/2021/11/aws-lambda-partial-batch-response-sqs-event-source/),
+which allows the library to provide a simpler, reliable interface for processing batches.
+
+In order to get started, check out the [processing messages from SQS](batch/#processing-messages-from-sqs) documentation.
+In most cases, you will simply be able to retain your existing batch message handler function and wrap it with the new
+batch processing interface.
Unlike this module, the batch processor uses *partial batch responses* to communicate to
+Lambda which messages have been processed and must be removed from the queue, so the return value of the handler's process function
+must be returned to Lambda.
+
+The new library also no longer requires the `SQS:DeleteMessage` action on the Lambda function's role policy, as Lambda
+itself now manages removal of messages from the queue.
+
+!!! info
+    Some tunables from this library are no longer provided.
+
+    * **Non-retryable Exceptions** - there is no mechanism to indicate in a partial batch response that a particular message
+      should not be retried and should instead be moved to the DLQ - a message either succeeds, or fails and is retried. A message
+      will be moved to the DLQ once the normal retry process has expired.
+    * **Suppress Exception** - The new batch processor does not throw an exception on failure of a handler. Instead,
+      its result must be returned by your code from your message handler to Lambda, so that Lambda can manage
+      the completed messages and retry behaviour.
\ No newline at end of file
diff --git a/docs/utilities/sqs_large_message_handling.md b/docs/utilities/sqs_large_message_handling.md
index 6308f1c79..0924d01cf 100644
--- a/docs/utilities/sqs_large_message_handling.md
+++ b/docs/utilities/sqs_large_message_handling.md
@@ -1,11 +1,13 @@
 ---
-title: SQS Large Message Handling
+title: SQS Large Message Handling (Deprecated)
 description: Utility
 ---
 
 !!! warning
-This module is now deprecated and will be removed in version 2.
-See [Large Message Handling](large_messages.md) for the new module (`powertools-large-messages`) documentation.
+    This module is now deprecated and will be removed in version 2.
+    See [Large Message Handling](large_messages.md) and
+    [the migration guide](large_messages.md#migration-from-the-sqs-large-message-utility)
+    for the new module (`powertools-large-messages`) documentation.
 
 The large message handling utility handles SQS messages which have had their payloads offloaded to S3 due to them being larger than the SQS maximum.
diff --git a/examples/README.md b/examples/README.md
index b44ff2433..9b76faa82 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -9,9 +9,10 @@ Each example can be copied from its subdirectory and used independently of the r
 * [powertools-examples-idempotency](powertools-examples-idempotency) - An idempotent HTTP API
 * [powertools-examples-parameters](powertools-examples-parameters) - Uses the parameters module to provide runtime parameters to a function
 * [powertools-examples-serialization](powertools-examples-serialization) - Uses the serialization module to serialize and deserialize API Gateway & SQS payloads
-* [powertools-examples-sqs](powertools-examples-sqs) - Processes SQS batch requests
+* [powertools-examples-sqs](powertools-examples-sqs) - Processes SQS batch requests (**Deprecated** - will be replaced by `powertools-examples-batch` in version 2 of this library)
 * [powertools-examples-validation](powertools-examples-validation) - Uses the validation module to validate user requests received via API Gateway
 * [powertools-examples-cloudformation](powertools-examples-cloudformation) - Deploys a Cloudformation custom resource
+* [powertools-examples-batch](powertools-examples-batch) - Examples for each of the different batch processing deployments
 
 ## Working with AWS Serverless Application Model (SAM) Examples
 Many of the examples use [AWS Serverless Application Model](https://aws.amazon.com/serverless/sam/) (SAM).
To get started diff --git a/examples/pom.xml b/examples/pom.xml index 72f1dc03b..5d19a20fb 100644 --- a/examples/pom.xml +++ b/examples/pom.xml @@ -34,6 +34,7 @@ powertools-examples-parameters powertools-examples-serialization powertools-examples-sqs + powertools-examples-batch powertools-examples-validation powertools-examples-cloudformation diff --git a/examples/powertools-examples-batch/README.md b/examples/powertools-examples-batch/README.md new file mode 100644 index 000000000..d65fb584a --- /dev/null +++ b/examples/powertools-examples-batch/README.md @@ -0,0 +1,35 @@ + # Powertools for AWS Lambda (Java) - Batch Example + +This project contains examples of Lambda function using the batch processing module of Powertools for AWS Lambda (Java). +For more information on this module, please refer to the +[documentation](https://docs.powertools.aws.dev/lambda-java/utilities/batch/). + +Three different examples and SAM deployments are included, covering each of the batch sources: + +* [SQS](src/main/java/org/demo/batch/sqs) - SQS batch processing +* [Kinesis Streams](src/main/java/org/demo/batch/kinesis) - Kinesis Streams batch processing +* [DynamoDB Streams](src/main/java/org/demo/batch/dynamo) - DynamoDB Streams batch processing + +## Deploy the sample application + +This sample is based on Serverless Application Model (SAM). To deploy it, check out the instructions for getting +started with SAM in [the examples directory](../README.md) + +This sample contains three different deployments, depending on which batch processor you'd like to use, you can +change to the subdirectory containing the example SAM template, and deploy. For instance, for the SQS batch +deployment: + +```bash +cd deploy/sqs +sam build +sam deploy --guided +``` + +## Test the application + +Each of the examples uses a Lambda scheduled every 5 minutes to push a batch, and a separate lambda to read it. 
To +see this in action, we can simply tail the logs of our stack: + +```bash +sam logs --tail $STACK_NAME +``` \ No newline at end of file diff --git a/examples/powertools-examples-batch/deploy/ddb-streams/template.yaml b/examples/powertools-examples-batch/deploy/ddb-streams/template.yaml new file mode 100644 index 000000000..91f8799c4 --- /dev/null +++ b/examples/powertools-examples-batch/deploy/ddb-streams/template.yaml @@ -0,0 +1,74 @@ +AWSTemplateFormatVersion: '2010-09-09' +Transform: AWS::Serverless-2016-10-31 +Description: > + DynamoDB Streams batch processing demo + +Globals: + Function: + Timeout: 20 + Runtime: java11 + MemorySize: 512 + Tracing: Active + Architectures: + - x86_64 + Environment: + Variables: + POWERTOOLS_LOG_LEVEL: INFO + POWERTOOLS_LOGGER_SAMPLE_RATE: 1.0 + POWERTOOLS_LOGGER_LOG_EVENT: true + +Resources: + DynamoDBTable: + Type: AWS::DynamoDB::Table + Properties: + AttributeDefinitions: + - AttributeName: id + AttributeType: S + KeySchema: + - AttributeName: id + KeyType: HASH + ProvisionedThroughput: + ReadCapacityUnits: 5 + WriteCapacityUnits: 5 + StreamSpecification: + StreamViewType: NEW_IMAGE + + + DemoDynamoDBWriter: + Type: AWS::Serverless::Function + Properties: + CodeUri: ../.. + Handler: org.demo.batch.dynamo.DynamoDBWriter::handleRequest + Environment: + Variables: + POWERTOOLS_SERVICE_NAME: ddbstreams-demo + TABLE_NAME: !Ref DynamoDBTable + Policies: + - DynamoDBCrudPolicy: + TableName: !Ref DynamoDBTable + Events: + CWSchedule: + Type: Schedule + Properties: + Schedule: 'rate(1 minute)' + Name: !Join [ "-", [ "ddb-writer-schedule", !Select [ 0, !Split [ -, !Select [ 2, !Split [ /, !Ref AWS::StackId ] ] ] ] ] ] + Description: Write records to DynamoDB via a Lambda function + Enabled: true + + DemoDynamoDBStreamsConsumerFunction: + Type: AWS::Serverless::Function + Properties: + CodeUri: ../.. 
+ Handler: org.demo.batch.dynamo.DynamoDBStreamBatchHandler::handleRequest + Environment: + Variables: + POWERTOOLS_SERVICE_NAME: ddbstreams-batch-demo + Policies: AWSLambdaDynamoDBExecutionRole + Events: + Stream: + Type: DynamoDB + Properties: + Stream: !GetAtt DynamoDBTable.StreamArn + BatchSize: 100 + StartingPosition: TRIM_HORIZON + diff --git a/examples/powertools-examples-batch/deploy/kinesis/template.yml b/examples/powertools-examples-batch/deploy/kinesis/template.yml new file mode 100644 index 000000000..dcece61b8 --- /dev/null +++ b/examples/powertools-examples-batch/deploy/kinesis/template.yml @@ -0,0 +1,83 @@ +AWSTemplateFormatVersion: '2010-09-09' +Transform: AWS::Serverless-2016-10-31 +Description: > + Kinesis batch processing demo + +Globals: + Function: + Timeout: 20 + Runtime: java11 + MemorySize: 512 + Tracing: Active + Environment: + Variables: + POWERTOOLS_LOG_LEVEL: INFO + POWERTOOLS_LOGGER_SAMPLE_RATE: 1.0 + POWERTOOLS_LOGGER_LOG_EVENT: true + +Resources: + + DemoKinesisStream: + Type: AWS::Kinesis::Stream + Properties: + ShardCount: 1 + + StreamConsumer: + Type: "AWS::Kinesis::StreamConsumer" + Properties: + StreamARN: !GetAtt DemoKinesisStream.Arn + ConsumerName: KinesisBatchHandlerConsumer + + DemoKinesisSenderFunction: + Type: AWS::Serverless::Function + Properties: + CodeUri: ../.. 
+ Handler: org.demo.batch.kinesis.KinesisBatchSender::handleRequest + Environment: + Variables: + POWERTOOLS_SERVICE_NAME: kinesis-batch-demo + STREAM_NAME: !Ref DemoKinesisStream + Policies: + - Statement: + - Sid: WriteToKinesis + Effect: Allow + Action: + - kinesis:PutRecords + - kinesis:DescribeStream + Resource: !GetAtt DemoKinesisStream.Arn + Events: + CWSchedule: + Type: Schedule + Properties: + Schedule: 'rate(5 minutes)' + Name: !Join [ "-", [ "message-producer-schedule", !Select [ 0, !Split [ -, !Select [ 2, !Split [ /, !Ref AWS::StackId ] ] ] ] ] ] + Description: Produce message to Kinesis via a Lambda function + Enabled: true + + DemoKinesisConsumerFunction: + Type: AWS::Serverless::Function + Properties: + CodeUri: ../.. + Handler: org.demo.batch.kinesis.KinesisBatchHandler::handleRequest + Environment: + Variables: + POWERTOOLS_SERVICE_NAME: kinesis-demo + Events: + Kinesis: + Type: Kinesis + Properties: + Stream: !GetAtt StreamConsumer.ConsumerARN + StartingPosition: LATEST + BatchSize: 2 + +Outputs: + DemoKinesisStream: + Description: "ARN for Kinesis Stream" + Value: !GetAtt DemoKinesisStream.Arn + DemoKinesisSenderFunction: + Description: "Kinesis Batch Sender - Lambda Function ARN" + Value: !GetAtt DemoKinesisSenderFunction.Arn + DemoKinesisConsumerFunction: + Description: "Kinesis Batch Handler - Lambda Function ARN" + Value: !GetAtt DemoKinesisConsumerFunction.Arn + diff --git a/examples/powertools-examples-batch/deploy/sqs/template.yml b/examples/powertools-examples-batch/deploy/sqs/template.yml new file mode 100644 index 000000000..764ba4863 --- /dev/null +++ b/examples/powertools-examples-batch/deploy/sqs/template.yml @@ -0,0 +1,147 @@ +AWSTemplateFormatVersion: '2010-09-09' +Transform: AWS::Serverless-2016-10-31 +Description: > + SQS batch processing demo + +Globals: + Function: + Timeout: 20 + Runtime: java11 + MemorySize: 512 + Tracing: Active + Environment: + Variables: + POWERTOOLS_LOG_LEVEL: INFO + POWERTOOLS_LOGGER_SAMPLE_RATE: 1.0 + 
POWERTOOLS_LOGGER_LOG_EVENT: true + +Resources: + CustomerKey: + Type: AWS::KMS::Key + Properties: + Description: KMS key for encrypted queues + Enabled: true + KeyPolicy: + Version: '2012-10-17' + Statement: + - Sid: Enable IAM User Permissions + Effect: Allow + Principal: + AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root' + Action: 'kms:*' + Resource: '*' + - Sid: Allow use of the key + Effect: Allow + Principal: + Service: lambda.amazonaws.com + Action: + - kms:Decrypt + - kms:GenerateDataKey + Resource: '*' + + CustomerKeyAlias: + Type: AWS::KMS::Alias + Properties: + AliasName: alias/powertools-batch-sqs-demo + TargetKeyId: !Ref CustomerKey + + DemoDlqSqsQueue: + Type: AWS::SQS::Queue + Properties: + KmsMasterKeyId: !Ref CustomerKey + + DemoSqsQueue: + Type: AWS::SQS::Queue + Properties: + RedrivePolicy: + deadLetterTargetArn: + Fn::GetAtt: + - "DemoDlqSqsQueue" + - "Arn" + maxReceiveCount: 2 + KmsMasterKeyId: !Ref CustomerKey + + DemoSQSSenderFunction: + Type: AWS::Serverless::Function + Properties: + CodeUri: ../.. + Handler: org.demo.batch.sqs.SqsBatchSender::handleRequest + Environment: + Variables: + POWERTOOLS_SERVICE_NAME: sqs-batch-demo + QUEUE_URL: !Ref DemoSqsQueue + Policies: + - Statement: + - Sid: SQSSendMessageBatch + Effect: Allow + Action: + - sqs:SendMessageBatch + - sqs:SendMessage + Resource: !GetAtt DemoSqsQueue.Arn + - Sid: SQSKMSKey + Effect: Allow + Action: + - kms:GenerateDataKey + - kms:Decrypt + Resource: !GetAtt CustomerKey.Arn + Events: + CWSchedule: + Type: Schedule + Properties: + Schedule: 'rate(5 minutes)' + Name: !Join [ "-", [ "message-producer-schedule", !Select [ 0, !Split [ -, !Select [ 2, !Split [ /, !Ref AWS::StackId ] ] ] ] ] ] + Description: Produce message to SQS via a Lambda function + Enabled: true + + DemoSQSConsumerFunction: + Type: AWS::Serverless::Function + Properties: + CodeUri: ../.. 
+ Handler: org.demo.batch.sqs.SqsBatchHandler::handleRequest + Environment: + Variables: + POWERTOOLS_SERVICE_NAME: sqs-demo + Policies: + - Statement: + - Sid: SQSDeleteGetAttribute + Effect: Allow + Action: + - sqs:DeleteMessageBatch + - sqs:GetQueueAttributes + Resource: !GetAtt DemoSqsQueue.Arn + - Sid: SQSSendMessageBatch + Effect: Allow + Action: + - sqs:SendMessageBatch + - sqs:SendMessage + Resource: !GetAtt DemoDlqSqsQueue.Arn + - Sid: SQSKMSKey + Effect: Allow + Action: + - kms:GenerateDataKey + - kms:Decrypt + Resource: !GetAtt CustomerKey.Arn + Events: + MySQSEvent: + Type: SQS + Properties: + Queue: !GetAtt DemoSqsQueue.Arn + BatchSize: 2 + MaximumBatchingWindowInSeconds: 300 + +Outputs: + DemoSqsQueue: + Description: "ARN for main SQS queue" + Value: !GetAtt DemoSqsQueue.Arn + DemoDlqSqsQueue: + Description: "ARN for DLQ" + Value: !GetAtt DemoDlqSqsQueue.Arn + DemoSQSSenderFunction: + Description: "SQS Batch Sender - Lambda Function ARN" + Value: !GetAtt DemoSQSSenderFunction.Arn + DemoSQSConsumerFunction: + Description: "SQS Batch Handler - Lambda Function ARN" + Value: !GetAtt DemoSQSConsumerFunction.Arn + DemoSQSConsumerFunctionRole: + Description: "Implicit IAM Role created for SQS Lambda Function ARN" + Value: !GetAtt DemoSQSConsumerFunctionRole.Arn diff --git a/examples/powertools-examples-batch/pom.xml b/examples/powertools-examples-batch/pom.xml new file mode 100644 index 000000000..d3c4bc49b --- /dev/null +++ b/examples/powertools-examples-batch/pom.xml @@ -0,0 +1,198 @@ + + + 4.0.0 + + software.amazon.lambda.examples + 1.17.0-SNAPSHOT + powertools-examples-batch + jar + Powertools for AWS Lambda (Java) library Examples - Batch + + + 2.20.0 + 1.8 + 1.8 + true + 2.20.109 + + + + + software.amazon.lambda + powertools-tracing + ${project.version} + + + software.amazon.lambda + powertools-logging + ${project.version} + + + software.amazon.lambda + powertools-batch + ${project.version} + + + com.amazonaws + aws-lambda-java-core + 1.2.2 + + + 
software.amazon.awssdk + sdk-core + ${sdk.version} + + + software.amazon.awssdk + sqs + ${sdk.version} + + + software.amazon.awssdk + url-connection-client + ${sdk.version} + + + software.amazon.awssdk + dynamodb-enhanced + ${sdk.version} + + + software.amazon.awssdk + kinesis + ${sdk.version} + + + + + + + dev.aspectj + aspectj-maven-plugin + 1.13.1 + + ${maven.compiler.source} + ${maven.compiler.target} + ${maven.compiler.target} + + + software.amazon.lambda + powertools-tracing + + + software.amazon.lambda + powertools-logging + + + + + + + compile + + + + + + org.apache.maven.plugins + maven-shade-plugin + 3.5.0 + + + package + + shade + + + + + + + + + + + + com.github.edwgiz + maven-shade-plugin.log4j2-cachefile-transformer + 2.15 + + + + + + + + + jdk8 + + (,11) + + + 1.9.7 + + + + + org.aspectj + aspectjtools + ${aspectj.version} + + + + + + + + dev.aspectj + aspectj-maven-plugin + ${aspectj.plugin.version} + + ${maven.compiler.source} + ${maven.compiler.target} + ${maven.compiler.target} + + + software.amazon.lambda + powertools-tracing + + + software.amazon.lambda + powertools-logging + + + + + + + compile + test-compile + + + + + + + org.aspectj + aspectjtools + ${aspectj.version} + + + + + + + + + \ No newline at end of file diff --git a/examples/powertools-examples-batch/src/main/java/org/demo/batch/dynamo/DynamoDBStreamBatchHandler.java b/examples/powertools-examples-batch/src/main/java/org/demo/batch/dynamo/DynamoDBStreamBatchHandler.java new file mode 100644 index 000000000..988c49e86 --- /dev/null +++ b/examples/powertools-examples-batch/src/main/java/org/demo/batch/dynamo/DynamoDBStreamBatchHandler.java @@ -0,0 +1,32 @@ +package org.demo.batch.dynamo; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.RequestHandler; +import com.amazonaws.services.lambda.runtime.events.DynamodbEvent; +import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; +import 
org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; + +public class DynamoDBStreamBatchHandler implements RequestHandler<DynamodbEvent, StreamsEventResponse> { + + private static final Logger LOGGER = LogManager.getLogger(DynamoDBStreamBatchHandler.class); + private final BatchMessageHandler<DynamodbEvent, StreamsEventResponse> handler; + + public DynamoDBStreamBatchHandler() { + handler = new BatchMessageHandlerBuilder() + .withDynamoDbBatchHandler() + .buildWithRawMessageHandler(this::processMessage); + } + + @Override + public StreamsEventResponse handleRequest(DynamodbEvent ddbEvent, Context context) { + return handler.processBatch(ddbEvent, context); + } + + private void processMessage(DynamodbEvent.DynamodbStreamRecord dynamodbStreamRecord, Context context) { + LOGGER.info("Processing DynamoDB Stream Record: {}", dynamodbStreamRecord); + } + +} diff --git a/examples/powertools-examples-batch/src/main/java/org/demo/batch/dynamo/DynamoDBWriter.java b/examples/powertools-examples-batch/src/main/java/org/demo/batch/dynamo/DynamoDBWriter.java new file mode 100644 index 000000000..953ba8f23 --- /dev/null +++ b/examples/powertools-examples-batch/src/main/java/org/demo/batch/dynamo/DynamoDBWriter.java @@ -0,0 +1,108 @@ +package org.demo.batch.dynamo; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.RequestHandler; +import com.amazonaws.services.lambda.runtime.events.ScheduledEvent; +import java.security.SecureRandom; +import java.util.List; +import java.util.UUID; +import java.util.stream.Collectors; +import java.util.stream.IntStream; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.demo.batch.model.DdbProduct; +import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient; +import software.amazon.awssdk.enhanced.dynamodb.TableSchema; +import 
software.amazon.awssdk.enhanced.dynamodb.model.BatchWriteItemEnhancedRequest; +import software.amazon.awssdk.enhanced.dynamodb.model.BatchWriteResult; +import software.amazon.awssdk.enhanced.dynamodb.model.WriteBatch; +import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient; +import software.amazon.awssdk.services.dynamodb.DynamoDbClient; + +public class DynamoDBWriter implements RequestHandler<ScheduledEvent, String> { + + private static final Logger LOGGER = LogManager.getLogger(DynamoDBWriter.class); + + private final DynamoDbEnhancedClient enhancedClient; + + private final SecureRandom random; + + public DynamoDBWriter() { + random = new SecureRandom(); + DynamoDbClient dynamoDbClient = DynamoDbClient.builder() + .httpClientBuilder(UrlConnectionHttpClient.builder()) + .build(); + + enhancedClient = DynamoDbEnhancedClient.builder() + .dynamoDbClient(dynamoDbClient) + .build(); + } + + @Override + public String handleRequest(ScheduledEvent scheduledEvent, Context context) { + String tableName = System.getenv("TABLE_NAME"); + + LOGGER.info("handleRequest"); + + List<DdbProduct> products = createProducts(tableName); + List<DdbProduct> updatedProducts = updateProducts(tableName, products); + deleteProducts(tableName, updatedProducts); + + return "Success"; + } + + private void deleteProducts(String tableName, List<DdbProduct> updatedProducts) { + WriteBatch.Builder<DdbProduct> productDeleteBuilder = WriteBatch.builder(DdbProduct.class) + .mappedTableResource(enhancedClient.table(tableName, TableSchema.fromBean(DdbProduct.class))); + + updatedProducts.forEach(productDeleteBuilder::addDeleteItem); + + BatchWriteResult batchDeleteResult = enhancedClient + .batchWriteItem(BatchWriteItemEnhancedRequest.builder().writeBatches( + productDeleteBuilder.build()) + .build()); + LOGGER.info("Deleted batch of objects from DynamoDB: {}", batchDeleteResult); + } + + private List<DdbProduct> updateProducts(String tableName, List<DdbProduct> products) { + WriteBatch.Builder<DdbProduct> productUpdateBuilder = WriteBatch.builder(DdbProduct.class) + 
.mappedTableResource(enhancedClient.table(tableName, TableSchema.fromBean(DdbProduct.class))); + + List<DdbProduct> updatedProducts = products.stream().map(product -> { + // Update the price of the product and add it to the batch + LOGGER.info("Updating product: {}", product); + float price = random.nextFloat(); + DdbProduct updatedProduct = new DdbProduct(product.getId(), "updated-product-" + product.getId(), price); + productUpdateBuilder.addPutItem(updatedProduct); + return updatedProduct; + }).collect(Collectors.toList()); + + BatchWriteResult batchUpdateResult = enhancedClient + .batchWriteItem(BatchWriteItemEnhancedRequest.builder().writeBatches( + productUpdateBuilder.build()) + .build()); + LOGGER.info("Updated batch of objects to DynamoDB: {}", batchUpdateResult); + return updatedProducts; + } + + public List<DdbProduct> createProducts(String tableName) { + WriteBatch.Builder<DdbProduct> productBuilder = WriteBatch.builder(DdbProduct.class) + .mappedTableResource(enhancedClient.table(tableName, TableSchema.fromBean(DdbProduct.class))); + + List<DdbProduct> ddbProductStream = IntStream.range(0, 5).mapToObj(i -> { + String id = UUID.randomUUID().toString(); + float price = random.nextFloat(); + // Create a new product and add it to the batch + final DdbProduct product = new DdbProduct(id, "product-" + id, price); + productBuilder.addPutItem(product); + return product; + }).collect(Collectors.toList()); + + BatchWriteResult batchWriteResult = enhancedClient + .batchWriteItem(BatchWriteItemEnhancedRequest.builder().writeBatches( + productBuilder.build()) + .build()); + LOGGER.info("Wrote batch of objects to DynamoDB: {}", batchWriteResult); + return ddbProductStream; + } +} diff --git a/examples/powertools-examples-batch/src/main/java/org/demo/batch/kinesis/KinesisBatchHandler.java b/examples/powertools-examples-batch/src/main/java/org/demo/batch/kinesis/KinesisBatchHandler.java new file mode 100644 index 000000000..d9339549b --- /dev/null +++ 
b/examples/powertools-examples-batch/src/main/java/org/demo/batch/kinesis/KinesisBatchHandler.java @@ -0,0 +1,33 @@ +package org.demo.batch.kinesis; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.RequestHandler; +import com.amazonaws.services.lambda.runtime.events.KinesisEvent; +import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.demo.batch.model.Product; +import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; + +public class KinesisBatchHandler implements RequestHandler<KinesisEvent, StreamsEventResponse> { + + private static final Logger LOGGER = LogManager.getLogger(KinesisBatchHandler.class); + private final BatchMessageHandler<KinesisEvent, StreamsEventResponse> handler; + + public KinesisBatchHandler() { + handler = new BatchMessageHandlerBuilder() + .withKinesisBatchHandler() + .buildWithMessageHandler(this::processMessage, Product.class); + } + + @Override + public StreamsEventResponse handleRequest(KinesisEvent kinesisEvent, Context context) { + return handler.processBatch(kinesisEvent, context); + } + + private void processMessage(Product p, Context c) { + LOGGER.info("Processing product " + p); + } + +} diff --git a/examples/powertools-examples-batch/src/main/java/org/demo/batch/kinesis/KinesisBatchSender.java b/examples/powertools-examples-batch/src/main/java/org/demo/batch/kinesis/KinesisBatchSender.java new file mode 100644 index 000000000..0bc7dc42c --- /dev/null +++ b/examples/powertools-examples-batch/src/main/java/org/demo/batch/kinesis/KinesisBatchSender.java @@ -0,0 +1,78 @@ +package org.demo.batch.kinesis; + +import static java.util.stream.Collectors.toList; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.RequestHandler; +import 
com.amazonaws.services.lambda.runtime.events.ScheduledEvent; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import java.security.SecureRandom; +import java.util.List; +import java.util.stream.IntStream; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.demo.batch.model.Product; +import software.amazon.awssdk.core.SdkBytes; +import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient; +import software.amazon.awssdk.services.kinesis.KinesisClient; +import software.amazon.awssdk.services.kinesis.model.PutRecordsRequest; +import software.amazon.awssdk.services.kinesis.model.PutRecordsRequestEntry; +import software.amazon.awssdk.services.kinesis.model.PutRecordsResponse; + + +/** + * A Lambda handler used to send message batches to Kinesis Streams. This is only here + * to produce an end-to-end demo, so that the {@link org.demo.batch.kinesis.KinesisBatchHandler} + * has some data to consume. + */ +public class KinesisBatchSender implements RequestHandler<ScheduledEvent, String> { + + private static final Logger LOGGER = LogManager.getLogger(KinesisBatchSender.class); + + private final KinesisClient kinesisClient; + private final SecureRandom random; + private final ObjectMapper objectMapper; + + public KinesisBatchSender() { + kinesisClient = KinesisClient.builder() + .httpClient(UrlConnectionHttpClient.create()) + .build(); + random = new SecureRandom(); + objectMapper = new ObjectMapper(); + } + + @Override + public String handleRequest(ScheduledEvent scheduledEvent, Context context) { + String streamName = System.getenv("STREAM_NAME"); + + LOGGER.info("handleRequest"); + + // Push 5 messages on each invoke. 
+ List<PutRecordsRequestEntry> records = IntStream.range(0, 5) + .mapToObj(value -> { + long id = random.nextLong(); + float price = random.nextFloat(); + Product product = new Product(id, "product-" + id, price); + try { + SdkBytes data = SdkBytes.fromUtf8String(objectMapper.writeValueAsString(product)); + return PutRecordsRequestEntry.builder() + .partitionKey(String.format("%d", id)) + .data(data) + .build(); + } catch (JsonProcessingException e) { + LOGGER.error("Failed serializing body", e); + throw new RuntimeException(e); + } + }).collect(toList()); + + PutRecordsResponse putRecordsResponse = kinesisClient.putRecords(PutRecordsRequest.builder() + .streamName(streamName) + .records(records) + .build()); + + LOGGER.info("Sent Message {}", putRecordsResponse); + + return "Success"; + } +} diff --git a/examples/powertools-examples-batch/src/main/java/org/demo/batch/model/DdbProduct.java b/examples/powertools-examples-batch/src/main/java/org/demo/batch/model/DdbProduct.java new file mode 100644 index 000000000..9d69eac5d --- /dev/null +++ b/examples/powertools-examples-batch/src/main/java/org/demo/batch/model/DdbProduct.java @@ -0,0 +1,88 @@ +/* + * Copyright 2022 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package org.demo.batch.model; + +import java.util.Objects; +import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean; +import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey; + +@DynamoDbBean +public class DdbProduct { + private String id; + + private String name; + + private double price; + + public DdbProduct() { + } + + public DdbProduct(String id, String name, double price) { + this.id = id; + this.name = name; + this.price = price; + } + + @DynamoDbPartitionKey + public String getId() { + return id; + } + + public void setId(String id) { + this.id = id; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public double getPrice() { + return price; + } + + public void setPrice(double price) { + this.price = price; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + DdbProduct that = (DdbProduct) o; + return Double.compare(that.price, price) == 0 && Objects.equals(id, that.id) && Objects.equals(name, that.name); + } + + @Override + public int hashCode() { + return Objects.hash(id, name, price); + } + + @Override + public String toString() { + return "Product{" + + "id=" + id + + ", name='" + name + '\'' + + ", price=" + price + + '}'; + } +} diff --git a/examples/powertools-examples-batch/src/main/java/org/demo/batch/model/Product.java b/examples/powertools-examples-batch/src/main/java/org/demo/batch/model/Product.java new file mode 100644 index 000000000..64da1804e --- /dev/null +++ b/examples/powertools-examples-batch/src/main/java/org/demo/batch/model/Product.java @@ -0,0 +1,84 @@ +/* + * Copyright 2022 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package org.demo.batch.model; + +import java.util.Objects; + +public class Product { + private long id; + + private String name; + + private double price; + + public Product() { + } + + public Product(long id, String name, double price) { + this.id = id; + this.name = name; + this.price = price; + } + + public long getId() { + return id; + } + + public void setId(long id) { + this.id = id; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public double getPrice() { + return price; + } + + public void setPrice(double price) { + this.price = price; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + Product product = (Product) o; + return id == product.id && Double.compare(product.price, price) == 0 && Objects.equals(name, product.name); + } + + @Override + public int hashCode() { + return Objects.hash(id, name, price); + } + + @Override + public String toString() { + return "Product{" + + "id=" + id + + ", name='" + name + '\'' + + ", price=" + price + + '}'; + } +} diff --git a/examples/powertools-examples-batch/src/main/java/org/demo/batch/sqs/SqsBatchHandler.java b/examples/powertools-examples-batch/src/main/java/org/demo/batch/sqs/SqsBatchHandler.java new file mode 100644 index 000000000..bb9d704d3 --- /dev/null +++ b/examples/powertools-examples-batch/src/main/java/org/demo/batch/sqs/SqsBatchHandler.java @@ -0,0 +1,33 @@ +package org.demo.batch.sqs; + +import 
com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.RequestHandler; +import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse; +import com.amazonaws.services.lambda.runtime.events.SQSEvent; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.demo.batch.model.Product; +import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; + +public class SqsBatchHandler implements RequestHandler<SQSEvent, SQSBatchResponse> { + private static final Logger LOGGER = LogManager.getLogger(SqsBatchHandler.class); + private final BatchMessageHandler<SQSEvent, SQSBatchResponse> handler; + + public SqsBatchHandler() { + handler = new BatchMessageHandlerBuilder() + .withSqsBatchHandler() + .buildWithMessageHandler(this::processMessage, Product.class); + } + + @Override + public SQSBatchResponse handleRequest(SQSEvent sqsEvent, Context context) { + return handler.processBatch(sqsEvent, context); + } + + private void processMessage(Product p, Context c) { + LOGGER.info("Processing product " + p); + } + +} diff --git a/examples/powertools-examples-batch/src/main/java/org/demo/batch/sqs/SqsBatchSender.java b/examples/powertools-examples-batch/src/main/java/org/demo/batch/sqs/SqsBatchSender.java new file mode 100644 index 000000000..af78bed5a --- /dev/null +++ b/examples/powertools-examples-batch/src/main/java/org/demo/batch/sqs/SqsBatchSender.java @@ -0,0 +1,77 @@ +package org.demo.batch.sqs; + +import static java.util.stream.Collectors.toList; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.RequestHandler; +import com.amazonaws.services.lambda.runtime.events.ScheduledEvent; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import java.security.SecureRandom; +import java.util.List; +import java.util.stream.IntStream; +import 
org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.demo.batch.model.Product; +import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient; +import software.amazon.awssdk.services.sqs.SqsClient; +import software.amazon.awssdk.services.sqs.model.SendMessageBatchRequest; +import software.amazon.awssdk.services.sqs.model.SendMessageBatchRequestEntry; +import software.amazon.awssdk.services.sqs.model.SendMessageBatchResponse; + + +/** + * A Lambda handler used to send message batches to SQS. This is only here + * to produce an end-to-end demo, so that the {@link org.demo.batch.sqs.SqsBatchHandler} + * has some data to consume. + */ +public class SqsBatchSender implements RequestHandler<ScheduledEvent, String> { + + private static final Logger LOGGER = LogManager.getLogger(SqsBatchSender.class); + + private final SqsClient sqsClient; + private final SecureRandom random; + private final ObjectMapper objectMapper; + + public SqsBatchSender() { + sqsClient = SqsClient.builder() + .httpClient(UrlConnectionHttpClient.create()) + .build(); + random = new SecureRandom(); + objectMapper = new ObjectMapper(); + } + + @Override + public String handleRequest(ScheduledEvent scheduledEvent, Context context) { + String queueUrl = System.getenv("QUEUE_URL"); + + LOGGER.info("handleRequest"); + + // Push 5 messages on each invoke. 
+ List<SendMessageBatchRequestEntry> batchRequestEntries = IntStream.range(0, 5) + .mapToObj(value -> { + long id = random.nextLong(); + float price = random.nextFloat(); + Product product = new Product(id, "product-" + id, price); + try { + + return SendMessageBatchRequestEntry.builder() + .id(scheduledEvent.getId() + value) + .messageBody(objectMapper.writeValueAsString(product)) + .build(); + } catch (JsonProcessingException e) { + LOGGER.error("Failed serializing body", e); + throw new RuntimeException(e); + } + }).collect(toList()); + + SendMessageBatchResponse sendMessageBatchResponse = sqsClient.sendMessageBatch(SendMessageBatchRequest.builder() + .queueUrl(queueUrl) + .entries(batchRequestEntries) + .build()); + + LOGGER.info("Sent Message {}", sendMessageBatchResponse); + + return "Success"; + } +} diff --git a/examples/powertools-examples-batch/src/main/resources/log4j2.xml b/examples/powertools-examples-batch/src/main/resources/log4j2.xml new file mode 100644 index 000000000..ea3ecf474 --- /dev/null +++ b/examples/powertools-examples-batch/src/main/resources/log4j2.xml @@ -0,0 +1,16 @@ + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index 62d8d75ce..d54ece508 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -18,6 +18,9 @@ nav: - utilities/validation.md - utilities/custom_resources.md - utilities/serialization.md + - Deprecated: + - utilities/sqs_large_message_handling.md + - utilities/sqs_batch.md - Processes: - processes/maintainers.md diff --git a/pom.xml b/pom.xml index 38e061dc7..8c9e540f5 100644 --- a/pom.xml +++ b/pom.xml @@ -54,6 +54,7 @@ powertools-idempotency powertools-large-messages powertools-e2e-tests + powertools-batch examples diff --git a/powertools-batch/pom.xml b/powertools-batch/pom.xml new file mode 100644 index 000000000..9e25dabd8 --- /dev/null +++ b/powertools-batch/pom.xml @@ -0,0 +1,68 @@ + + + 4.0.0 + + software.amazon.lambda + powertools-parent + 1.17.0-SNAPSHOT + + + + + + dev.aspectj + aspectj-maven-plugin + 
true + + + + + + powertools-batch + + + com.amazonaws + aws-lambda-java-events + + + com.amazonaws + aws-lambda-java-core + + + software.amazon.lambda + powertools-serialization + ${project.version} + + + + + org.junit.jupiter + junit-jupiter-api + test + + + org.assertj + assertj-core + test + + + com.amazonaws + aws-lambda-java-tests + test + + + org.mockito + mockito-core + test + + + org.mockito + mockito-inline + test + + + + \ No newline at end of file diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/BatchMessageHandlerBuilder.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/BatchMessageHandlerBuilder.java new file mode 100644 index 000000000..4ed44453b --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/BatchMessageHandlerBuilder.java @@ -0,0 +1,58 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch; + +import software.amazon.lambda.powertools.batch.builder.DynamoDbBatchMessageHandlerBuilder; +import software.amazon.lambda.powertools.batch.builder.KinesisBatchMessageHandlerBuilder; +import software.amazon.lambda.powertools.batch.builder.SqsBatchMessageHandlerBuilder; + +/** + * A builder-style interface we can use to build batch processing handlers for SQS, Kinesis Streams, + * and DynamoDB Streams batches. 
The batch processing handlers that are returned allow + * the user to easily process batches of messages, one-by-one, while offloading + * the common issues - failure handling, partial responses, deserialization - + * to the library. + * + * @see Powertools for AWS Lambda (Java) Batch Documentation + **/ +public class BatchMessageHandlerBuilder { + + /** + * Build an SQS-batch message handler. + * + * @return A fluent builder interface to continue the building + */ + public SqsBatchMessageHandlerBuilder withSqsBatchHandler() { + return new SqsBatchMessageHandlerBuilder(); + } + + /** + * Build a DynamoDB streams batch message handler. + * + * @return A fluent builder interface to continue the building + */ + public DynamoDbBatchMessageHandlerBuilder withDynamoDbBatchHandler() { + return new DynamoDbBatchMessageHandlerBuilder(); + } + + /** + * Builds a Kinesis streams batch message handler. + * + * @return a fluent builder interface to continue the building + */ + public KinesisBatchMessageHandlerBuilder withKinesisBatchHandler() { + return new KinesisBatchMessageHandlerBuilder(); + } +} diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/AbstractBatchMessageHandlerBuilder.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/AbstractBatchMessageHandlerBuilder.java new file mode 100644 index 000000000..9b0647770 --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/AbstractBatchMessageHandlerBuilder.java @@ -0,0 +1,142 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.builder; + +import com.amazonaws.services.lambda.runtime.Context; +import java.util.function.BiConsumer; +import java.util.function.Consumer; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; + +/** + * An abstract class to capture common arguments used across all the message-binding-specific batch processing + * builders. The builders provide a fluent interface to configure the batch processors. Any arguments specific + * to a particular batch binding can be added to the child builder. + *
<p>
+ * We capture types for the various messages involved, so that we can provide an interface that makes + * sense for the concrete child. + * + * @param <T> The type of a single message in the batch + * @param <C> The type of the child builder. We need this to provide a fluent interface - see also getThis() + * @param <E> The type of the Lambda batch event + * @param <R> The type of the batch response we return to Lambda + */ +abstract class AbstractBatchMessageHandlerBuilder<T, C extends AbstractBatchMessageHandlerBuilder<T, C, E, R>, E, R> { + protected BiConsumer<T, Throwable> failureHandler; + protected Consumer<T> successHandler; + + /** + * Provides an (Optional!) success handler. A success handler is invoked + * once for each message after it has been processed by the user-provided + * handler. + *
<p>
+ * If the success handler throws, the item in the batch will be + * marked failed. + * + * @param handler The handler to invoke + */ + public C withSuccessHandler(Consumer<T> handler) { + this.successHandler = handler; + return getThis(); + } + + /** + * Provides an (Optional!) failure handler. A failure handler is invoked + * once for each message after it has failed to be processed by the + * user-provided handler. This gives the user's code a useful hook to do + * anything else that might have to be done in response to a failure - for + * instance, updating a metric, or writing a detailed log. + *
<p>
+ * Please note that this method has nothing to do with the partial batch + * failure mechanism. Regardless of whether a failure handler is + * specified, partial batch failures and responses to the Lambda environment + * are handled by the batch utility separately. + * + * @param handler The handler to invoke on failure + */ + public C withFailureHandler(BiConsumer<T, Throwable> handler) { + this.failureHandler = handler; + return getThis(); + } + + /** + * Builds a BatchMessageHandler that can be used to process batches, given + * a user-defined handler to process each item in the batch. This variant + * takes a function that consumes a raw message and the Lambda context. This + * is useful for handlers that need access to the entire message object, not + * just the deserialized contents of the body. + *
<p>
+ * Note: If you don't need the Lambda context, use the variant of this function + * that does not require it. + * + * @param handler Takes a raw message - the underlying AWS Events Library event - to process. + * For instance for SQS this would be an SQSMessage. + * @return A BatchMessageHandler for processing the batch + */ + public abstract BatchMessageHandler<E, R> buildWithRawMessageHandler(BiConsumer<T, Context> handler); + + /** + * Builds a BatchMessageHandler that can be used to process batches, given + * a user-defined handler to process each item in the batch. This variant + * takes a function that consumes only the raw message, without the Lambda + * context. This is useful for handlers that need access to the entire + * message object, not just the deserialized contents of the body. + * + * @param handler Takes a raw message - the underlying AWS Events Library event - to process. + * For instance for SQS this would be an SQSMessage. + * @return A BatchMessageHandler for processing the batch + */ + public BatchMessageHandler<E, R> buildWithRawMessageHandler(Consumer<T> handler) { + return buildWithRawMessageHandler((f, c) -> handler.accept(f)); + } + + /** + * Builds a BatchMessageHandler that can be used to process batches, given + * a user-defined handler to process each item in the batch. This variant + * takes a function that consumes the deserialized body of the given message + * and the Lambda context. If deserialization fails, it will be treated as + * failure of the processing of that item in the batch. + * Note: If you don't need the Lambda context, use the variant of this function + * that does not require it.
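To illustrate the raw and deserialized variants described above side by side, here is a hedged sketch (`Order` is a hypothetical payload class used only for illustration; it is not part of this change):

```java
import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder;
import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler;

public class HandlerVariantsSketch {

    // Hypothetical payload type, for illustration only
    public static class Order {
        public String orderId;
    }

    // Raw variant: the handler sees the whole SQSMessage - attributes,
    // messageId, receipt handle - not just the body.
    BatchMessageHandler<SQSEvent, SQSBatchResponse> rawHandler =
            new BatchMessageHandlerBuilder()
                    .withSqsBatchHandler()
                    .buildWithRawMessageHandler((SQSEvent.SQSMessage m) -> handleBody(m.getBody()));

    // Deserialized variant: the body is unmarshalled into Order first; a
    // deserialization failure marks that item failed in the partial response.
    BatchMessageHandler<SQSEvent, SQSBatchResponse> typedHandler =
            new BatchMessageHandlerBuilder()
                    .withSqsBatchHandler()
                    .buildWithMessageHandler((Order o) -> handleBody(o.orderId), Order.class);

    private void handleBody(String body) {
        // placeholder business logic
    }
}
```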
+ * + * @param handler Processes the deserialized body of the message + * @return A BatchMessageHandler for processing the batch + */ + public abstract BatchMessageHandler buildWithMessageHandler(BiConsumer handler, + Class messageClass); + + /** + * Builds a BatchMessageHandler that can be used to process batches, given + * a user-defined handler to process each item in the batch. This variant + * takes a function that consumes the deserialized body of the given message + * If deserialization fails, it will be treated as + * failure of the processing of that item in the batch. + * Note: If you don't need the Lambda context, use the variant of this function + * that does not require it. + * + * @param handler Processes the deserialized body of the message + * @return A BatchMessageHandler for processing the batch + */ + public BatchMessageHandler buildWithMessageHandler(Consumer handler, Class messageClass) { + return buildWithMessageHandler((f, c) -> handler.accept(f), messageClass); + } + + + /** + * Used to chain the fluent builder interface through the child classes. + * + * @return This + */ + protected abstract C getThis(); +} diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/DynamoDbBatchMessageHandlerBuilder.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/DynamoDbBatchMessageHandlerBuilder.java new file mode 100644 index 000000000..8513322b3 --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/DynamoDbBatchMessageHandlerBuilder.java @@ -0,0 +1,55 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.builder; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.events.DynamodbEvent; +import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; +import java.util.function.BiConsumer; +import software.amazon.lambda.powertools.batch.exception.DeserializationNotSupportedException; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; +import software.amazon.lambda.powertools.batch.handler.DynamoDbBatchMessageHandler; + +/** + * Builds a batch processor for processing DynamoDB Streams batch events + **/ +public class DynamoDbBatchMessageHandlerBuilder + extends AbstractBatchMessageHandlerBuilder { + + + @Override + public BatchMessageHandler buildWithRawMessageHandler( + BiConsumer rawMessageHandler) { + return new DynamoDbBatchMessageHandler( + this.successHandler, + this.failureHandler, + rawMessageHandler); + } + + @Override + public BatchMessageHandler buildWithMessageHandler( + BiConsumer handler, Class messageClass) { + // The DDB provider streams DynamoDB changes, and therefore does not have a customizable payload + throw new DeserializationNotSupportedException(); + } + + @Override + protected DynamoDbBatchMessageHandlerBuilder getThis() { + return this; + } +} diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/KinesisBatchMessageHandlerBuilder.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/KinesisBatchMessageHandlerBuilder.java new 
file mode 100644 index 000000000..30bfcab65 --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/KinesisBatchMessageHandlerBuilder.java @@ -0,0 +1,58 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.builder; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.events.KinesisEvent; +import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; +import java.util.function.BiConsumer; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; +import software.amazon.lambda.powertools.batch.handler.KinesisStreamsBatchMessageHandler; + +/** + * Builds a batch processor for processing Kinesis Streams batch events + */ +public class KinesisBatchMessageHandlerBuilder + extends AbstractBatchMessageHandlerBuilder { + @Override + public BatchMessageHandler buildWithRawMessageHandler( + BiConsumer rawMessageHandler) { + return new KinesisStreamsBatchMessageHandler( + rawMessageHandler, + null, + null, + successHandler, + failureHandler); + } + + @Override + public BatchMessageHandler buildWithMessageHandler( + BiConsumer messageHandler, Class messageClass) { + return new KinesisStreamsBatchMessageHandler<>( + null, + messageHandler, + messageClass, + successHandler, + failureHandler); + } + + @Override + protected 
KinesisBatchMessageHandlerBuilder getThis() { + return this; + } +} diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/SqsBatchMessageHandlerBuilder.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/SqsBatchMessageHandlerBuilder.java new file mode 100644 index 000000000..ee2dc23f6 --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/builder/SqsBatchMessageHandlerBuilder.java @@ -0,0 +1,64 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.builder; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse; +import com.amazonaws.services.lambda.runtime.events.SQSEvent; +import java.util.function.BiConsumer; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; +import software.amazon.lambda.powertools.batch.handler.SqsBatchMessageHandler; + +/** + * Builds a batch processor for the SQS event source. 
+ */ +public class SqsBatchMessageHandlerBuilder extends AbstractBatchMessageHandlerBuilder { + + + @Override + public BatchMessageHandler buildWithRawMessageHandler( + BiConsumer rawMessageHandler) { + return new SqsBatchMessageHandler( + null, + null, + rawMessageHandler, + successHandler, + failureHandler + ); + } + + @Override + public BatchMessageHandler buildWithMessageHandler( + BiConsumer messageHandler, Class messageClass) { + return new SqsBatchMessageHandler<>( + messageHandler, + messageClass, + null, + successHandler, + failureHandler + ); + } + + + @Override + protected SqsBatchMessageHandlerBuilder getThis() { + return this; + } + + +} diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/exception/DeserializationNotSupportedException.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/exception/DeserializationNotSupportedException.java new file mode 100644 index 000000000..6f3206c99 --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/exception/DeserializationNotSupportedException.java @@ -0,0 +1,28 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.exception; + +/** + * Thrown by message handlers that do not support deserializing arbitrary payload + * contents. 
This is the case for instance with DynamoDB Streams, which stream + * changesets about user-defined data, but not the user-defined data models themselves. + */ +public class DeserializationNotSupportedException extends RuntimeException { + + public DeserializationNotSupportedException() { + super("This BatchMessageHandler has a fixed schema and does not support user-defined types"); + } + +} diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/BatchMessageHandler.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/BatchMessageHandler.java new file mode 100644 index 000000000..730211feb --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/BatchMessageHandler.java @@ -0,0 +1,38 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.handler; + +import com.amazonaws.services.lambda.runtime.Context; + +/** + * The basic interface a batch message handler must meet. + * + * @param The type of the Lambda batch event + * @param The type of the lambda batch response + */ +public interface BatchMessageHandler { + + /** + * Processes the given batch returning a partial batch + * response indicating the success and failure of individual + * messages within the batch. 
+ * + * @param event The Lambda event containing the batch to process + * @param context The lambda context + * @return A partial batch response + */ + public abstract R processBatch(E event, Context context); + +} diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/DynamoDbBatchMessageHandler.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/DynamoDbBatchMessageHandler.java new file mode 100644 index 000000000..aa6eba839 --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/DynamoDbBatchMessageHandler.java @@ -0,0 +1,80 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.handler; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.events.DynamodbEvent; +import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; +import java.util.ArrayList; +import java.util.List; +import java.util.function.BiConsumer; +import java.util.function.Consumer; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * A batch message processor for DynamoDB Streams batches. 
+ * + * @see DynamoDB Streams batch failure reporting + */ +public class DynamoDbBatchMessageHandler implements BatchMessageHandler { + private final static Logger LOGGER = LoggerFactory.getLogger(DynamoDbBatchMessageHandler.class); + + private final Consumer successHandler; + private final BiConsumer failureHandler; + private final BiConsumer rawMessageHandler; + + public DynamoDbBatchMessageHandler(Consumer successHandler, + BiConsumer failureHandler, + BiConsumer rawMessageHandler) { + this.successHandler = successHandler; + this.failureHandler = failureHandler; + this.rawMessageHandler = rawMessageHandler; + } + + @Override + public StreamsEventResponse processBatch(DynamodbEvent event, Context context) { + List batchFailures = new ArrayList<>(); + + for (DynamodbEvent.DynamodbStreamRecord record : event.getRecords()) { + try { + + rawMessageHandler.accept(record, context); + // Report success if we have a handler + if (this.successHandler != null) { + this.successHandler.accept(record); + } + } catch (Throwable t) { + String sequenceNumber = record.getDynamodb().getSequenceNumber(); + LOGGER.error("Error while processing record with id {}: {}, adding it to batch item failures", + sequenceNumber, t.getMessage()); + LOGGER.error("Error was", t); + batchFailures.add(new StreamsEventResponse.BatchItemFailure(sequenceNumber)); + + // Report failure if we have a handler + if (this.failureHandler != null) { + // A failing failure handler is no reason to fail the batch + try { + this.failureHandler.accept(record, t); + } catch (Throwable t2) { + LOGGER.warn("failureHandler threw handling failure", t2); + } + } + } + } + + return new StreamsEventResponse(batchFailures); + } +} diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/KinesisStreamsBatchMessageHandler.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/KinesisStreamsBatchMessageHandler.java new file mode 100644 index 
000000000..fe1aaf354 --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/KinesisStreamsBatchMessageHandler.java @@ -0,0 +1,98 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.handler; + + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.events.KinesisEvent; +import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; +import java.util.ArrayList; +import java.util.List; +import java.util.function.BiConsumer; +import java.util.function.Consumer; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import software.amazon.lambda.powertools.utilities.EventDeserializer; + +/** + * A batch message processor for Kinesis Streams batch processing. + *
<p>
+ * Refer to Kinesis Batch failure reporting + * + * @param The user-defined type of the Kinesis record payload + */ +public class KinesisStreamsBatchMessageHandler implements BatchMessageHandler { + private final static Logger LOGGER = LoggerFactory.getLogger(KinesisStreamsBatchMessageHandler.class); + + private final BiConsumer rawMessageHandler; + private final BiConsumer messageHandler; + private final Class messageClass; + private final Consumer successHandler; + private final BiConsumer failureHandler; + + public KinesisStreamsBatchMessageHandler(BiConsumer rawMessageHandler, + BiConsumer messageHandler, + Class messageClass, + Consumer successHandler, + BiConsumer failureHandler) { + + this.rawMessageHandler = rawMessageHandler; + this.messageHandler = messageHandler; + this.messageClass = messageClass; + this.successHandler = successHandler; + this.failureHandler = failureHandler; + } + + @Override + public StreamsEventResponse processBatch(KinesisEvent event, Context context) { + List batchFailures = new ArrayList<>(); + + for (KinesisEvent.KinesisEventRecord record : event.getRecords()) { + try { + if (this.rawMessageHandler != null) { + rawMessageHandler.accept(record, context); + } else { + M messageDeserialized = EventDeserializer.extractDataFrom(record).as(messageClass); + messageHandler.accept(messageDeserialized, context); + } + + // Report success if we have a handler + if (this.successHandler != null) { + this.successHandler.accept(record); + } + } catch (Throwable t) { + String sequenceNumber = record.getEventID(); + LOGGER.error("Error while processing record with eventID {}: {}, adding it to batch item failures", + sequenceNumber, t.getMessage()); + LOGGER.error("Error was", t); + + batchFailures.add(new StreamsEventResponse.BatchItemFailure(record.getKinesis().getSequenceNumber())); + + // Report failure if we have a handler + if (this.failureHandler != null) { + // A failing failure handler is no reason to fail the batch + try { + 
this.failureHandler.accept(record, t); + } catch (Throwable t2) { + LOGGER.warn("failureHandler threw handling failure", t2); + } + } + } + } + + return new StreamsEventResponse(batchFailures); + } +} + diff --git a/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/SqsBatchMessageHandler.java b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/SqsBatchMessageHandler.java new file mode 100644 index 000000000..b3c416a69 --- /dev/null +++ b/powertools-batch/src/main/java/software/amazon/lambda/powertools/batch/handler/SqsBatchMessageHandler.java @@ -0,0 +1,124 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.handler; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse; +import com.amazonaws.services.lambda.runtime.events.SQSEvent; +import java.util.ArrayList; +import java.util.function.BiConsumer; +import java.util.function.Consumer; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import software.amazon.lambda.powertools.utilities.EventDeserializer; + +/** + * A batch message processor for SQS batches. 
+ * + * @param The user-defined type of the message payload + * @see SQS Batch failure reporting + */ +public class SqsBatchMessageHandler implements BatchMessageHandler { + private final static Logger LOGGER = LoggerFactory.getLogger(SqsBatchMessageHandler.class); + + // The attribute on an SQS-FIFO message used to record the message group ID + // https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#sample-fifo-queues-message-event + private final static String MESSAGE_GROUP_ID_KEY = "MessageGroupId"; + + private final Class messageClass; + private final BiConsumer messageHandler; + private final BiConsumer rawMessageHandler; + private final Consumer successHandler; + private final BiConsumer failureHandler; + + public SqsBatchMessageHandler(BiConsumer messageHandler, Class messageClass, + BiConsumer rawMessageHandler, + Consumer successHandler, + BiConsumer failureHandler) { + this.messageHandler = messageHandler; + this.messageClass = messageClass; + this.rawMessageHandler = rawMessageHandler; + this.successHandler = successHandler; + this.failureHandler = failureHandler; + } + + @Override + public SQSBatchResponse processBatch(SQSEvent event, Context context) { + SQSBatchResponse response = SQSBatchResponse.builder().withBatchItemFailures(new ArrayList<>()).build(); + + // If we are working on a FIFO queue, when any message fails we should stop processing and return the + // rest of the batch as failed too. We use this variable to track when that has happened. + // https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting + boolean failWholeBatch = false; + + int messageCursor = 0; + for (; messageCursor < event.getRecords().size() && !failWholeBatch; messageCursor++) { + SQSEvent.SQSMessage message = event.getRecords().get(messageCursor); + + String messageGroupId = message.getAttributes() != null ? 
+ message.getAttributes().get(MESSAGE_GROUP_ID_KEY) : null; + + try { + if (this.rawMessageHandler != null) { + rawMessageHandler.accept(message, context); + } else { + M messageDeserialized = EventDeserializer.extractDataFrom(message).as(messageClass); + messageHandler.accept(messageDeserialized, context); + } + + // Report success if we have a handler + if (this.successHandler != null) { + this.successHandler.accept(message); + } + + } catch (Throwable t) { + LOGGER.error("Error while processing message with messageId {}: {}, adding it to batch item failures", + message.getMessageId(), t.getMessage()); + LOGGER.error("Error was", t); + + response.getBatchItemFailures() + .add(SQSBatchResponse.BatchItemFailure.builder().withItemIdentifier(message.getMessageId()) + .build()); + if (messageGroupId != null) { + failWholeBatch = true; + LOGGER.info( + "A message in a batch with messageGroupId {} and messageId {} failed; failing the rest of the batch too" + , messageGroupId, message.getMessageId()); + } + + // Report failure if we have a handler + if (this.failureHandler != null) { + // A failing failure handler is no reason to fail the batch + try { + this.failureHandler.accept(message, t); + } catch (Throwable t2) { + LOGGER.warn("failureHandler threw handling failure", t2); + } + } + + } + } + + if (failWholeBatch) { + // Add the remaining messages to the batch item failures + event.getRecords() + .subList(messageCursor, event.getRecords().size()) + .forEach(message -> response.getBatchItemFailures() + .add(SQSBatchResponse.BatchItemFailure.builder().withItemIdentifier(message.getMessageId()) + .build())); + } + return response; + } +} diff --git a/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/DdbBatchProcessorTest.java b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/DdbBatchProcessorTest.java new file mode 100644 index 000000000..9e2c211e2 --- /dev/null +++ 
b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/DdbBatchProcessorTest.java @@ -0,0 +1,127 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch; + +import static org.assertj.core.api.Assertions.assertThat; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.events.DynamodbEvent; +import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; +import com.amazonaws.services.lambda.runtime.tests.annotations.Event; +import java.util.concurrent.atomic.AtomicBoolean; +import org.junit.jupiter.params.ParameterizedTest; +import org.mockito.Mock; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; + +public class DdbBatchProcessorTest { + + @Mock + private Context context; + + private void processMessageSucceeds(DynamodbEvent.DynamodbStreamRecord record, Context context) { + // Great success + } + + private void processMessageFailsForFixedMessage(DynamodbEvent.DynamodbStreamRecord record, Context context) { + if (record.getDynamodb().getSequenceNumber().equals("4421584500000000017450439091")) { + throw new RuntimeException("fake exception"); + } + } + + @ParameterizedTest + @Event(value = "dynamo_event.json", type = DynamodbEvent.class) + public void batchProcessingSucceedsAndReturns(DynamodbEvent event) { + // Arrange + BatchMessageHandler handler = new 
BatchMessageHandlerBuilder() + .withDynamoDbBatchHandler() + .buildWithRawMessageHandler(this::processMessageSucceeds); + + // Act + StreamsEventResponse dynamodbBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(dynamodbBatchResponse.getBatchItemFailures()).hasSize(0); + } + + @ParameterizedTest + @Event(value = "dynamo_event.json", type = DynamodbEvent.class) + public void shouldAddMessageToBatchFailure_whenException_withMessage(DynamodbEvent event) { + // Arrange + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withDynamoDbBatchHandler() + .buildWithRawMessageHandler(this::processMessageFailsForFixedMessage); + + // Act + StreamsEventResponse dynamodbBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(dynamodbBatchResponse.getBatchItemFailures()).hasSize(1); + StreamsEventResponse.BatchItemFailure batchItemFailure = dynamodbBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo("4421584500000000017450439091"); + } + + @ParameterizedTest + @Event(value = "dynamo_event.json", type = DynamodbEvent.class) + public void failingFailureHandlerShouldntFailBatch(DynamodbEvent event) { + // Arrange + AtomicBoolean wasCalledAndFailed = new AtomicBoolean(false); + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withDynamoDbBatchHandler() + .withFailureHandler((m, e) -> { + if (m.getDynamodb().getSequenceNumber().equals("4421584500000000017450439091")) { + wasCalledAndFailed.set(true); + throw new RuntimeException("Success handler throws"); + } + }) + .buildWithRawMessageHandler(this::processMessageFailsForFixedMessage); + + // Act + StreamsEventResponse dynamodbBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(dynamodbBatchResponse).isNotNull(); + assertThat(dynamodbBatchResponse.getBatchItemFailures().size()).isEqualTo(1); + assertThat(wasCalledAndFailed.get()).isTrue(); + 
StreamsEventResponse.BatchItemFailure batchItemFailure = dynamodbBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo("4421584500000000017450439091"); + } + + @ParameterizedTest + @Event(value = "dynamo_event.json", type = DynamodbEvent.class) + public void failingSuccessHandlerShouldntFailBatchButShouldFailMessage(DynamodbEvent event) { + // Arrange + AtomicBoolean wasCalledAndFailed = new AtomicBoolean(false); + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withDynamoDbBatchHandler() + .withSuccessHandler((e) -> { + if (e.getDynamodb().getSequenceNumber().equals("4421584500000000017450439091")) { + wasCalledAndFailed.set(true); + throw new RuntimeException("Success handler throws"); + } + }) + .buildWithRawMessageHandler(this::processMessageSucceeds); + + // Act + StreamsEventResponse dynamodbBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(dynamodbBatchResponse).isNotNull(); + assertThat(dynamodbBatchResponse.getBatchItemFailures().size()).isEqualTo(1); + assertThat(wasCalledAndFailed.get()).isTrue(); + StreamsEventResponse.BatchItemFailure batchItemFailure = dynamodbBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo("4421584500000000017450439091"); + } + +} diff --git a/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/KinesisBatchProcessorTest.java b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/KinesisBatchProcessorTest.java new file mode 100644 index 000000000..d78638e1d --- /dev/null +++ b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/KinesisBatchProcessorTest.java @@ -0,0 +1,156 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch; + +import static org.assertj.core.api.Assertions.assertThat; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.events.KinesisEvent; +import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; +import com.amazonaws.services.lambda.runtime.tests.annotations.Event; +import java.util.concurrent.atomic.AtomicBoolean; +import org.junit.jupiter.params.ParameterizedTest; +import org.mockito.Mock; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; +import software.amazon.lambda.powertools.batch.model.Product; + +public class KinesisBatchProcessorTest { + + @Mock + private Context context; + + private void processMessageSucceeds(KinesisEvent.KinesisEventRecord record, Context context) { + // Great success + } + + private void processMessageFailsForFixedMessage(KinesisEvent.KinesisEventRecord record, Context context) { + if (record.getKinesis().getSequenceNumber() + .equals("49545115243490985018280067714973144582180062593244200961")) { + throw new RuntimeException("fake exception"); + } + } + + // A handler that throws an exception for _one_ of the deserialized products in the sample messages + public void processMessageFailsForFixedProduct(Product product, Context context) { + if (product.getId() == 1234) { + throw new RuntimeException("fake exception"); + } + } + + @ParameterizedTest + @Event(value = "kinesis_event.json", type = KinesisEvent.class) + public void batchProcessingSucceedsAndReturns(KinesisEvent event) { + 
// Arrange + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withKinesisBatchHandler() + .buildWithRawMessageHandler(this::processMessageSucceeds); + + // Act + StreamsEventResponse kinesisBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(kinesisBatchResponse.getBatchItemFailures()).hasSize(0); + } + + @ParameterizedTest + @Event(value = "kinesis_event.json", type = KinesisEvent.class) + public void shouldAddMessageToBatchFailure_whenException_withMessage(KinesisEvent event) { + // Arrange + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withKinesisBatchHandler() + .buildWithRawMessageHandler(this::processMessageFailsForFixedMessage); + + // Act + StreamsEventResponse kinesisBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(kinesisBatchResponse.getBatchItemFailures()).hasSize(1); + StreamsEventResponse.BatchItemFailure batchItemFailure = kinesisBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo( + "49545115243490985018280067714973144582180062593244200961"); + } + + @ParameterizedTest + @Event(value = "kinesis_event.json", type = KinesisEvent.class) + public void shouldAddMessageToBatchFailure_whenException_withProduct(KinesisEvent event) { + // Arrange + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withKinesisBatchHandler() + .buildWithMessageHandler(this::processMessageFailsForFixedProduct, Product.class); + + // Act + StreamsEventResponse kinesisBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(kinesisBatchResponse.getBatchItemFailures()).hasSize(1); + StreamsEventResponse.BatchItemFailure batchItemFailure = kinesisBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo( + "49545115243490985018280067714973144582180062593244200961"); + } + + @ParameterizedTest + @Event(value = "kinesis_event.json", type = 
KinesisEvent.class) + public void failingFailureHandlerShouldntFailBatch(KinesisEvent event) { + // Arrange + AtomicBoolean wasCalled = new AtomicBoolean(false); + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withKinesisBatchHandler() + .withFailureHandler((e, ex) -> { + wasCalled.set(true); + throw new RuntimeException("Well, this doesn't look great"); + }) + .buildWithMessageHandler(this::processMessageFailsForFixedProduct, Product.class); + + // Act + StreamsEventResponse kinesisBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(kinesisBatchResponse).isNotNull(); + assertThat(kinesisBatchResponse.getBatchItemFailures().size()).isEqualTo(1); + assertThat(wasCalled.get()).isTrue(); + StreamsEventResponse.BatchItemFailure batchItemFailure = kinesisBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo( + "49545115243490985018280067714973144582180062593244200961"); + } + + @ParameterizedTest + @Event(value = "kinesis_event.json", type = KinesisEvent.class) + public void failingSuccessHandlerShouldntFailBatchButShouldFailMessage(KinesisEvent event) { + // Arrange + AtomicBoolean wasCalledAndFailed = new AtomicBoolean(false); + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withKinesisBatchHandler() + .withSuccessHandler((e) -> { + if (e.getKinesis().getSequenceNumber() + .equals("49545115243490985018280067714973144582180062593244200961")) { + wasCalledAndFailed.set(true); + throw new RuntimeException("Success handler throws"); + } + }) + .buildWithRawMessageHandler(this::processMessageSucceeds); + + // Act + StreamsEventResponse kinesisBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(kinesisBatchResponse).isNotNull(); + assertThat(kinesisBatchResponse.getBatchItemFailures().size()).isEqualTo(1); + assertThat(wasCalledAndFailed.get()).isTrue(); + StreamsEventResponse.BatchItemFailure batchItemFailure = 
kinesisBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo( + "49545115243490985018280067714973144582180062593244200961"); + } + +} diff --git a/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/SQSBatchProcessorTest.java b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/SQSBatchProcessorTest.java new file mode 100644 index 000000000..2f9429fa3 --- /dev/null +++ b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/SQSBatchProcessorTest.java @@ -0,0 +1,171 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package software.amazon.lambda.powertools.batch; + +import static org.assertj.core.api.Assertions.assertThat; + +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse; +import com.amazonaws.services.lambda.runtime.events.SQSEvent; +import com.amazonaws.services.lambda.runtime.tests.annotations.Event; +import java.util.concurrent.atomic.AtomicBoolean; +import org.junit.jupiter.params.ParameterizedTest; +import org.mockito.Mock; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; +import software.amazon.lambda.powertools.batch.model.Product; + +public class SQSBatchProcessorTest { + @Mock + private Context context; + + // A handler that works + private void processMessageSucceeds(SQSEvent.SQSMessage sqsMessage) { + } + + // A handler that throws an exception for _one_ of the sample messages + private void processMessageFailsForFixedMessage(SQSEvent.SQSMessage message, Context context) { + if (message.getMessageId().equals("e9144555-9a4f-4ec3-99a0-34ce359b4b54")) { + throw new RuntimeException("fake exception"); + } + } + + // A handler that throws an exception for _one_ of the deserialized products in the sample messages + public void processMessageFailsForFixedProduct(Product product, Context context) { + if (product.getId() == 12345) { + throw new RuntimeException("fake exception"); + } + } + + @ParameterizedTest + @Event(value = "sqs_event.json", type = SQSEvent.class) + public void batchProcessingSucceedsAndReturns(SQSEvent event) { + // Arrange + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withSqsBatchHandler() + .buildWithRawMessageHandler(this::processMessageSucceeds); + + // Act + SQSBatchResponse sqsBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(sqsBatchResponse.getBatchItemFailures()).hasSize(0); + } + + + @ParameterizedTest + @Event(value = "sqs_event.json", type = SQSEvent.class) + public void 
shouldAddMessageToBatchFailure_whenException_withMessage(SQSEvent event) { + // Arrange + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withSqsBatchHandler() + .buildWithRawMessageHandler(this::processMessageFailsForFixedMessage); + + // Act + SQSBatchResponse sqsBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(sqsBatchResponse.getBatchItemFailures()).hasSize(1); + SQSBatchResponse.BatchItemFailure batchItemFailure = sqsBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo("e9144555-9a4f-4ec3-99a0-34ce359b4b54"); + } + + @ParameterizedTest + @Event(value = "sqs_fifo_event.json", type = SQSEvent.class) + public void shouldAddMessageToBatchFailure_whenException_withSQSFIFO(SQSEvent event) { + // Arrange + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withSqsBatchHandler() + .buildWithRawMessageHandler(this::processMessageFailsForFixedMessage); + + // Act + SQSBatchResponse sqsBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(sqsBatchResponse.getBatchItemFailures()).hasSize(2); + SQSBatchResponse.BatchItemFailure batchItemFailure = sqsBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo("e9144555-9a4f-4ec3-99a0-34ce359b4b54"); + batchItemFailure = sqsBatchResponse.getBatchItemFailures().get(1); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo("f9144555-9a4f-4ec3-99a0-34ce359b4b54"); + } + + + @ParameterizedTest + @Event(value = "sqs_event.json", type = SQSEvent.class) + public void shouldAddMessageToBatchFailure_whenException_withProduct(SQSEvent event) { + + // Arrange + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withSqsBatchHandler() + .buildWithMessageHandler(this::processMessageFailsForFixedProduct, Product.class); + + // Act + SQSBatchResponse sqsBatchResponse = handler.processBatch(event, context); + 
// Assert + assertThat(sqsBatchResponse.getBatchItemFailures()).hasSize(1); + SQSBatchResponse.BatchItemFailure batchItemFailure = sqsBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo("e9144555-9a4f-4ec3-99a0-34ce359b4b54"); + } + + @ParameterizedTest + @Event(value = "sqs_event.json", type = SQSEvent.class) + public void failingFailureHandlerShouldntFailBatch(SQSEvent event) { + // Arrange + AtomicBoolean wasCalled = new AtomicBoolean(false); + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withSqsBatchHandler() + .withFailureHandler((e, ex) -> { + wasCalled.set(true); + throw new RuntimeException("Well, this doesn't look great"); + }) + .buildWithMessageHandler(this::processMessageFailsForFixedProduct, Product.class); + + // Act + SQSBatchResponse sqsBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(sqsBatchResponse).isNotNull(); + assertThat(wasCalled.get()).isTrue(); + SQSBatchResponse.BatchItemFailure batchItemFailure = sqsBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo("e9144555-9a4f-4ec3-99a0-34ce359b4b54"); + } + + @ParameterizedTest + @Event(value = "sqs_event.json", type = SQSEvent.class) + public void failingSuccessHandlerShouldntFailBatchButShouldFailMessage(SQSEvent event) { + // Arrange + AtomicBoolean wasCalledAndFailed = new AtomicBoolean(false); + BatchMessageHandler handler = new BatchMessageHandlerBuilder() + .withSqsBatchHandler() + .withSuccessHandler((e) -> { + if (e.getMessageId().equals("e9144555-9a4f-4ec3-99a0-34ce359b4b54")) { + wasCalledAndFailed.set(true); + throw new RuntimeException("Success handler throws"); + } + }) + .buildWithRawMessageHandler(this::processMessageSucceeds); + + // Act + SQSBatchResponse sqsBatchResponse = handler.processBatch(event, context); + + // Assert + assertThat(sqsBatchResponse).isNotNull(); + assertThat(wasCalledAndFailed.get()).isTrue(); + 
SQSBatchResponse.BatchItemFailure batchItemFailure = sqsBatchResponse.getBatchItemFailures().get(0); + assertThat(batchItemFailure.getItemIdentifier()).isEqualTo("e9144555-9a4f-4ec3-99a0-34ce359b4b54"); + } + + +} diff --git a/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/model/Basket.java b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/model/Basket.java new file mode 100644 index 000000000..6009e79d6 --- /dev/null +++ b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/model/Basket.java @@ -0,0 +1,67 @@ +/* + * Copyright 2022 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.batch.model; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.Objects; + +public class Basket { + private List products = new ArrayList<>(); + + public Basket() { + } + + public Basket(Product... 
p) { + products.addAll(Arrays.asList(p)); + } + + public List getProducts() { + return products; + } + + public void setProducts(List products) { + this.products = products; + } + + public void add(Product product) { + products.add(product); + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + Basket basket = (Basket) o; + return products.equals(basket.products); + } + + @Override + public String toString() { + return "Basket{" + + "products=" + products + + '}'; + } + + @Override + public int hashCode() { + return Objects.hash(products); + } +} diff --git a/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/model/Product.java b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/model/Product.java new file mode 100644 index 000000000..2695578f9 --- /dev/null +++ b/powertools-batch/src/test/java/software/amazon/lambda/powertools/batch/model/Product.java @@ -0,0 +1,84 @@ +package software.amazon.lambda.powertools.batch.model; + +/* + * Copyright 2022 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +import java.util.Objects; + +public class Product { + private long id; + + private String name; + + private double price; + + public Product() { + } + + public Product(long id, String name, double price) { + this.id = id; + this.name = name; + this.price = price; + } + + public long getId() { + return id; + } + + public void setId(long id) { + this.id = id; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public double getPrice() { + return price; + } + + public void setPrice(double price) { + this.price = price; + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + Product product = (Product) o; + return id == product.id && Double.compare(product.price, price) == 0 && Objects.equals(name, product.name); + } + + @Override + public int hashCode() { + return Objects.hash(id, name, price); + } + + @Override + public String toString() { + return "Product{" + + "id=" + id + + ", name='" + name + '\'' + + ", price=" + price + + '}'; + } +} diff --git a/powertools-batch/src/test/resources/dynamo_event.json b/powertools-batch/src/test/resources/dynamo_event.json new file mode 100644 index 000000000..f28ce0e6e --- /dev/null +++ b/powertools-batch/src/test/resources/dynamo_event.json @@ -0,0 +1,97 @@ +{ + "Records": [ + { + "eventID": "c4ca4238a0b923820dcc509a6f75849b", + "eventName": "INSERT", + "eventVersion": "1.1", + "eventSource": "aws:dynamodb", + "awsRegion": "eu-central-1", + "dynamodb": { + "Keys": { + "Id": { + "N": "101" + } + }, + "NewImage": { + "Message": { + "S": "New item!" 
+ }, + "Id": { + "N": "101" + } + }, + "ApproximateCreationDateTime": 1428537600, + "SequenceNumber": "4421584500000000017450439091", + "SizeBytes": 26, + "StreamViewType": "NEW_AND_OLD_IMAGES" + }, + "eventSourceARN": "arn:aws:dynamodb:eu-central-1:123456789012:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899", + "userIdentity": { + "principalId": "dynamodb.amazonaws.com", + "type": "Service" + } + }, + { + "eventID": "c81e728d9d4c2f636f067f89cc14862c", + "eventName": "MODIFY", + "eventVersion": "1.1", + "eventSource": "aws:dynamodb", + "awsRegion": "eu-central-1", + "dynamodb": { + "Keys": { + "Id": { + "N": "101" + } + }, + "NewImage": { + "Message": { + "S": "This item has changed" + }, + "Id": { + "N": "101" + } + }, + "OldImage": { + "Message": { + "S": "New item!" + }, + "Id": { + "N": "101" + } + }, + "ApproximateCreationDateTime": 1428537600, + "SequenceNumber": "4421584500000000017450439092", + "SizeBytes": 59, + "StreamViewType": "NEW_AND_OLD_IMAGES" + }, + "eventSourceARN": "arn:aws:dynamodb:eu-central-1:123456789012:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899" + }, + { + "eventID": "eccbc87e4b5ce2fe28308fd9f2a7baf3", + "eventName": "REMOVE", + "eventVersion": "1.1", + "eventSource": "aws:dynamodb", + "awsRegion": "eu-central-1", + "dynamodb": { + "Keys": { + "Id": { + "N": "101" + } + }, + "OldImage": { + "Message": { + "S": "This item has changed" + }, + "Id": { + "N": "101" + } + }, + "ApproximateCreationDateTime": 1428537600, + "SequenceNumber": "4421584500000000017450439093", + "SizeBytes": 38, + "StreamViewType": "NEW_AND_OLD_IMAGES" + }, + "eventSourceARN": "arn:aws:dynamodb:eu-central-1:123456789012:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899" + } + ] +} \ No newline at end of file diff --git a/powertools-batch/src/test/resources/kinesis_event.json b/powertools-batch/src/test/resources/kinesis_event.json new file mode 100644 index 000000000..c9068da9b --- /dev/null +++ 
b/powertools-batch/src/test/resources/kinesis_event.json @@ -0,0 +1,38 @@ +{ + "Records": [ + { + "kinesis": { + "partitionKey": "partitionKey-03", + "kinesisSchemaVersion": "1.0", + "data": "eyJpZCI6MTIzNCwgIm5hbWUiOiJwcm9kdWN0IiwgInByaWNlIjo0Mn0=", + "sequenceNumber": "49545115243490985018280067714973144582180062593244200961", + "approximateArrivalTimestamp": 1428537600, + "encryptionType": "NONE" + }, + "eventSource": "aws:kinesis", + "eventID": "shardId-000000000000:49545115243490985018280067714973144582180062593244200961", + "invokeIdentityArn": "arn:aws:iam::EXAMPLE", + "eventVersion": "1.0", + "eventName": "aws:kinesis:record", + "eventSourceARN": "arn:aws:kinesis:EXAMPLE", + "awsRegion": "eu-central-1" + }, + { + "kinesis": { + "partitionKey": "partitionKey-03", + "kinesisSchemaVersion": "1.0", + "data": "eyJpZCI6MTIzNDUsICJuYW1lIjoicHJvZHVjdDUiLCAicHJpY2UiOjQ1fQ==", + "sequenceNumber": "49545115243490985018280067714973144582180062593244200962", + "approximateArrivalTimestamp": 1428537600, + "encryptionType": "NONE" + }, + "eventSource": "aws:kinesis", + "eventID": "shardId-000000000000:49545115243490985018280067714973144582180062593244200961", + "invokeIdentityArn": "arn:aws:iam::EXAMPLE", + "eventVersion": "1.0", + "eventName": "aws:kinesis:record", + "eventSourceARN": "arn:aws:kinesis:EXAMPLE", + "awsRegion": "eu-central-1" + } + ] +} \ No newline at end of file diff --git a/powertools-batch/src/test/resources/sqs_event.json b/powertools-batch/src/test/resources/sqs_event.json new file mode 100644 index 000000000..7fdad096f --- /dev/null +++ b/powertools-batch/src/test/resources/sqs_event.json @@ -0,0 +1,55 @@ +{ + "Records": [ + { + "messageId": "d9144555-9a4f-4ec3-99a0-34ce359b4b54", + "receiptHandle": "13e7f7851d2eaa5c01f208ebadbf1e72==", + "body": "{\n \"id\": 1234,\n \"name\": \"product\",\n \"price\": 42\n}", + "attributes": { + "ApproximateReceiveCount": "1", + "SentTimestamp": "1601975706495", + "SenderId": "AROAIFU437PVZ5L2J53F5", + 
"ApproximateFirstReceiveTimestamp": "1601975706499" + }, + "messageAttributes": { + }, + "md5OfBody": "13e7f7851d2eaa5c01f208ebadbf1e72", + "eventSource": "aws:sqs", + "eventSourceARN": "arn:aws:sqs:eu-central-1:123456789012:TestLambda", + "awsRegion": "eu-central-1" + }, + { + "messageId": "e9144555-9a4f-4ec3-99a0-34ce359b4b54", + "receiptHandle": "13e7f7851d2eaa5c01f208ebadbf1e72==", + "body": "{\n \"id\": 12345,\n \"name\": \"product5\",\n \"price\": 45\n}", + "attributes": { + "ApproximateReceiveCount": "1", + "SentTimestamp": "1601975706495", + "SenderId": "AROAIFU437PVZ5L2J53F5", + "ApproximateFirstReceiveTimestamp": "1601975706499" + }, + "messageAttributes": { + }, + "md5OfBody": "13e7f7851d2eaa5c01f208ebadbf1e72", + "eventSource": "aws:sqs", + "eventSourceARN": "arn:aws:sqs:eu-central-1:123456789012:TestLambda", + "awsRegion": "eu-central-1" + }, + { + "messageId": "f9144555-9a4f-4ec3-99a0-34ce359b4b54", + "receiptHandle": "13e7f7851d2eaa5c01f208ebadbf1e72==", + "body": "{\n \"id\": 123456,\n \"name\": \"product6\",\n \"price\": 46\n}", + "attributes": { + "ApproximateReceiveCount": "1", + "SentTimestamp": "1601975706495", + "SenderId": "AROAIFU437PVZ5L2J53F5", + "ApproximateFirstReceiveTimestamp": "1601975706499" + }, + "messageAttributes": { + }, + "md5OfBody": "13e7f7851d2eaa5c01f208ebadbf1e72", + "eventSource": "aws:sqs", + "eventSourceARN": "arn:aws:sqs:eu-central-1:123456789012:TestLambda", + "awsRegion": "eu-central-1" + } + ] +} \ No newline at end of file diff --git a/powertools-batch/src/test/resources/sqs_fifo_event.json b/powertools-batch/src/test/resources/sqs_fifo_event.json new file mode 100644 index 000000000..e5abb1e5a --- /dev/null +++ b/powertools-batch/src/test/resources/sqs_fifo_event.json @@ -0,0 +1,58 @@ +{ + "Records": [ + { + "messageId": "d9144555-9a4f-4ec3-99a0-34ce359b4b54", + "receiptHandle": "13e7f7851d2eaa5c01f208ebadbf1e72==", + "body": "{\n \"id\": 1234,\n \"name\": \"product\",\n \"price\": 42\n}", + "attributes": { + 
"MessageGroupId": "groupA", + "ApproximateReceiveCount": "1", + "SentTimestamp": "1601975706495", + "SenderId": "AROAIFU437PVZ5L2J53F5", + "ApproximateFirstReceiveTimestamp": "1601975706499" + }, + "messageAttributes": { + }, + "md5OfBody": "13e7f7851d2eaa5c01f208ebadbf1e72", + "eventSource": "aws:sqs", + "eventSourceARN": "arn:aws:sqs:eu-central-1:123456789012:TestLambda", + "awsRegion": "eu-central-1" + }, + { + "messageId": "e9144555-9a4f-4ec3-99a0-34ce359b4b54", + "receiptHandle": "13e7f7851d2eaa5c01f208ebadbf1e72==", + "body": "{\n \"id\": 12345,\n \"name\": \"product5\",\n \"price\": 45\n}", + "attributes": { + "MessageGroupId": "groupA", + "ApproximateReceiveCount": "1", + "SentTimestamp": "1601975706495", + "SenderId": "AROAIFU437PVZ5L2J53F5", + "ApproximateFirstReceiveTimestamp": "1601975706499" + }, + "messageAttributes": { + }, + "md5OfBody": "13e7f7851d2eaa5c01f208ebadbf1e72", + "eventSource": "aws:sqs", + "eventSourceARN": "arn:aws:sqs:eu-central-1:123456789012:TestLambda", + "awsRegion": "eu-central-1" + }, + { + "messageId": "f9144555-9a4f-4ec3-99a0-34ce359b4b54", + "receiptHandle": "13e7f7851d2eaa5c01f208ebadbf1e72==", + "body": "{\n \"id\": 123456,\n \"name\": \"product6\",\n \"price\": 46\n}", + "attributes": { + "MessageGroupId": "groupA", + "ApproximateReceiveCount": "1", + "SentTimestamp": "1601975706495", + "SenderId": "AROAIFU437PVZ5L2J53F5", + "ApproximateFirstReceiveTimestamp": "1601975706499" + }, + "messageAttributes": { + }, + "md5OfBody": "13e7f7851d2eaa5c01f208ebadbf1e72", + "eventSource": "aws:sqs", + "eventSourceARN": "arn:aws:sqs:eu-central-1:123456789012:TestLambda", + "awsRegion": "eu-central-1" + } + ] +} \ No newline at end of file diff --git a/powertools-e2e-tests/handlers/batch/pom.xml b/powertools-e2e-tests/handlers/batch/pom.xml new file mode 100644 index 000000000..995121e2a --- /dev/null +++ b/powertools-e2e-tests/handlers/batch/pom.xml @@ -0,0 +1,72 @@ + + 4.0.0 + + + software.amazon.lambda + e2e-test-handlers-parent + 
1.0.0 + + + e2e-test-handler-batch + jar + A Lambda function using Powertools for AWS Lambda (Java) batch + + + + software.amazon.lambda + powertools-batch + + + software.amazon.lambda + powertools-logging + + + com.amazonaws + aws-lambda-java-events + + + com.amazonaws + aws-lambda-java-serialization + + + org.apache.logging.log4j + log4j-slf4j2-impl + + + software.amazon.awssdk + dynamodb + + + + + + + dev.aspectj + aspectj-maven-plugin + + ${maven.compiler.source} + ${maven.compiler.target} + ${maven.compiler.target} + + + software.amazon.lambda + powertools-logging + + + + + + + compile + + + + + + org.apache.maven.plugins + maven-shade-plugin + + + + diff --git a/powertools-e2e-tests/handlers/batch/src/main/java/software/amazon/lambda/powertools/e2e/Function.java b/powertools-e2e-tests/handlers/batch/src/main/java/software/amazon/lambda/powertools/e2e/Function.java new file mode 100644 index 000000000..64f5a02c2 --- /dev/null +++ b/powertools-e2e-tests/handlers/batch/src/main/java/software/amazon/lambda/powertools/e2e/Function.java @@ -0,0 +1,172 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ * + */ + +package software.amazon.lambda.powertools.e2e; + +import com.amazonaws.lambda.thirdparty.com.fasterxml.jackson.databind.ObjectMapper; +import com.amazonaws.services.lambda.runtime.Context; +import com.amazonaws.services.lambda.runtime.RequestHandler; +import com.amazonaws.services.lambda.runtime.events.DynamodbEvent; +import com.amazonaws.services.lambda.runtime.events.KinesisEvent; +import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse; +import com.amazonaws.services.lambda.runtime.events.SQSEvent; +import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse; +import com.amazonaws.services.lambda.runtime.serialization.PojoSerializer; +import com.amazonaws.services.lambda.runtime.serialization.events.LambdaEventSerializers; +import com.amazonaws.services.lambda.runtime.serialization.factories.JacksonFactory; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.JsonMappingException; +import com.fasterxml.jackson.databind.JsonNode; + +import java.io.BufferedReader; +import java.io.BufferedWriter; +import java.io.IOException; +import java.io.InputStream; +import java.io.InputStreamReader; +import java.io.OutputStream; +import java.io.OutputStreamWriter; +import java.nio.charset.StandardCharsets; +import java.util.HashMap; +import java.util.Map; +import java.util.stream.Collectors; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.core.util.IOUtils; +import software.amazon.awssdk.services.dynamodb.DynamoDbClient; +import software.amazon.awssdk.services.dynamodb.model.AttributeValue; +import software.amazon.awssdk.services.dynamodb.model.PutItemRequest; +import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder; +import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler; +import software.amazon.lambda.powertools.e2e.model.Product; +import 
software.amazon.lambda.powertools.logging.Logging; +import software.amazon.lambda.powertools.utilities.JsonConfig; + + +public class Function implements RequestHandler { + + private final static Logger LOGGER = LogManager.getLogger(Function.class); + + private final BatchMessageHandler sqsHandler; + private final BatchMessageHandler kinesisHandler; + private final BatchMessageHandler ddbHandler; + private final String ddbOutputTable; + private DynamoDbClient ddbClient; + + public Function() { + sqsHandler = new BatchMessageHandlerBuilder() + .withSqsBatchHandler() + .buildWithMessageHandler(this::processProductMessage, Product.class); + + kinesisHandler = new BatchMessageHandlerBuilder() + .withKinesisBatchHandler() + .buildWithMessageHandler(this::processProductMessage, Product.class); + + ddbHandler = new BatchMessageHandlerBuilder() + .withDynamoDbBatchHandler() + .buildWithRawMessageHandler(this::processDdbMessage); + + this.ddbOutputTable = System.getenv("TABLE_FOR_ASYNC_TESTS"); + } + + private void processProductMessage(Product p, Context c) { + LOGGER.info("Processing product " + p); + + // Write the product details to the output table + ddbClient = DynamoDbClient.builder() + .build(); + Map results = new HashMap<>(); + results.put("functionName", AttributeValue.builder() + .s(c.getFunctionName()) + .build()); + results.put("id", AttributeValue.builder() + .s(Long.toString(p.getId())) + .build()); + results.put("name", AttributeValue.builder() + .s(p.getName()) + .build()); + results.put("price", AttributeValue.builder() + .n(Double.toString(p.getPrice())) + .build()); + ddbClient.putItem(PutItemRequest.builder() + .tableName(ddbOutputTable) + .item(results) + .build()); + } + + private void processDdbMessage(DynamodbEvent.DynamodbStreamRecord dynamodbStreamRecord, Context context) { + LOGGER.info("Processing DynamoDB Stream Record " + dynamodbStreamRecord); + + ddbClient = DynamoDbClient.builder() + .build(); + + String 
id = dynamodbStreamRecord.getDynamodb().getKeys().get("id").getS(); + LOGGER.info("Incoming ID is " + id); + + Map results = new HashMap<>(); + results.put("functionName", AttributeValue.builder() + .s(context.getFunctionName()) + .build()); + results.put("id", AttributeValue.builder() + .s(id) + .build()); + + ddbClient.putItem(PutItemRequest.builder() + .tableName(ddbOutputTable) + .item(results) + .build()); + } + + public Object createResult(String input, Context context) { + + LOGGER.info(input); + + PojoSerializer serializer = + LambdaEventSerializers.serializerFor(SQSEvent.class, this.getClass().getClassLoader()); + SQSEvent event = serializer.fromJson(input); + if (event.getRecords().get(0).getEventSource().equals("aws:sqs")) { + LOGGER.info("Running for SQS"); + LOGGER.info(event); + return sqsHandler.processBatch(event, context); + } + + PojoSerializer kinesisSerializer = + LambdaEventSerializers.serializerFor(KinesisEvent.class, this.getClass().getClassLoader()); + KinesisEvent kinesisEvent = kinesisSerializer.fromJson(input); + if (kinesisEvent.getRecords().get(0).getEventSource().equals("aws:kinesis")) { + LOGGER.info("Running for Kinesis"); + return kinesisHandler.processBatch(kinesisEvent, context); + } + + // Well, let's try dynamo + PojoSerializer ddbSerializer = + LambdaEventSerializers.serializerFor(DynamodbEvent.class, this.getClass().getClassLoader()); + LOGGER.info("Running for DynamoDB"); + DynamodbEvent ddbEvent = ddbSerializer.fromJson(input); + return ddbHandler.processBatch(ddbEvent, context); + } + + @Override + public Object handleRequest(InputStream inputStream, Context context) { + + String input = new BufferedReader( + new InputStreamReader(inputStream, StandardCharsets.UTF_8)) + .lines() + .collect(Collectors.joining("\n")); + + return createResult(input, context); + } +} diff --git a/powertools-e2e-tests/handlers/batch/src/main/java/software/amazon/lambda/powertools/e2e/model/Product.java 
b/powertools-e2e-tests/handlers/batch/src/main/java/software/amazon/lambda/powertools/e2e/model/Product.java new file mode 100644 index 000000000..74bb5ff9f --- /dev/null +++ b/powertools-e2e-tests/handlers/batch/src/main/java/software/amazon/lambda/powertools/e2e/model/Product.java @@ -0,0 +1,56 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + * + */ + +package software.amazon.lambda.powertools.e2e.model; + +public class Product { + private long id; + + private String name; + + private double price; + + public Product() { + } + + public Product(long id, String name, double price) { + this.id = id; + this.name = name; + this.price = price; + } + + public long getId() { + return id; + } + + public void setId(long id) { + this.id = id; + } + + public String getName() { + return name; + } + + public void setName(String name) { + this.name = name; + } + + public double getPrice() { + return price; + } + + public void setPrice(double price) { + this.price = price; + } +} diff --git a/powertools-e2e-tests/handlers/batch/src/main/resources/log4j2.xml b/powertools-e2e-tests/handlers/batch/src/main/resources/log4j2.xml new file mode 100644 index 000000000..8925f70b9 --- /dev/null +++ b/powertools-e2e-tests/handlers/batch/src/main/resources/log4j2.xml @@ -0,0 +1,16 @@ + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/powertools-e2e-tests/handlers/pom.xml b/powertools-e2e-tests/handlers/pom.xml 
index 4dd8cbb45..6e82c7aec 100644 --- a/powertools-e2e-tests/handlers/pom.xml +++ b/powertools-e2e-tests/handlers/pom.xml @@ -16,6 +16,7 @@ 1.8 1.2.2 + <lambda.java.serialization>1.1.2</lambda.java.serialization> 3.11.2 3.5.0 1.13.1 @@ -72,6 +73,11 @@ powertools-large-messages ${lambda.powertools.version} + <dependency> + <groupId>software.amazon.lambda</groupId> + <artifactId>powertools-batch</artifactId> + <version>${lambda.powertools.version}</version> + </dependency> com.amazonaws aws-lambda-java-core @@ -82,6 +88,11 @@ aws-lambda-java-events ${lambda.java.events} + <dependency> + <groupId>com.amazonaws</groupId> + <artifactId>aws-lambda-java-serialization</artifactId> + <version>${lambda.java.serialization}</version> + </dependency> org.apache.logging.log4j log4j-slf4j2-impl diff --git a/powertools-e2e-tests/pom.xml b/powertools-e2e-tests/pom.xml index 2c802edc3..f3194c163 100644 --- a/powertools-e2e-tests/pom.xml +++ b/powertools-e2e-tests/pom.xml @@ -57,7 +57,12 @@ ${aws.sdk.version} test - + <dependency> + <groupId>software.amazon.awssdk</groupId> + <artifactId>kinesis</artifactId> + <version>${aws.sdk.version}</version> + <scope>test</scope> + </dependency> software.amazon.awssdk cloudwatch diff --git a/powertools-e2e-tests/src/test/java/software/amazon/lambda/powertools/BatchE2ET.java b/powertools-e2e-tests/src/test/java/software/amazon/lambda/powertools/BatchE2ET.java new file mode 100644 index 000000000..c5f74594d --- /dev/null +++ b/powertools-e2e-tests/src/test/java/software/amazon/lambda/powertools/BatchE2ET.java @@ -0,0 +1,277 @@ +/* + * Copyright 2023 Amazon.com, Inc. or its affiliates. + * Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0 + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ * + */ + +package software.amazon.lambda.powertools; + +import static org.assertj.core.api.Assertions.assertThat; +import static software.amazon.lambda.powertools.testutils.Infrastructure.FUNCTION_NAME_OUTPUT; + +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.UUID; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; +import org.junit.jupiter.api.AfterAll; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeAll; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.Timeout; +import software.amazon.awssdk.core.SdkBytes; +import software.amazon.awssdk.http.SdkHttpClient; +import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient; +import software.amazon.awssdk.regions.Region; +import software.amazon.awssdk.services.dynamodb.DynamoDbClient; +import software.amazon.awssdk.services.dynamodb.model.AttributeValue; +import software.amazon.awssdk.services.dynamodb.model.DeleteItemRequest; +import software.amazon.awssdk.services.dynamodb.model.PutItemRequest; +import software.amazon.awssdk.services.dynamodb.model.ScanRequest; +import software.amazon.awssdk.services.dynamodb.model.ScanResponse; +import software.amazon.awssdk.services.kinesis.KinesisClient; +import software.amazon.awssdk.services.kinesis.model.PutRecordsRequest; +import software.amazon.awssdk.services.kinesis.model.PutRecordsRequestEntry; +import software.amazon.awssdk.services.kinesis.model.PutRecordsResponse; +import software.amazon.awssdk.services.sqs.SqsClient; +import software.amazon.awssdk.services.sqs.model.SendMessageBatchRequest; +import software.amazon.awssdk.services.sqs.model.SendMessageBatchRequestEntry; +import software.amazon.lambda.powertools.testutils.Infrastructure; +import software.amazon.lambda.powertools.utilities.JsonConfig; + 
+public class BatchE2ET { + private static final SdkHttpClient httpClient = UrlConnectionHttpClient.builder().build(); + private static final Region region = Region.of(System.getProperty("AWS_DEFAULT_REGION", "eu-west-1")); + private static Infrastructure infrastructure; + private static String functionName; + private static String queueUrl; + private static String kinesisStreamName; + + private static ObjectMapper objectMapper; + private static String outputTable; + private static DynamoDbClient ddbClient; + private static SqsClient sqsClient; + private static KinesisClient kinesisClient; + private static String ddbStreamsTestTable; + private final List<Product> testProducts; + + public BatchE2ET() { + testProducts = Arrays.asList( + new Product(1, "product1", 1.23), + new Product(2, "product2", 4.56), + new Product(3, "product3", 6.78) + ); + } + + @BeforeAll + @Timeout(value = 5, unit = TimeUnit.MINUTES) + public static void setup() { + String random = UUID.randomUUID().toString().substring(0, 6); + String queueName = "batchqueue" + random; + kinesisStreamName = "batchstream" + random; + ddbStreamsTestTable = "ddbstreams" + random; + + objectMapper = JsonConfig.get().getObjectMapper(); + + infrastructure = Infrastructure.builder() + .testName(BatchE2ET.class.getSimpleName()) + .pathToFunction("batch") + .queue(queueName) + .ddbStreamsTableName(ddbStreamsTestTable) + .kinesisStream(kinesisStreamName) + .build(); + + Map<String, String> outputs = infrastructure.deploy(); + functionName = outputs.get(FUNCTION_NAME_OUTPUT); + queueUrl = outputs.get("QueueURL"); + kinesisStreamName = outputs.get("KinesisStreamName"); + outputTable = outputs.get("TableNameForAsyncTests"); + ddbStreamsTestTable = outputs.get("DdbStreamsTestTable"); + + ddbClient = DynamoDbClient.builder() + .region(region) + .httpClient(httpClient) + .build(); + + // GIVEN + sqsClient = SqsClient.builder() + .httpClient(httpClient) + .region(region) + .build(); + kinesisClient = KinesisClient.builder() + .httpClient(httpClient) + .region(region) + .build(); + } + + @AfterAll + public static void tearDown() { + if (infrastructure != null) { + infrastructure.destroy(); + } + } + + @AfterEach + public void cleanUpTest() { + // Delete everything in the output table + ScanResponse items = ddbClient.scan(ScanRequest.builder() + .tableName(outputTable) + .build()); + + for (Map<String, AttributeValue> item : items.items()) { + HashMap<String, AttributeValue> key = new HashMap<String, AttributeValue>() { + { + put("functionName", AttributeValue.builder() + .s(item.get("functionName").s()) + .build()); + put("id", AttributeValue.builder() + .s(item.get("id").s()) + .build()); + } + }; + + ddbClient.deleteItem(DeleteItemRequest.builder() + .tableName(outputTable) + .key(key) + .build()); + } + } + + @Test + public void sqsBatchProcessingSucceeds() throws InterruptedException { + List<SendMessageBatchRequestEntry> entries = testProducts.stream() + .map(p -> { + try { + return SendMessageBatchRequestEntry.builder() + .id(p.getName()) + .messageBody(objectMapper.writeValueAsString(p)) + .build(); + } catch (JsonProcessingException e) { + throw new RuntimeException(e); + } + }) + .collect(Collectors.toList()); + + // WHEN + sqsClient.sendMessageBatch(SendMessageBatchRequest.builder() + .entries(entries) + .queueUrl(queueUrl) + .build()); + Thread.sleep(30000); // wait for function to be executed + + // THEN + ScanResponse items = ddbClient.scan(ScanRequest.builder() + .tableName(outputTable) + .build()); + validateAllItemsHandled(items); + } + + @Test + public void kinesisBatchProcessingSucceeds() throws InterruptedException { + List<PutRecordsRequestEntry> entries = testProducts.stream() + .map(p -> { + try { + return PutRecordsRequestEntry.builder() + .partitionKey("1") + .data(SdkBytes.fromUtf8String(objectMapper.writeValueAsString(p))) + .build(); + } catch (JsonProcessingException e) { + throw new RuntimeException(e); + } + }) + .collect(Collectors.toList()); + + // WHEN + PutRecordsResponse result = kinesisClient.putRecords(PutRecordsRequest.builder() + .streamName(kinesisStreamName) + .records(entries) + .build()); + Thread.sleep(30000); // wait for function to be executed + + // THEN + ScanResponse items = ddbClient.scan(ScanRequest.builder() + .tableName(outputTable) + .build()); + validateAllItemsHandled(items); + } + + @Test + public void ddbStreamsBatchProcessingSucceeds() throws InterruptedException { + // GIVEN + String theId = "my-test-id"; + + // WHEN + ddbClient.putItem(PutItemRequest.builder() + .tableName(ddbStreamsTestTable) + .item(new HashMap<String, AttributeValue>() { + { + put("id", AttributeValue.builder() + .s(theId) + .build()); + } + }) + .build()); + Thread.sleep(90000); // wait for function to be executed + + // THEN + ScanResponse items = ddbClient.scan(ScanRequest.builder() + .tableName(outputTable) + .build()); + + assertThat(items.count()).isEqualTo(1); + assertThat(items.items().get(0).get("id").s()).isEqualTo(theId); + } + + private void validateAllItemsHandled(ScanResponse items) { + for (Product p : testProducts) { + boolean foundIt = false; + for (Map<String, AttributeValue> a : items.items()) { + if (a.get("id").s().equals(Long.toString(p.id))) { + foundIt = true; + } + } + assertThat(foundIt).isTrue(); + } + } + + class Product { + private long id; + + private String name; + + private double price; + + public Product() { + } + + public Product(long id, String name, double price) { + this.id = id; + this.name = name; + this.price = price; + } + + public long getId() { + return id; + } + + public String getName() { + return name; + } + + public double getPrice() { + return price; + } + } +} diff --git a/powertools-e2e-tests/src/test/java/software/amazon/lambda/powertools/testutils/Infrastructure.java b/powertools-e2e-tests/src/test/java/software/amazon/lambda/powertools/testutils/Infrastructure.java index 996f49bd4..2a1af093c 100644 --- a/powertools-e2e-tests/src/test/java/software/amazon/lambda/powertools/testutils/Infrastructure.java +++ b/powertools-e2e-tests/src/test/java/software/amazon/lambda/powertools/testutils/Infrastructure.java @@ -45,14 +45,16 @@
import software.amazon.awscdk.services.appconfig.CfnDeploymentStrategy; import software.amazon.awscdk.services.appconfig.CfnEnvironment; import software.amazon.awscdk.services.appconfig.CfnHostedConfigurationVersion; -import software.amazon.awscdk.services.dynamodb.Attribute; -import software.amazon.awscdk.services.dynamodb.AttributeType; -import software.amazon.awscdk.services.dynamodb.BillingMode; -import software.amazon.awscdk.services.dynamodb.Table; +import software.amazon.awscdk.services.dynamodb.*; import software.amazon.awscdk.services.iam.PolicyStatement; +import software.amazon.awscdk.services.kinesis.Stream; +import software.amazon.awscdk.services.kinesis.StreamMode; import software.amazon.awscdk.services.lambda.Code; import software.amazon.awscdk.services.lambda.Function; +import software.amazon.awscdk.services.lambda.StartingPosition; import software.amazon.awscdk.services.lambda.Tracing; +import software.amazon.awscdk.services.lambda.eventsources.DynamoEventSource; +import software.amazon.awscdk.services.lambda.eventsources.KinesisEventSource; import software.amazon.awscdk.services.lambda.eventsources.SqsEventSource; import software.amazon.awscdk.services.logs.LogGroup; import software.amazon.awscdk.services.logs.RetentionDays; @@ -110,8 +112,9 @@ public class Infrastructure { private final AppConfig appConfig; private final SdkHttpClient httpClient; private final String queue; + private final String kinesisStream; private final String largeMessagesBucket; - + private String ddbStreamsTableName; private String functionName; private Object cfnTemplate; private String cfnAssetDirectory; @@ -126,7 +129,9 @@ private Infrastructure(Builder builder) { this.idempotencyTable = builder.idemPotencyTable; this.appConfig = builder.appConfig; this.queue = builder.queue; + this.kinesisStream = builder.kinesisStream; this.largeMessagesBucket = builder.largeMessagesBucket; + this.ddbStreamsTableName = builder.ddbStreamsTableName; this.app = new App(); this.stack = 
createStackWithLambda(); @@ -279,7 +284,12 @@ private Stack createStackWithLambda() { .maxReceiveCount(1) // do not retry in case of error .build(); sqsQueue.grantConsumeMessages(function); - SqsEventSource sqsEventSource = SqsEventSource.Builder.create(sqsQueue).enabled(true).batchSize(1).build(); + SqsEventSource sqsEventSource = SqsEventSource.Builder + .create(sqsQueue) + .enabled(true) + .reportBatchItemFailures(true) + .batchSize(1) + .build(); function.addEventSource(sqsEventSource); CfnOutput.Builder .create(stack, "QueueURL") @@ -287,6 +297,46 @@ private Stack createStackWithLambda() { .build(); createTableForAsyncTests = true; } + if (!StringUtils.isEmpty(kinesisStream)) { + Stream stream = Stream.Builder + .create(stack, "KinesisStream") + .streamMode(StreamMode.ON_DEMAND) + .streamName(kinesisStream) + .build(); + + stream.grantRead(function); + KinesisEventSource kinesisEventSource = KinesisEventSource.Builder + .create(stream) + .enabled(true) + .batchSize(3) + .reportBatchItemFailures(true) + .startingPosition(StartingPosition.TRIM_HORIZON) + .maxBatchingWindow(Duration.seconds(1)) + .build(); + function.addEventSource(kinesisEventSource); + CfnOutput.Builder + .create(stack, "KinesisStreamName") + .value(stream.getStreamName()) + .build(); + } + + if (!StringUtils.isEmpty(ddbStreamsTableName)) { + Table ddbStreamsTable = Table.Builder.create(stack, "DDBStreamsTable") + .tableName(ddbStreamsTableName) + .stream(StreamViewType.KEYS_ONLY) + .removalPolicy(RemovalPolicy.DESTROY) + .partitionKey(Attribute.builder().name("id").type(AttributeType.STRING).build()) + .build(); + + DynamoEventSource ddbEventSource = DynamoEventSource.Builder.create(ddbStreamsTable) + .batchSize(1) + .startingPosition(StartingPosition.TRIM_HORIZON) + .maxBatchingWindow(Duration.seconds(1)) + .reportBatchItemFailures(true) + .build(); + function.addEventSource(ddbEventSource); + CfnOutput.Builder.create(stack, 
"DdbStreamsTestTable").value(ddbStreamsTable.getTableName()).build(); + } if (!StringUtils.isEmpty(largeMessagesBucket)) { Bucket offloadBucket = Bucket.Builder @@ -451,6 +501,8 @@ public static class Builder { private Map environmentVariables = new HashMap<>(); private String idemPotencyTable; private String queue; + private String kinesisStream; + private String ddbStreamsTableName; private Builder() { getJavaRuntime(); @@ -526,6 +578,16 @@ public Builder queue(String queue) { return this; } + public Builder kinesisStream(String stream) { + this.kinesisStream = stream; + return this; + } + + public Builder ddbStreamsTableName(String tableName) { + this.ddbStreamsTableName = tableName; + return this; + } + public Builder largeMessagesBucket(String largeMessagesBucket) { this.largeMessagesBucket = largeMessagesBucket; return this; diff --git a/powertools-serialization/src/main/java/software/amazon/lambda/powertools/utilities/EventDeserializer.java b/powertools-serialization/src/main/java/software/amazon/lambda/powertools/utilities/EventDeserializer.java index 22712e8ce..13ad4d28f 100644 --- a/powertools-serialization/src/main/java/software/amazon/lambda/powertools/utilities/EventDeserializer.java +++ b/powertools-serialization/src/main/java/software/amazon/lambda/powertools/utilities/EventDeserializer.java @@ -33,7 +33,9 @@ import com.amazonaws.services.lambda.runtime.events.SNSEvent; import com.amazonaws.services.lambda.runtime.events.SQSEvent; import com.amazonaws.services.lambda.runtime.events.ScheduledEvent; +import com.fasterxml.jackson.core.JsonParser; import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.ObjectReader; import java.io.IOException; import java.util.List; @@ -96,6 +98,8 @@ public static EventPart extractDataFrom(Object object) { return new EventPart(event.getRecords().stream() .map(SQSEvent.SQSMessage::getBody) .collect(Collectors.toList())); + } 
else if (object instanceof SQSEvent.SQSMessage) { + return new EventPart(((SQSEvent.SQSMessage) object).getBody()); } else if (object instanceof ScheduledEvent) { ScheduledEvent event = (ScheduledEvent) object; return new EventPart(event.getDetail()); @@ -113,6 +117,8 @@ public static EventPart extractDataFrom(Object object) { return new EventPart(event.getRecords().stream() .map(r -> decode(r.getKinesis().getData())) .collect(Collectors.toList())); + } else if (object instanceof KinesisEvent.KinesisEventRecord) { + return new EventPart(decode(((KinesisEvent.KinesisEventRecord)object).getKinesis().getData())); } else if (object instanceof KinesisFirehoseEvent) { KinesisFirehoseEvent event = (KinesisFirehoseEvent) object; return new EventPart(event.getRecords().stream() @@ -214,6 +220,17 @@ public T as(Class clazz) { } } + public M as() { + TypeReference typeRef = new TypeReference() {}; + + try { + JsonParser parser = JsonConfig.get().getObjectMapper().createParser(content); + return JsonConfig.get().getObjectMapper().reader().readValue(parser, typeRef); + } catch (IOException e) { + throw new EventDeserializationException("Cannot load the event as " + typeRef, e); + } + }; + /** * Deserialize this part of event from JSON to a list of objects of type T * diff --git a/powertools-sqs/src/main/java/software/amazon/lambda/powertools/sqs/SqsBatch.java b/powertools-sqs/src/main/java/software/amazon/lambda/powertools/sqs/SqsBatch.java index d0ffe6a73..4378fa707 100644 --- a/powertools-sqs/src/main/java/software/amazon/lambda/powertools/sqs/SqsBatch.java +++ b/powertools-sqs/src/main/java/software/amazon/lambda/powertools/sqs/SqsBatch.java @@ -23,6 +23,10 @@ import java.lang.annotation.Target; /** + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + * * {@link SqsBatch} is used to process batch messages in {@link SQSEvent} * *

@@ -87,6 +91,7 @@ */ @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD) +@Deprecated public @interface SqsBatch { Class<? extends SqsMessageHandler<Object>> value(); diff --git a/powertools-sqs/src/main/java/software/amazon/lambda/powertools/sqs/SqsUtils.java b/powertools-sqs/src/main/java/software/amazon/lambda/powertools/sqs/SqsUtils.java index c838180fd..1f00edf17 100644 --- a/powertools-sqs/src/main/java/software/amazon/lambda/powertools/sqs/SqsUtils.java +++ b/powertools-sqs/src/main/java/software/amazon/lambda/powertools/sqs/SqsUtils.java @@ -119,6 +119,11 @@ public static void overrideS3Client(S3Client s3Client) { } /** + * + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + * * This utility method is used to process each {@link SQSMessage} inside the received {@link SQSEvent} * *

@@ -146,12 +151,17 @@ public static void overrideS3Client(S3Client s3Client) { * @return List of values returned by {@link SqsMessageHandler#process(SQSMessage)} while processing each message. * @throws SQSBatchProcessingException if some messages fail during processing. */ + @Deprecated public static <R> List<R> batchProcessor(final SQSEvent event, final Class<? extends SqsMessageHandler<R>> handler) { return batchProcessor(event, false, handler); } /** + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + * * This utility method is used to process each {@link SQSMessage} inside the received {@link SQSEvent} * *

@@ -200,6 +210,7 @@ public static <R> List<R> batchProcessor(final SQSEvent event, * @see Amazon SQS dead-letter queues */ @SafeVarargs + @Deprecated public static <R> List<R> batchProcessor(final SQSEvent event, final Class<? extends SqsMessageHandler<R>> handler, final Class<? extends Exception>... nonRetryableExceptions) { @@ -207,6 +218,10 @@ public static <R> List<R> batchProcessor(final SQSEvent event, } /** + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + * * This utility method is used to process each {@link SQSMessage} inside the received {@link SQSEvent} * *

@@ -232,6 +247,7 @@ public static <R> List<R> batchProcessor(final SQSEvent event, * @return List of values returned by {@link SqsMessageHandler#process(SQSMessage)} while processing each message. * @throws SQSBatchProcessingException if some messages fail during processing and suppression is not enabled. */ + @Deprecated public static <R> List<R> batchProcessor(final SQSEvent event, final boolean suppressException, final Class<? extends SqsMessageHandler<R>> handler) { @@ -241,6 +257,10 @@ public static <R> List<R> batchProcessor(final SQSEvent event, } /** + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + * * This utility method is used to process each {@link SQSMessage} inside the received {@link SQSEvent} * *

@@ -291,6 +311,7 @@ public static <R> List<R> batchProcessor(final SQSEvent event, * @see Amazon SQS dead-letter queues */ @SafeVarargs + @Deprecated public static <R> List<R> batchProcessor(final SQSEvent event, final boolean suppressException, final Class<? extends SqsMessageHandler<R>> handler, @@ -301,6 +322,10 @@ public static <R> List<R> batchProcessor(final SQSEvent event, } /** + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + * * This utility method is used to process each {@link SQSMessage} inside the received {@link SQSEvent} * *

@@ -355,6 +380,7 @@ public static <R> List<R> batchProcessor(final SQSEvent event, * @see Amazon SQS dead-letter queues */ @SafeVarargs + @Deprecated public static <R> List<R> batchProcessor(final SQSEvent event, final boolean suppressException, final Class<? extends SqsMessageHandler<R>> handler, @@ -367,6 +393,10 @@ public static <R> List<R> batchProcessor(final SQSEvent event, } /** + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + * * This utility method is used to process each {@link SQSMessage} inside the received {@link SQSEvent} * *

@@ -394,6 +424,7 @@ public static <R> List<R> batchProcessor(final SQSEvent event, * @return List of values returned by {@link SqsMessageHandler#process(SQSMessage)} while processing each message. * @throws SQSBatchProcessingException if some messages fail during processing. */ + @Deprecated public static <R> List<R> batchProcessor(final SQSEvent event, final SqsMessageHandler<R> handler) { return batchProcessor(event, false, handler); @@ -401,6 +432,10 @@ public static <R> List<R> batchProcessor(final SQSEvent event, /** + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + * * This utility method is used to process each {@link SQSMessage} inside the received {@link SQSEvent} * *

@@ -450,6 +485,7 @@ public static <R> List<R> batchProcessor(final SQSEvent event, * @see Amazon SQS dead-letter queues */ @SafeVarargs + @Deprecated public static <R> List<R> batchProcessor(final SQSEvent event, final SqsMessageHandler<R> handler, final Class<? extends Exception>... nonRetryableExceptions) { @@ -458,6 +494,10 @@ public static <R> List<R> batchProcessor(final SQSEvent event, /** + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + * * This utility method is used to process each {@link SQSMessage} inside the received {@link SQSEvent} * *

@@ -484,6 +524,7 @@ public static <R> List<R> batchProcessor(final SQSEvent event, * @return List of values returned by {@link SqsMessageHandler#process(SQSMessage)} while processing each message. * @throws SQSBatchProcessingException if some messages fail during processing and suppression is not enabled. */ + @Deprecated public static <R> List<R> batchProcessor(final SQSEvent event, final boolean suppressException, final SqsMessageHandler<R> handler) { @@ -491,7 +532,13 @@ public static <R> List<R> batchProcessor(final SQSEvent event, } + /** + * @deprecated + * @see software.amazon.lambda.powertools.batch in powertools-batch module. + * Will be removed in V2. + */ @SafeVarargs + @Deprecated public static <R> List<R> batchProcessor(final SQSEvent event, final boolean suppressException, final SqsMessageHandler<R> handler,
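The deprecated SqsUtils methods above are superseded by the powertools-batch module introduced in this patch, whose core contract is Lambda's partial batch response (the `reportBatchItemFailures(true)` flag enabled on the CDK event sources earlier in the diff): run the handler over every record, collect the IDs of the ones that fail, and return them so only those records are retried. Stripped of all AWS types, that loop can be sketched in plain Java; `BatchSketch` and `Message` are illustrative names for this sketch, not part of the module's API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal, dependency-free sketch of the "report batch item failures" pattern
// that powertools-batch implements for SQS, Kinesis, and DynamoDB Streams.
// BatchSketch and Message are hypothetical names, not the module's API.
public class BatchSketch {

    // Stand-in for a batch record: just an ID and a body.
    public static class Message {
        public final String id;
        public final String body;

        public Message(String id, String body) {
            this.id = id;
            this.body = body;
        }
    }

    // Run the handler over every record. A failing record does not abort the
    // batch; its ID is collected so only that record is redelivered.
    public static List<String> processBatch(List<Message> batch, Consumer<Message> handler) {
        List<String> failedIds = new ArrayList<>();
        for (Message m : batch) {
            try {
                handler.accept(m);
            } catch (RuntimeException e) {
                failedIds.add(m.id);
            }
        }
        return failedIds;
    }

    public static void main(String[] args) {
        List<Message> batch = List.of(
                new Message("1", "ok"),
                new Message("2", "boom"),
                new Message("3", "ok"));
        List<String> failed = processBatch(batch, m -> {
            if (m.body.equals("boom")) {
                throw new RuntimeException("cannot process " + m.id);
            }
        });
        System.out.println(failed); // only the failing record's ID is reported
    }
}
```

The actual module wraps the same idea in event-specific response types (an SQSBatchResponse or streams response built from the failed records) rather than returning a bare list of IDs.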