
core: add global logRetentionDays setting #23909

Open
1 of 2 tasks
fargito opened this issue Jan 30, 2023 · 7 comments
Labels
@aws-cdk/core Related to core CDK functionality effort/medium Medium work item – several days of effort feature-request A feature should be added or improved. p2

Comments

@fargito

fargito commented Jan 30, 2023

Describe the feature

Add a global logRetentionDays context setting in cdk.json and make it the default for all log retention settings.
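For illustration, the proposed setting might look like this in cdk.json (the logRetentionDays context key is hypothetical and does not exist today; the app command is just an example):

```json
{
  "app": "npx ts-node bin/app.ts",
  "context": {
    "logRetentionDays": 30
  }
}
```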

Use Case

Hello,

Setting the retention period on all resources is quite frustrating.

Moreover, the value is often identical between all resources in the same stack, so it would be awesome to be able to define it in a centralized place.

Also, since the default is to never expire logs, having this setting could have a big impact in reducing the number of useless logs.

WDYT?

Proposed Solution

No response

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

CDK version used

2.62.2

Environment details (OS name and version, etc.)

Ubuntu 22.04

@fargito fargito added feature-request A feature should be added or improved. needs-triage This issue or PR still needs to be triaged. labels Jan 30, 2023
@github-actions github-actions bot added the @aws-cdk/core Related to core CDK functionality label Jan 30, 2023
@peterwoodworth
Contributor

Have you looked into aspects? Aspects are a generic way you could implement something like this in your own CDK app.

I'm not sure this particular request would be easy to implement, however: some of our log retention implementations use custom resources. Additionally, I'm not sure we'd want to implement this feature, given the precedent it could set and the maintenance overhead it would create. I can leave this open to discuss workarounds and to gather community feedback on whether this is something we should do.

@peterwoodworth peterwoodworth added p2 effort/medium Medium work item – several days of effort and removed needs-triage This issue or PR still needs to be triaged. labels Feb 7, 2023
@fargito
Author

fargito commented Feb 7, 2023

Thanks for your response! I understand this constraint, I will check if aspects can help with my use case.

@anentropic

I have a similar requirement, but mine is more a frustration with resources that are apparently out of my control, which create log groups with a "never expire" retention policy.

For example I find there are a bunch of "never expire" log groups like:

/aws/lambda/MyStack-CustomVpcRestrictDefaultSGCustomRes-KeArPuyYcxtf
/aws/lambda/MyStack-CustomVpcRestrictDefaultSGCustomRes-VeSnkFXYTboC
...

...presumably one per deployment, as mentioned in this related issue #13250

CustomVpcRestrictDefaultSGCustomRes is generated by something in CDK under the hood. I can see this resource in the generated template.json after synth, but have been unable to find any way to target it via Aspects.

Also note how the log group name changes each time, making it impossible(?) to target with an override via the LogRetention construct, as some people are doing in the issue above.

e.g. I made this aspect:

import jsii
from aws_cdk import Annotations, IAspect
from constructs import IConstruct, MetadataEntry

def _get_logical_id(metadata: list[MetadataEntry]) -> str | None:
    for entry in metadata:
        if entry.type == "aws:cdk:logicalId":
            return entry.data

@jsii.implements(IAspect)
class DebugAspect:
    def visit(self, con: IConstruct):
        logical_id = _get_logical_id(con.node.metadata)
        Annotations.of(con).add_error(f"{con.node.id} | {con.__class__.__name__} | {con.node.path} | {logical_id}")

...and then in my tests I do:

import aws_cdk as cdk
from aws_cdk.assertions import Annotations, Match

def test_log_retention(stack: Website):
    cdk.Aspects.of(stack).add(DebugAspect())  # type: ignore
    annotations = Annotations.from_stack(stack)
    errors = [msg.entry.data for msg in annotations.find_error("*", Match.any_value())]
    breakpoint()

...as a clunky way of enumerating everything that is addressable via aspects. But CustomVpcRestrictDefaultSGCustomRes does not appear (and nor do any LogGroups).

Other problematic items I found are:

  • The logs for RDS DatabaseSecret hosted secret rotation Lambda are also created as "never expire". The constructs and methods involved, e.g. rds_instance.add_rotation_single_user, do not provide any way to specify log retention policy.
  • RDS Proxy debug logs are also created as "never expire". Again, the DatabaseProxy construct or rds_instance.add_proxy method provide only a debug_logging boolean arg with no way to control the retention policy.

So, given that it's somewhat haphazard whether constructs give any control over log retention, and that other resources are created which are unaddressable, it would be useful to have some sort of global control over the log retention policy.

@anentropic

Additionally...

I would like to be able to write assertions in tests that resources have an appropriate log retention policy set.

When I define my own Lambda and set the log retention then:

  • the log retention does not show up in Properties of the Lambda
  • there is no AWS::Logs::LogGroup resource that I can target
  • there is a Custom::LogRetention resource I can reach via template.find_resources but it only has a very indirect relation back to the Lambda that created it, via logretention['Properties']['LogGroupName']['Fn::Join'] == ['', ['/aws/lambda/', {'Ref': 'MyLambda59B4AED0'}]]
  • I can't access the similar custom logretention resources that are created for RDS secret rotation lambdas, only for the lambda that I define myself

this all seems so half-baked

@comcalvi
Contributor

comcalvi commented Jul 17, 2024

cdk.json is not the right place to add this configuration; it should only be used to configure CLI behavior. The most global place for the setting would be App, but that's not very flexible. I think it makes more sense to do something along the lines of what @pahud has mentioned here: #30800. I haven't validated the exact API yet, but I think that idea is on the right track.

@comcalvi
Contributor

comcalvi commented Sep 9, 2024

@fargito @anentropic we've implemented CustomResourceConfig, which allows you to set the LogRetention of every CDK-vended custom resource. Can you try it out and see if it addresses most of the excess logs that you're seeing? If not, what other sources of these logs are you encountering?

@anentropic

anentropic commented Sep 17, 2024

@comcalvi

great news!

However, I added this to my app and redeployed:

import aws_cdk as cdk
from aws_cdk import aws_logs as logs
from aws_cdk.custom_resources import CustomResourceConfig

app = cdk.App()
CustomResourceConfig.of(app).add_log_retention_lifetime(logs.RetentionDays.ONE_MONTH)

but it seems to have had no effect, and looking at the cdk-deploy-change-set for the deploy in CloudFormation I can't see anything relating to log retention.

specifically I have the following log groups still set to "Never expire":

/aws/lambda/my-stack-CustomS3AutoDeleteObjectsCu-Kd5FivMDiZ9e
/aws/lambda/mystackDatabaseMySQLRotationSingleUser163EDC26
/aws/rds/proxy/mystackdatabasemysqlrdsproxyv197d76dea

(I realise the RDS Proxy one wasn't expected to be affected, but it's frustrating I have no way to configure it)

...and this one set to "1 day":

/aws/lambda/my-stack-LogRetentionaae0aa3c5b4d4f8-cAUUtfZ6Ld7W

...the latter I don't mind at all, but presumably it should have been affected by the CustomResourceConfig too?
