Changelog retention defaults to 90 days after upgrade, all prior changelogs are lost. #14182

Closed · julianstolp opened this issue Nov 3, 2023 · 4 comments · Fixed by #14233

Assignees: abhi1693
Labels: severity: medium (Results in substantial degraded or broken functionality for specific workflows) · status: accepted (This issue has been accepted for implementation) · type: bug (A confirmed report of unexpected behavior in the application)

Comments

@julianstolp

NetBox version

v3.6.4

Python version

3.8

Steps to Reproduce

  1. Set the changelog retention to a value other than 90 days in a new configuration revision.
  2. Perform the upgrade as described in the documentation at docs.netbox.dev.
  3. Log in to your NetBox instance and open /extras/changelog/: the changelog retention has been reset to the default of 90 days and all prior log entries have been deleted (a pruning sketch follows below).
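For context on why resetting the retention value deletes older entries: the retention setting drives the housekeeping pruning of changelog records, so falling back to 90 days causes everything older than that window to be removed. The following is a minimal sketch of that pruning logic, assuming an ObjectChange model with a `time` timestamp field; it is illustrative, not a quote of NetBox's implementation.

```python
# Minimal sketch of retention-based changelog pruning (illustrative only).
from datetime import timedelta

from django.utils import timezone

from extras.models import ObjectChange  # assumed import path


def prune_changelog(retention_days: int) -> int:
    """Delete changelog entries older than the retention window."""
    if not retention_days:
        # A retention of 0 is treated here as "keep forever".
        return 0
    cutoff = timezone.now() - timedelta(days=retention_days)
    deleted, _ = ObjectChange.objects.filter(time__lt=cutoff).delete()
    return deleted
```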

Expected Behavior

The changelog retention setting should not revert to the default of 90 days after an upgrade.

Observed Behavior

Screenshots of /core/config/, /extras/changelog/, and /extras/config-revisions/add/ were attached to the original report.

julianstolp added the type: bug label on Nov 3, 2023
@abhi1693 (Member) commented Nov 3, 2023

I think this is an issue with the config revision defaulting to the first revision after an upgrade. This was also brought up in #14178. I have not tested this myself; it may require some changes to the Redis cache checks to ensure that the active revision is not unset.
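To illustrate the failure mode being described: if the active revision's data lives only in the Redis-backed cache, a flushed or missing cache entry makes every parameter silently fall back to its built-in default. The sketch below is a hedged approximation; the 'config' cache key and fallback logic are assumptions for illustration, not NetBox's exact code.

```python
# Hedged sketch of a config lookup falling back to defaults when the cached
# revision data is missing.
from django.core.cache import cache

DEFAULT_CHANGELOG_RETENTION = 90  # built-in default


def get_changelog_retention() -> int:
    data = cache.get('config')  # active revision data, assumed cache key
    if not data:
        # The cache was flushed (e.g. during an upgrade) and never repopulated
        # from the database, so the custom value silently falls back to 90.
        return DEFAULT_CHANGELOG_RETENTION
    return data.get('CHANGELOG_RETENTION', DEFAULT_CHANGELOG_RETENTION)
```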

abhi1693 added the status: under review and severity: medium labels on Nov 3, 2023
@julianstolp (Author):

@abhi1693 I can confirm that #14178 is the root cause of the behavior I observed.

abhi1693 added the status: needs owner label and removed the status: under review label on Nov 9, 2023
@abhi1693 (Member) commented Nov 9, 2023

The issue is with the clearcache command, which resets the active config revision.
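One way to avoid losing the active revision when flushing the cache would be to re-cache it immediately afterwards. The sketch below is only an illustration of that idea, not the actual change made in #14233; the ConfigRevision import path and the activate() helper are assumptions.

```python
# Hedged sketch of a cache-clearing command that re-caches the configuration
# revision after the flush, so parameters do not revert to their defaults.
from django.core.cache import cache
from django.core.management.base import BaseCommand

from core.models import ConfigRevision  # assumed import path


class Command(BaseCommand):
    help = "Clear the cache without losing the active configuration revision"

    def handle(self, *args, **options):
        cache.clear()
        # The active revision is not tracked in the database, so the most
        # recently created revision is used here as a stand-in.
        revision = ConfigRevision.objects.order_by('-created').first()
        if revision is not None:
            revision.activate()  # assumed helper that re-caches the revision's data
        self.stdout.write(self.style.SUCCESS("Cache cleared; config revision re-cached."))
```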

abhi1693 self-assigned this on Nov 9, 2023
abhi1693 added the status: accepted label and removed the status: needs owner label on Nov 9, 2023
@jeremystretch (Member) commented Nov 9, 2023

Rather than working around the specific cache entry, a more robust solution may be to move the designation of the active config version into the database. We could do this with a boolean column and a constraint limiting the table to a single true value.

I suppose this presents a problem when enabling maintenance mode, however.
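For illustration, the boolean-plus-constraint approach described above could look roughly like the following Django model sketch; the field names and model shape are assumptions, not NetBox's actual schema.

```python
# Illustrative sketch of an "active" flag with a partial unique constraint so
# that at most one revision can be active at a time. Names are assumptions.
from django.db import models
from django.db.models import Q, UniqueConstraint


class ConfigRevision(models.Model):
    created = models.DateTimeField(auto_now_add=True)
    comment = models.CharField(max_length=200, blank=True)
    data = models.JSONField(blank=True, null=True)
    is_active = models.BooleanField(default=False)

    class Meta:
        constraints = [
            # Only rows with is_active=True participate in the constraint,
            # so any number of inactive revisions can coexist.
            UniqueConstraint(
                fields=['is_active'],
                condition=Q(is_active=True),
                name='unique_active_config_revision',
            ),
        ]
```

Activating a different revision would then be a small transaction that clears the flag on the current row and sets it on the new one, which survives cache flushes.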
