
Error checking prior to file writes #678

Closed · brandtkeller opened this issue Sep 23, 2024 · 2 comments · Fixed by #687

@brandtkeller (Member)

Steps to reproduce

Need to determine steps to reproduce

Expected result

If an error has left an OSCAL model unavailable or malformed prior to a file write, we should surface that error and/or handle the write accordingly.

Actual Result

Occasional errors in a workflow can produce a properly named file with empty contents. That empty file then causes additional errors on follow-on write operations that attempt a merge.

Visual Proof (screenshots, videos, text, etc.)

N/A

Severity/Priority

Low - likely edge case driven

Additional Context

N/A

@brandtkeller added the possible-bug (Something may not be working) label on Sep 23, 2024
@github-actions bot added the triage (Awaiting triage from the team) label on Sep 23, 2024
@brandtkeller added the bug (Something isn't working) label and removed the possible-bug and triage labels on Sep 23, 2024
@brandtkeller added this to the v0.8.0 milestone on Sep 23, 2024
@meganwolf0 self-assigned this on Sep 24, 2024
@meganwolf0 (Collaborator)

Talked to Richard about how this occurred, after being unable to reproduce it myself - they had created an empty assessment-results.yaml, so when the WriteOscalModel function tried to ingest the existing model, it was choking. I think there are a couple of avenues to address this, but wanted to open it up for feedback:

  1. Better error messaging - I think it's currently unclear what exactly is happening/why the model can't be written to the existing file.
  2. If an invalid OSCAL model file is provided, just write to something like "filename copy.yaml" instead. The main issue I see here is with successive write operations: if you don't catch that assessment-results.yaml is invalid, the first run writes to "assessment-results copy.yaml", and the next go-around would overwrite that copy - unless you add some kind of recursion logic to write to the first valid "assessment-results <whatever>". It feels like this could introduce a lot of weirdness, so I'm not a huge fan of this approach, but did want to add the option.
  3. Something else?

Also, even though it wasn't the cause here, we could still add a routine to validate the OSCAL before dumping it - a failsafe so that if invalid OSCAL were created for some reason, it never actually gets written. I'm torn on this: it's good as an extra check, but also not super great, since we shouldn't be creating invalid OSCAL anywhere in the process and should be catching that with other testing. I also wasn't sure what to think about implementing it because it will obviously add some overhead. (There's a nuance here between OSCAL that is valid with respect to the Go structures and OSCAL that is valid as the jsonschema sees it - those aren't really the same, so this would be a jsonschema validation.)
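For illustration, a failsafe of that shape might look like the sketch below - assuming the github.com/xeipuuv/gojsonschema library and a local copy of the OSCAL JSON schema; validateBeforeWrite, the schema path handling, and the model parameter are hypothetical, not the project's actual API:

```go
package oscal

import (
	"fmt"

	"github.com/xeipuuv/gojsonschema"
)

// validateBeforeWrite is a hypothetical failsafe: serialize the in-memory
// model and check it against the OSCAL JSON schema before any file write.
func validateBeforeWrite(model any, schemaPath string) error {
	schemaLoader := gojsonschema.NewReferenceLoader("file://" + schemaPath)
	// NewGoLoader marshals the Go struct to JSON for validation - this is the
	// "valid per jsonschema" check, distinct from "fits the Go structures".
	documentLoader := gojsonschema.NewGoLoader(model)

	result, err := gojsonschema.Validate(schemaLoader, documentLoader)
	if err != nil {
		return fmt.Errorf("schema validation could not run: %w", err)
	}
	if !result.Valid() {
		for _, desc := range result.Errors() {
			fmt.Printf("- %s\n", desc)
		}
		return fmt.Errorf("refusing to write: model is not valid OSCAL per the schema")
	}
	return nil
}
```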

This is a lot of words to basically ask - what should closure be for this?

@brandtkeller (Member, Author)

Agree with your assessment on option 2 - not the biggest fan of the complexity involved there.

Option 1 should be a goal anytime we have a lens on errors returning and not being useful.

As for our stance, I think we can make a few adjustments that improve the fidelity:

I believe our current stance is still valuable. A default file specification makes sense for clearly outlining the artifact, and the ability to automatically merge is valuable in a few different scenarios.

We should definitely improve error handling - but I don't want to conduct the error handling AFTER we've executed the core of the operation and then lose our evidence and data.

I'd like to propose front-loading the potential artifact if it exists: if an existing artifact is malformed, we return that error before conducting any operation. We could also retain that artifact in memory and avoid having to re-load it later.
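As a rough sketch of that front-loading idea (assuming a hypothetical loadExistingArtifact helper and a placeholder OscalModel type - not the project's actual implementation):

```go
package oscal

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// OscalModel is a placeholder for the real OSCAL model type.
type OscalModel struct {
	AssessmentResults map[string]any `yaml:"assessment-results"`
}

// loadExistingArtifact front-loads the error checking: read and parse any
// existing artifact BEFORE the core operation runs, so a malformed or empty
// file fails fast instead of breaking a later merge. The parsed model is
// returned so it can be retained in memory and reused for that merge.
func loadExistingArtifact(path string) (*OscalModel, error) {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil, nil // no existing artifact - a plain write is safe
	}
	if err != nil {
		return nil, err
	}
	if len(data) == 0 {
		return nil, fmt.Errorf("existing artifact %s is empty; aborting before any write", path)
	}
	var model OscalModel
	if err := yaml.Unmarshal(data, &model); err != nil {
		return nil, fmt.Errorf("existing artifact %s is not a parseable OSCAL model: %w", path, err)
	}
	return &model, nil // retained for the later merge, no re-load needed
}
```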

Thoughts?
