feat(batchUpdate): enhance batch update functionality #1483
Conversation
Force-pushed from 61c263d to 1c6501b
For the record, #988 was a previous attempt at something similar. See spinnaker/governance#214 for background.
Force-pushed from 7bbdabd to cb00635
to match the one that Front50CoreConfiguration provides. This paves the way to test additional PipelineController functionality.
…urations to be used for save/update controller mappings
* Add new configuration class PipelineControllerConfig
* Update Front50WebConfig to use PipelineControllerConfig
* Update PipelineController to use PipelineControllerConfig
* Update PipelineControllerSpec to use PipelineControllerConfig
* Update PipelineControllerTck to use PipelineControllerConfig
* add test to check duplicate pipelines when refreshCacheOnDuplicatesCheck flag is enabled and disabled
* refactor SqlStorageService.storeObjects() method to make the bulk save an atomic operation
* without this change, in case of a db exception, some chunks of pipelines get saved while the others fail, leading to inconsistency
* the last catch block is now removed as there is no longer partial storage of the supplied pipelines
* add test for bulk create pipelines which tests the atomic behaviour of the SqlStorageService.storeObjects() method
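The atomicity this commit describes can be shown in miniature. The sketch below is an illustration only, not the actual SqlStorageService code: an in-memory staging buffer stands in for jOOQ's transactional block, so a failure in any chunk discards the whole batch instead of leaving some chunks saved.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch of an atomic bulk store. All chunks are written into a
 * staging buffer (the stand-in for a DB transaction); the batch is committed
 * only if every chunk succeeds, so storage never holds a partial batch.
 */
public class AtomicBatchStore {
  private final List<Map<String, Object>> committed = new ArrayList<>();

  public void storeObjects(List<Map<String, Object>> items, int chunkSize) {
    List<Map<String, Object>> staged = new ArrayList<>(); // "transaction" buffer
    for (int i = 0; i < items.size(); i += chunkSize) {
      for (Map<String, Object> item :
          items.subList(i, Math.min(i + chunkSize, items.size()))) {
        if (item.get("id") == null) {
          // Simulated DB error: staged writes are abandoned (rollback) and we rethrow.
          throw new IllegalStateException("write failed; batch rolled back");
        }
        staged.add(item);
      }
    }
    committed.addAll(staged); // commit only after every chunk succeeded
  }

  public int count() {
    return committed.size();
  }
}
```

With the pre-refactor behavior, chunks written before the failure would have remained saved; here a mid-batch failure leaves the store untouched.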
…or batchUpdate(). checkForDuplicatePipeline() is removed from validatePipeline() and cron trigger validations are moved into validatePipeline() so that reusable code stays in one place. Remove the unused overloaded checkForDuplicatePipeline() method. Fix an NPE caused in a test ("should create pipelines in a thread safe way") in PipelineControllerSpec due to a newly added log message in PipelineController.save().
…to address deserialization issues and add some useful log statements
…troller.batchUpdate
* Check if user has WRITE permissions on the pipeline; if not, the pipeline will be added to the invalid pipelines list
* This change is a first step towards controlling access at pipeline level in a batch update. batchUpdate is still allowed only for admins but in the next few commits, the access level will be equated to that of individual pipeline save.
* Check if a duplicate pipeline exists in the same app
* Validate pipeline id
* Adjust test classes for PipelineController changes
…failed pipelines and their counts
* The response will be in the following format:
  [ "successful_pipelines_count" : <int>, "successful_pipelines" : <List<String>>, "failed_pipelines_count" : <int>, "failed_pipelines" : <List<Map<String, Object>>> ]
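A hypothetical helper showing how a response with that shape might be assembled. The key names mirror the format listed above; the class and method themselves are illustrative, not the controller's actual code.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative builder for the batchUpdate response shape described above. */
public class BatchUpdateResponse {
  public static Map<String, Object> summarize(
      List<String> savedPipelines, List<Map<String, Object>> failedPipelines) {
    Map<String, Object> response = new HashMap<>();
    // Succeeded pipelines are kept to names only; failures carry richer detail.
    response.put("successful_pipelines_count", savedPipelines.size());
    response.put("successful_pipelines", savedPipelines);
    response.put("failed_pipelines_count", failedPipelines.size());
    response.put("failed_pipelines", failedPipelines);
    return response;
  }
}
```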
…ine in the batch already exists and their lastModified timestamps don't match, then the pipeline is stale and hence added to the invalid pipelines list. This behaviour is the same as that of individual save and update operations.
* add test to validate the code around staleCheck for batchUpdate
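The staleness rule described here reduces to a small predicate. This is a sketch under the assumption that lastModified is a nullable Long, as the check skips pipelines without a timestamp; the real controller's handling is more involved.

```java
/**
 * Sketch of the batchUpdate stale check: a submitted pipeline is stale when a
 * pipeline with the same id already exists and the lastModified timestamps of
 * the two differ. Missing timestamps (new pipelines) are never stale.
 */
public class StaleCheck {
  public static boolean isStale(Long existingLastModified, Long submittedLastModified) {
    if (existingLastModified == null || submittedLastModified == null) {
      return false; // nothing to compare against, so the check is skipped
    }
    return !existingLastModified.equals(submittedLastModified);
  }
}
```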
* adjust permissions to batchUpdate (before: isAdmin, now: verifies application write permission)
* enforce runAsUser permissions while deserializing pipelines
* this puts batchUpdate on a par with individual save w.r.t. access restrictions
* adjust test classes according to the changes to the PipelineController
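The per-pipeline permission step could look roughly like this sketch: a set of writable applications stands in for a real permission evaluator (Fiat, in Spinnaker's case), and the errorMsg key is an assumption for illustration, not the controller's exact field.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Illustrative partitioning of a batch by per-application WRITE permission.
 * Pipelines in applications the caller cannot write to are moved to the
 * invalid list with an explanatory message instead of failing the whole batch.
 */
public class BatchPermissionFilter {
  public static Map<String, List<Map<String, Object>>> partition(
      List<Map<String, Object>> pipelines, Set<String> writableApps) {
    List<Map<String, Object>> allowed = new ArrayList<>();
    List<Map<String, Object>> invalid = new ArrayList<>();
    for (Map<String, Object> pipeline : pipelines) {
      if (writableApps.contains(pipeline.get("application"))) {
        allowed.add(pipeline);
      } else {
        Map<String, Object> failed = new HashMap<>(pipeline);
        failed.put(
            "errorMsg",
            "no WRITE permission on application " + pipeline.get("application"));
        invalid.add(failed);
      }
    }
    Map<String, List<Map<String, Object>>> result = new HashMap<>();
    result.put("allowed", allowed);
    result.put("invalid", invalid);
    return result;
  }
}
```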
…oller.validatePipeline
Fixed test exceptions by making the following changes:
- added @EqualsAndHashCode to Pipeline
- added `pipelineDAO.all(true)` in SqlPipelineControllerTck.createPipelineDAO() to initialize the cache with an empty set. Otherwise, the tests fail due to an NPE.
Force-pushed from cb00635 to 894299a
pipelinesToSave.size(),
System.currentTimeMillis() - bulkImportStartTime);

List<String> savedPipelines =
In the failed pipelines, you're storing more info about each pipeline, e.g. application, id, etc. Does it make sense to add some of that information to the saved pipelines as well?
I see where we store error information in failed pipelines, but not other stuff. Can you provide a pointer?
@link108 - Given hundreds/thousands of pipelines, the idea I think is to let the user know more about failed pipelines and keep the info about succeeded ones to minimum. But if adding a couple of fields (like id, application) provides more value, we can do that.
}
}
}
}
} catch (e: Exception) {
I doubt this happens much, but any reason to lose this try/catch here?
Thinking TransientDaoException case where DB is failing or similar...
Short answer: it'd be nice if these were caught & re-raised as Spinnaker exception objects.
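The wrap-and-rethrow the reviewer suggests could be sketched as below. SpinnakerRuntimeException here is a local stand-in for kork's exception hierarchy, an assumption rather than the exact class, and the Runnable parameter simply represents the store operation.

```java
/**
 * Illustrative translation of low-level persistence errors into a
 * Spinnaker-style exception type, preserving the original cause.
 */
public class ExceptionWrapping {
  /** Stand-in for kork's Spinnaker exception hierarchy (assumed name). */
  static class SpinnakerRuntimeException extends RuntimeException {
    SpinnakerRuntimeException(String message, Throwable cause) {
      super(message, cause);
    }
  }

  public static void storeWithTranslation(Runnable storeOperation) {
    try {
      storeOperation.run();
    } catch (RuntimeException e) {
      // Re-raise as a Spinnaker exception; the cause keeps the DB error visible.
      throw new SpinnakerRuntimeException("Failed to store batch of pipelines", e);
    }
  }
}
```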
try {
withPool(poolName) {
jooq.transactional(sqlRetryProperties.transactions) { ctx ->
withPool(poolName) {
With this flip, I think the entire batch fails now vs. only a chunk of it - intentional?
Guess that's partly the point of this PR - just wondering if that's a good thing?
Can ignore this, mostly just curious philosophically which is better :)
front50-web/src/main/java/com/netflix/spinnaker/front50/controllers/PipelineController.java
return pipelineDAO.create(pipeline.getId(), pipeline);
Pipeline savedPipeline = pipelineDAO.create(pipeline.getId(), pipeline);
log.info(
Same nit: probably should be debug level vs. info level.
front50-web/src/main/java/com/netflix/spinnaker/front50/controllers/PipelineController.java
log.debug(
    "Successfully validated pipeline {} in {}ms",
    pipeline.getName(),
    System.currentTimeMillis() - validationStartTime);
Maybe validation time would be better as a metric? A gauge OR counter with tags by app, or pipeline stages or such? NOT required, just a thought.
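The metric idea could be prototyped as below. A map of accumulators stands in for a real tagged registry (Spectator or similar); this is only a sketch of recording validation time keyed by application, not Spinnaker's actual metrics wiring.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Illustrative timer that accumulates validation durations per application,
 * standing in for a registry timer tagged by app.
 */
public class ValidationTimer {
  private final Map<String, LongAdder> totalMillisByApp = new ConcurrentHashMap<>();
  private final Map<String, LongAdder> countByApp = new ConcurrentHashMap<>();

  public void record(String application, long millis) {
    totalMillisByApp.computeIfAbsent(application, a -> new LongAdder()).add(millis);
    countByApp.computeIfAbsent(application, a -> new LongAdder()).increment();
  }

  public long totalMillis(String application) {
    LongAdder total = totalMillisByApp.get(application);
    return total == null ? 0 : total.sum();
  }

  public long count(String application) {
    LongAdder count = countByApp.get(application);
    return count == null ? 0 : count.sum();
  }
}
```

A real implementation would record `System.currentTimeMillis() - validationStartTime` into a registry timer instead of logging it.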
List<Pipeline> pipelines = deserializedPipelines.getLeft();
List<Map<String, Object>> failedPipelines = deserializedPipelines.getRight();

log.info(
Caution on "info" logs. If this isn't used often, OK; not sure that batchUpdates like this are regularly done, but I would prefer debug level.
This log occurs once for the entire batch operation, so I am hoping it comes in handy while monitoring.
front50-web/src/main/java/com/netflix/spinnaker/front50/controllers/PipelineController.java
if (staleCheck
    && !Strings.isNullOrEmpty(pipeline.getId())
    && pipeline.getLastModified() != null) {
  checkForStalePipeline(pipeline, errors);
}

// Run other pre-configured validators
pipelineValidators.forEach(it -> it.validate(pipeline, errors));
Honestly a LOT of the above operations LIKE the cron ids could be moved to pipelineValidators & shrink a lot of this code down :) and simplify it! Future enhancement maybe...
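The consolidation suggested here, folding the inline checks into the pipelineValidators loop, might look like this minimal sketch. The Validator interface and names are illustrative, not front50's actual PipelineValidator signature.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * Illustrative validator loop: each inline check (cron ids, duplicates, stale
 * check) becomes a small validator, and validatePipeline() shrinks to one loop.
 */
public class ValidatorPipeline {
  interface Validator {
    void validate(Map<String, Object> pipeline, List<String> errors);
  }

  public static List<String> runAll(
      Map<String, Object> pipeline, List<Validator> validators) {
    List<String> errors = new ArrayList<>();
    for (Validator validator : validators) {
      validator.validate(pipeline, errors);
    }
    return errors;
  }
}
```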
front50-api/src/main/java/com/netflix/spinnaker/front50/api/model/pipeline/Pipeline.java
@link108 @jasonmcintosh - could you approve the PR?
…e functionality (#458) The documentation for front50 already existed. Adds some sections for the gate and orca changes which make the functionality feature complete. See relevant PRs spinnaker/front50#1483, spinnaker/orca#4773, and spinnaker/gate#1823 Co-authored-by: Richard Timpson <richard.timpson@salesforce.com>
This PR adds the following functionality to the pipeline batch update operation: