🌱 Add Pathfinder assessment migration to CLI #507
Conversation
Extending the tackle CLI tool with an option to export Pathfinder assessments and import them to the Konveyor API assessments endpoint. ```$ ./tackle migrate-assessments``` Related to konveyor#489 Signed-off-by: Marek Aufart <maufart@redhat.com>
hack/tool/tackle
```
@@ -722,6 +725,65 @@ class TackleTool:
            # Push the updated assessment
            apiJSON(self.tackle2Url + tackle2path("assessments/%d" % assmnt2['id']), self.tackle2Token, data=assmnt2, method='PATCH', ignoreErrors=ignoreErrors)

    def migrateAssessments(self, assessmentsFileName="assessments", ignoreErrors=False):
        cnt = 0
        ## Export Pathfinder data for each Application
```
@jmontleon @jortel Thinking about usage in the operator upgrade:
- option 1 - Does it make sense to first dump Pathfinder data on Konveyor 0.2, then upgrade Konveyor and import the assessments into 0.3 as part of the upgrade steps, or
- option 2 - upgrade Konveyor but keep Pathfinder running, then run the assessment migration live directly from Pathfinder (without the Hub proxy) to the Hub API without creating temp files, and kill Pathfinder after the migration succeeds?
I'm trying to think about this from a workflow where the operator is orchestrating it. Option 2 sounds cleaner to me, from a logic standpoint. Is it possible to run the migration utility process within a pod (maybe a job would be appropriate) that can do the migration between pathfinder and hub?
A job sounds good to me too for executing the python migration script (@dymurray, was it planned to have the script as part of the hub image or a special one?). It'd need to get the endpoints for Pathfinder (probably the internal one) and the Hub API to run the migration.
Perhaps I am being confused by the terminology. When I read "execute the python migration script" I feel like we are rolling something special for this.
I think the original plan is the simplest:
- Add functionality to the export/import tool to work with Assessments. This is what the tool is for.
- Add support for hub API version 3 (for all things including assessment).
- Package the tool with the operator.
- The operator uses the tool to export/import assessments.
This is an application-level migration that should be performed/orchestrated by the operator, not the hub.
The workflow:
- The operator uses the tool to export assessments through the hub API (this is how the tool works).
- Upgrade tackle.
- Import the assessments.
- On success, get rid of pathfinder.
If having the operator do this in a job is better, fine; it seems to just add moving parts, but I'm open to understanding why it's better.
Previous comments also included the "execution" part of the assessments migration, which will live in the operator repo, so that was misleading; sorry for mixing it in.
One more open question is about credentials/authorization: the tool expects a username & password, which are not known to the Hub (AFAIK); only Keycloak knows them. But as @mansam suggested, an addon token could be used for this assessment migration purpose. Would this addon token (which is known to the operator too) be suitable for calling the Hub API from the tool instead of login/password, @jortel?
The hub has a builtin auth provider that is used for addon tokens.
Yes, we can have the operator build/sign an auth token with assessment scopes, signed using the key in the secret. I will prototype that.
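As a rough illustration of that prototype idea, an HS256-signed token can be minted with nothing but the Python stdlib. The claim layout (a `scope` claim) and the scope strings below are assumptions for illustration, not the hub's actual token format:

```python
# Hedged sketch: mint an HS256-signed JWT-style token with assessment
# scopes, signed with a shared key (as the operator might, using the key
# from the secret). Claim names and scope strings are assumptions.
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(key: bytes, scopes: list[str]) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"scope": " ".join(scopes)}).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

token = sign_token(b"addon-secret", ["assessments:get", "assessments:post"])
```

The tool would then pass this token as a bearer credential instead of a login/password pair.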
hack/tool/tackle
```
# Skip if Assessment for given Application already exists
if len(apiJSON(self.tackle2Url + "/hub/applications/%d/assessments" % passmnt["applicationId"], self.tackle2Token, data={"questionnaire": {"id": 1}})) > 0:
    print(" Assessment already exists, skipping.")
    continue
```
If there is already an assessment on the application, the import is skipped, so the script can run multiple times without failing.
The export part is commented out until we decide the best matching scenario for execution with the operator upgrade. Examples from the import part:
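The skip-if-exists guard from the diff can be sketched on its own, with the HTTP call stubbed out so the idempotency logic is visible in isolation (the `existing_for` callback is hypothetical, standing in for the `apiJSON` query):

```python
# Minimal sketch of the idempotent import loop: assessments whose
# application already has one are skipped, so reruns are safe.
# `existing_for` is a hypothetical stand-in for the Hub API lookup.
def migrate(assessments, existing_for):
    migrated, skipped = 0, 0
    for passmnt in assessments:
        if existing_for(passmnt["applicationId"]) > 0:
            skipped += 1  # already migrated on a previous run
            continue
        migrated += 1  # a real run would POST the new Hub assessment here
    return migrated, skipped
```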
Signed-off-by: Marek Aufart <maufart@redhat.com>
hack/tool/tackle
```
    print(" Assessment already exists, skipping.")
    continue

# Prepare new Assessment
```
This all looks great.
Since the Pathfinder component is being replaced by the Konveyor Hub API, already existing Assessments need to be migrated. Adding a script based on Hub's CLI tool https://github.com/konveyor/tackle2-hub/tree/main/hack/tool that connects to a running Pathfinder and migrates Assessments to a running Hub API v0.3. This script is expected to be executed by the operator during the Konveyor 0.2->0.3 upgrade. Example usage (only the auth token argument is optional; both URLs are required): ``` $ ./migrate-pathfinder-assessments --pathfinder-url http://pathfinder.svc --hub-base-url http://hub.svc:8080 --token "" ``` Related to konveyor/tackle2-hub#489 and konveyor/tackle2-hub#507 --------- Signed-off-by: Marek Aufart <maufart@redhat.com>
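The script's argument handling can be sketched using only the three flags named in the usage above; the parser here is illustrative, not the script's actual code:

```python
# Hedged sketch of argument parsing for migrate-pathfinder-assessments:
# both URLs are required, the token is optional, per the description.
import argparse

def parse_args(argv):
    p = argparse.ArgumentParser(prog="migrate-pathfinder-assessments")
    p.add_argument("--pathfinder-url", required=True,
                   help="Pathfinder endpoint (probably the internal one)")
    p.add_argument("--hub-base-url", required=True,
                   help="Hub API base URL")
    p.add_argument("--token", default="",
                   help="optional auth token for the Hub API")
    return p.parse_args(argv)
```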
Signed-off-by: Marek Aufart <maufart@redhat.com>
Updated the assessment migration part to do the Pathfinder query and migration to the Hub API without temporary files. Also added
LGTM. I recommend rebasing on main to clear the `go vet` errors.
The `/applications/:id/analysis` and `/applications/:id/analysis/report` endpoints returned the report for analysis.ID = :id (the application ID) instead of the last analysis for application.ID = :id. Potentially related to: https://issues.redhat.com/browse/MTA-1344 Signed-off-by: Jeff Ortel <jortel@redhat.com>
Better support for test-run analysis with seed PRs using `--build-arg SEED_PROJECT=<url>` `--build-arg SEED_BRANCH=<branch>`. Example: ``` $ docker build . -t quay.io/jortel/tackle2-hub:latest \ --build-arg 'SEED_PROJECT=jortel/tackle2-seed' \ --build-arg 'SEED_BRANCH=add-targets' \ && docker push quay.io/jortel/tackle2-hub:latest ``` Signed-off-by: Jeff Ortel <jortel@redhat.com>
Marshalling the Assessment resource's `Sections` field unconditionally results in populating the model's `Sections` with the byte string `null`, which breaks the test for empty sections, making it always look like the assessment is being imported with sections "as-is". To fix this, only marshal the sections in the Assessment's `Model()` method if they are not empty. Additionally, replace the `if m.Sections == nil` test with a test for length, which works on nil as well as empty slices. Signed-off-by: Sam Lucidi <slucidi@redhat.com>
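The same pitfall is easy to demonstrate in Python, where `json.dumps(None)` also produces the string `null`; this sketch mirrors the fix's length-based guard (the function name is hypothetical, an analogue of the Go `Model()` change, not the hub's code):

```python
# Hedged Python analogue of the fix: only marshal sections when they are
# non-empty, and test length rather than None so empty lists are covered.
import json

def sections_payload(sections):
    # json.dumps(None) would yield the byte string b"null", which a later
    # emptiness check on the raw bytes would treat as real data.
    if sections is None or len(sections) == 0:
        return b""
    return json.dumps(sections).encode()
```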
Previously PrepareForArchetype was using only the membership criteria to autofill assessments, but the enhancement specifies that it should use the archetype tags as well as the criteria tags. Fixes https://issues.redhat.com/browse/MTA-1396 Signed-off-by: Sam Lucidi <slucidi@redhat.com>
Fixes konveyor#510 Signed-off-by: Sam Lucidi <slucidi@redhat.com>
Only changing the `required` field is permitted for builtin questionnaires, all other changes and deletion are disallowed. --------- Signed-off-by: Sam Lucidi <slucidi@redhat.com>
The HTML report (tarball) is composed/streamed by the hub in steps: 1. Add the static report directory (template). 2. Add the composed `output.js` file which contains the analysis data. Using this method, the ReportWriter was adding the `output.js` (entry) twice in the tar. First from the static report (template) and the second generated by the hub. The tar command line seems to deal with this by overwriting the file. However the _extract_ functionality used by browsers ignores the 2nd occurrence. The fix is to filter out the 1st output.js when using tar.Writer.AddDir(). The `Filter` in the `tar` package only supported _include_ filtering. This PR updates the `tar.Filter` to support both _included_ and _excluded_ patterns. Refitted and tested the /bucket. --------- Signed-off-by: Jeff Ortel <jortel@redhat.com>
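A Python `tarfile` analogue of the fix: exclude the template's `output.js` while adding the static files, then add the single hub-generated copy. File names and contents here are illustrative, not the hub's actual tar package:

```python
# Hedged sketch of exclude-filtering while composing a report tarball,
# so output.js appears exactly once (the generated copy, not the
# template's). Built in memory; names/contents are illustrative.
import io
import tarfile

def build_report(template_files: dict, excluded: set) -> bytes:
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in template_files.items():
            if name in excluded:
                continue  # skip the template copy; generated one comes later
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
        # Add the single, generated output.js containing the analysis data.
        generated = b"window.data = [];"
        info = tarfile.TarInfo("output.js")
        info.size = len(generated)
        tar.addfile(info, io.BytesIO(generated))
    return buf.getvalue()
```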
https://issues.redhat.com/browse/MTA-1298 Signed-off-by: Jeff Ortel <jortel@redhat.com>
This was missing from the Assessment/Questionnaire sections and is necessary to completely implement the enhancement. Signed-off-by: Sam Lucidi <slucidi@redhat.com>
The static report JS needs the app ID to be a string. Signed-off-by: Jeff Ortel <jortel@redhat.com>
Updating the Maven Identity sample to use the (currently) correct "maven" kind instead of "mvn" to make it work with the addon https://github.com/konveyor/tackle2-addon-analyzer/blob/61027b29f36ffa5b884ceac81c61d74341bdd762/cmd/mode.go#L151-L153 Signed-off-by: Marek Aufart <maufart@redhat.com>
Add the ticket API Test --------- Signed-off-by: Yash Khare <yash2010118@akgec.ac.in>
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
Addresses: https://bugzilla.redhat.com/show_bug.cgi?id=2242803. CVE which impacts http/2 and rapid reset DDOS attacks. Signed-off-by: Dylan Murray <dymurray@redhat.com>
Using the container status `Reason` for more accurate reporting. ``` state: Failed image: quay.io/jortel/tackle2-addon-analyzer:debug pod: konveyor-tackle/task-13-gcmjs retries: 1 started: 2023-10-16T10:36:30.221282042-07:00 terminated: 2023-10-16T10:36:40.301254088-07:00 bucket: id: 17 name: "" errors: - severity: Error description: 'Pod failed: OOMKilled' ``` Also, the RWX should be disabled by default. --------- Signed-off-by: Jeff Ortel <jortel@redhat.com>
Fixes FOSSA complaint. Signed-off-by: Jeff Ortel <jortel@redhat.com>
`ArchivedIssue` in the API package being an alias to another type confuses the swag tool. To mitigate this, I had to overwrite the swagger type of that field on the Analysis resource with `object`. This means that the api documentation for that field will be unhelpfully vague, but it allows the documentation to build. Signed-off-by: Sam Lucidi <slucidi@redhat.com>
The refactor to using the new `tar` package accidentally removed setting the X-Directory:Expand. As a result, the client is not expanding downloaded directory (tarball). --- Fixed API tests. The bucket directory test really needs 2+ files to be accurate. It missed this issue. Added the task bucket (subresource) test for symmetry. --------- Signed-off-by: Jeff Ortel <jortel@redhat.com>
Fixing resources cleanup at the end of the test, by sending the identity's ID to `Identity.Delete` instead of the tracker's ID Signed-off-by: Maayan Hadasi <mguetta@redhat.com>
…konveyor#530) * Add `confidence` and `risk` fields to Application and Archetype resources. Overall confidence calculated using the algorithm from Pathfinder for each assessment, then summed and divided by total number of assessments. Overall risk level is `green` if all assessments are `green`, otherwise the overall risk is `red`, `unknown`, or `yellow` in that priority order if any assessments have that risk level. * Moved some assessment status logic out of the resource and into the assessment package where it belongs. * Added ArchetypeResolver to better encapsulate the related behaviors and for symmetry with ApplicationResolver. * Simplified the Application and Archetype endpoints by replacing several related resource `With*` methods with `WithResolver`. * Improved consistency of the APIs within the `assessment` package. --------- Signed-off-by: Sam Lucidi <slucidi@redhat.com>
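The overall-risk rule described above (green only when all assessments are green; otherwise red, unknown, or yellow in that priority order) can be sketched as follows. This is a hedged illustration, not the hub's actual code:

```python
# Hedged sketch of the overall-risk aggregation rule from the PR
# description; not the hub's implementation.
def aggregate_risk(risks: list[str]) -> str:
    # Green only when every assessment is green.
    if risks and all(r == "green" for r in risks):
        return "green"
    # Otherwise the highest-priority non-green level present wins.
    for level in ("red", "unknown", "yellow"):
        if level in risks:
            return level
    return "unknown"  # no assessments at all
```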
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
Fixes konveyor#527 Signed-off-by: Sam Lucidi <slucidi@redhat.com>
Add hack script to generate jwt tokens for _builtin_ auth. Likely used by actors such as the operator for migration related to upgrade. Related: - auth.Validator signature changed to return error. This is more flexible and _failures_ can be propagated. - auth.Builtin.Authenticate() returned NotAuthenticated when it should have returned NotValid; fixed for clarity. - auth.NotValid includes `Reason`. --------- Signed-off-by: Jeff Ortel <jortel@redhat.com>
closes konveyor#539 --------- Signed-off-by: Jeff Ortel <jortel@redhat.com>
This also adds some convenience methods to the `Setting` model to make reading and updating their values somewhat less tedious. Fixes konveyor#505 --------- Signed-off-by: Sam Lucidi <slucidi@redhat.com>
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
The yaml encoder/decoder needs to be explicitly told to inline the `Resource` struct's fields on each of the API resources; otherwise attempts to bind requests that include Resource fields (`id`, `createTime`, `createUser`, `updateUser`) fail. Signed-off-by: Sam Lucidi <slucidi@redhat.com>
The assessment stakeholder relationships need to be preloaded so that the `applications/id/assessments` and `archetypes/id/assessments` subresources populate their association fields properly. Signed-off-by: Sam Lucidi <slucidi@redhat.com>
This PR represents a first step towards improving our automatically generated API documentation. It includes the following changes: * Adds general API level doc comments that are a requirement for Swagger/OpenAPI. * Adds missing godocs for parameters where they were missing. This was causing Swagger validation to fail and was preventing QE from successfully converting our Swagger 2 documentation to OpenAPI 3.0 * Fixes some incorrect parameter documentation * Adds a new target to convert our Swagger 2 documentation to OpenAPI 3.0 using the Swagger conversion API at `converter.swagger.io`. * Corrects the existing docs-swagger target command invocation. * Updates the redoc target to use the newly generated OpenAPI 3.0 documentation. --------- Signed-off-by: Sam Lucidi <slucidi@redhat.com>
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
This was missed in the original PR. konveyor#544 Signed-off-by: Sam Lucidi <slucidi@redhat.com>
Extending the tackle CLI tool with an option to export Pathfinder assessments and import them to the Konveyor API assessments endpoint.
$ ./tackle migrate-assessments
More information about CLI usage: https://github.com/konveyor/tackle2-hub/tree/main/hack/tool#readme
Related to #489