Ensure kfp-ui can show logs from Argo #582
Thank you for reporting your feedback! An internal ticket has been created: https://warthogs.atlassian.net/browse/KF-6542.
Reproduce the error
I was able to reproduce the error with the following steps:
Test the fix
To test the fix suggested in canonical/bundle-kubeflow#1120, I:
Results
Logs cannot be viewed, with a different error message this time.

Debugging
From the log above, looking at the logs from the frontend, we can see the path from which the request to minio is trying to fetch. Now, let's get inside the minio container to see if we can find the persisted data, and whether it's at the expected path in the bucket:
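A minimal sketch of how that inspection could look (the pod name, namespace, data directory, and bucket name are assumptions for illustration, not taken from the deployment):

```bash
# Hypothetical sketch: pod name, namespace, data dir, and bucket are assumptions.
# A single-node MinIO with the filesystem backend lays buckets out as plain
# directories, so listing the data dir reveals the persisted object paths.
kubectl exec -n kubeflow minio-0 -- ls -R /data/mlpipeline
```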
There we can observe that the persisted data is indeed present, but it is not at the expected path. Looking at the upstream changes from KFP 2.2 to 2.3, they have added a new env to the frontend, controlling the key format under which the UI expects archived logs to be stored. As also seen upstream, in this comment it is mentioned that the value for this env must match the key format configured on the Argo side. In our CKF, this ConfigMap is created by the argo-controller charm, so this is the format that is used by our workflows. And due to the change upstream, the format the frontend expects no longer matches it.
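For reference, a hedged sketch of how the Argo side could be inspected (the ConfigMap name follows Argo's upstream default and the data key is an assumption; CKF may lay this out differently):

```bash
# Hypothetical: ConfigMap name and data key are assumptions based on Argo defaults.
# Print the artifact repository settings (including the key format) that the
# workflow controller applies when archiving logs.
kubectl get configmap workflow-controller-configmap -n kubeflow \
  -o jsonpath='{.data.artifactRepository}'
```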
Testing the fixes
Fixes implemented in #605 and canonical/argo-operators#208 must be tested simultaneously.
The expected result is that logs are still visible even when the workflow is deleted.

v1 Pipelines logs
The logs of v1 pipelines can be viewed from the UI as well. They are not affected by the workflow being deleted, since they link directly to the logs in the s3 storage. To test it, you can use this v1 pipeline yaml from our integration tests.
A note on why V1 Pipelines persist the logs, even if the workflow is deleted
In MySQL, we see in the above table that the status of the Argo Workflow contains, as an output artifact of each node, the s3 key of its main-logs:

```yaml
status:
  phase: "Succeeded"
  nodes:
    execution-order-pipeline-6bbzm-1432297067:
      id: "execution-order-pipeline-6bbzm-1432297067"
      name: "execution-order-pipeline-6bbzm.echo1-op"
      displayName: "echo1-op"
      phase: "Succeeded"
      ...
      inputs:
        parameters:
          - name: "text1"
            value: "message 1"
      outputs:
        artifacts:
          - name: "main-logs"
            s3:
              key: "execution-order-pipeline-6bbzm/execution-order-pipeline-6bbzm-echo1-op-1432297067/main.log"
      exitCode: "0"
      children:
        - "execution-order-pipeline-6bbzm-3473380184"
```

Summary
Closed by #605 and canonical/argo-operators#208
Context
This is needed in order to resolve canonical/bundle-kubeflow#1120.
The KFP frontend has an environment variable, ARGO_ARCHIVE_LOGS, that is used by the frontend to decide whether it should proxy logs from MinIO. More on this can be found in canonical/bundle-kubeflow#1120 (comment).

We'll need to introduce a new config option that sets this env var to True by default, to ensure the UI fetches logs from Argo by default. As an extra step, we'll also need one more config option for disabling the GKE metadata lookup, which was making the upstream container constantly restart (canonical/bundle-kubeflow#1120 (comment)). A sketch of what toggling these could look like is shown below.
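For illustration only, a hedged sketch of how these could be toggled once the options exist (the option names are assumptions, not the merged implementation; the issue only defines the env vars and their intended defaults):

```bash
# Hypothetical sketch: config option names are assumptions. ARGO_ARCHIVE_LOGS is
# expected to default to true; the GKE metadata option exists to disable the lookup.
juju config kfp-ui argo-archive-logs=true disable-gke-metadata=true
```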
What needs to get done

- Configure the ARGO_ARCHIVE_LOGS env var in kfp-ui
- Configure the DISABLE_GKE_METADATA env var in kfp-ui
Definition of Done