downloading artifact from s3 in ui, timed out waiting for condition #2129
Comments
@sarabala1979 before I investigate - have you seen similar before please?
Generally this kind of error is because the CA that signed s3.amazonaws.com isn't trusted by the client making the request. Depending on your situation and security requirements (https://serverfault.com/questions/444186/is-it-safe-to-use-s3-over-http-from-ec2-as-opposed-to-https), this can likely be fixed by using insecure: true (plain HTTP). Otherwise, it would appear that argo needs to have the CA when making the s3 request. I haven't looked at the code or docker image to know more.
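Concretely, that's the insecure flag on the s3 block of the artifact repository config; a minimal sketch, reusing the keys that appear in the full config later in this thread (the bucket name is a placeholder):

artifactRepository:
  s3:
    endpoint: s3.amazonaws.com
    bucket: my-bucket      # placeholder
    insecure: true         # plain HTTP; skips TLS, so no CA verification is needed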
It looks like the image is built from scratch (https://github.com/argoproj/argo/blob/master/Dockerfile#L87-L91), which wouldn't have CAs by default. I also don't see them COPY'd in. So unless the CA is built into the binary (which would seem odd), then I'm guessing no certs exist in the image. You can always create a secret with the certs, mount them into the container, and set SSL_CERT_FILE (which Go honors on Linux) to point at them. All this said, I'm guessing this worked previously so I'm probably missing how they got in. It's also possible they exist in the container, but are out of date.
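A sketch of what that secret could look like, with a made-up name (argo-ca-certs) and its data taken from a trusted CA bundle such as /etc/ssl/certs/ca-certificates.crt:

apiVersion: v1
kind: Secret
metadata:
  name: argo-ca-certs          # hypothetical name
type: Opaque
stringData:
  ca-certificates.crt: |       # paste a trusted CA bundle here
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----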
@ddseapy thanks for your suggestions. I assume the secret should have the contents of /etc/ssl/certs/ca-certificates.crt? I could give that a try, but I am starting to suspect there is something else going on (as well)? Thoughts welcome.
@haghabozorgi I am hitting the same issue. Yes, that's generally where CA certs live on machines.
@ddseapy is there extra config needed to get the artifact store set up in 2.4.3? I was trying to confirm whether this works in 2.4.3, but if I insert the same configmap snippet I see no artifacts listed in the archive tab.
One quick way of doing this is to add the following to the argo-server deployment:
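Roughly along these lines (a sketch, not the exact snippet; the secret and volume names are illustrative, and SSL_CERT_FILE is the path Go consults for the CA bundle on Linux):

spec:
  template:
    spec:
      containers:
      - name: argo-server
        env:
        - name: SSL_CERT_FILE                  # Go reads the CA bundle from here
          value: /certs/ca-certificates.crt
        volumeMounts:
        - name: ca-certs
          mountPath: /certs
          readOnly: true
      volumes:
      - name: ca-certs
        secret:
          secretName: argo-ca-certs            # hypothetical secret holding the CA bundle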
@haghabozorgi I don't think the config is different. Hopefully @tcolgate's fix works for you.
The executor probably has a valid system CA as it's built on a Debian image. The argocli container is more stripped down.
I think this is probably a bug and that the argo-server image may need to be built on the same base as the argocli. This is not straightforward; I'll own it.
@alexec are you also testing with S3?
I'm working on a fix that will change the base image from scratch - what would be useful is for someone to test it for me.
@tcolgate thoughts on a solution?
@jessesuen - thoughts on this please? |
I don't have access to S3 in a way that would let me reliably test this. I've created a PR - would you be able to use the Dockerfile there to try it out?
@haghabozorgi I'm afraid I'm not sure what the cause of that issue is. I made the other ticket to track that.
@alexec I am building from your repo now, but not sure if I can test properly given the issue I mentioned above. I assume I will see the same "http named cookie not present" message?
@haghabozorgi yes. If you deploy without @tcolgate's fix, and see "http named cookie not present" instead of the original error "x509: certificate signed by unknown authority", then the PR works.
@alexec using the image from your Dockerfile results in "http named cookie not present".
Ok, but we know that is a good fix for the certs. |
Ok - should we close this issue once the PR is merged? |
I use minio over http as well. |
Can you please check you have secrets set up?
As far as I can tell it's ok.

apiVersion: v1
data:
config: |
containerRuntimeExecutor: docker
artifactRepository:
archiveLogs: true
s3:
accessKeySecret:
key: accesskey
name: ddseapy-minio
secretKeySecret:
key: secretkey
name: ddseapy-minio
bucket: ds-argo-artifacts
endpoint: ddseapy-minio
insecure: true
region: us-east-1
metricsConfig:
enabled: true
path: /metrics
port: 8080
persistence:
archive: true
connectionPool:
maxIdleConns: 100
maxOpenConns: 0
nodeStatusOffLoad: true
postgresql:
host: ddseapy-postgresql
port: 5432
database: argo
tableName: argo_workflows
userNameSecret:
name: ddseapy-argo-workflow-controller
key: postgresqlUsername
passwordSecret:
name: ddseapy-argo-workflow-controller
key: postgresqlPassword
kind: ConfigMap
metadata:
name: ddseapy-argo-workflow-controller
namespace: ddseapy
---
apiVersion: v1
data:
accesskey: REDACTED
secretkey: REDACTED
kind: Secret
metadata:
labels:
app: minio
name: ddseapy-minio
namespace: ddseapy
type: Opaque
@ddseapy I redacted your paste as you shared unencrypted credentials. If this is a production system, then you should immediately change your password.
I'm wondering where this error is coming from - can you open your browser console and share the HTTP request and response please? |
no, that is simply
Oh - what auth mode are you using? Server? |
Ok. I've reproduced this. |
Fix implemented. |
@alexec does your fix address downloading from s3 or is it just for the minio over http use case? |
minio is an s3-compatible API. While I don't have the ability to test S3 (and it sounds like neither does @alexec), it almost certainly fixes the error for S3 as well.
@alexec can we please re-open? I am testing install.yaml from master with the same s3 snippet in my original comment on this issue, and am still seeing "http named cookie not present".
If bug persists - please re-open. |
@haghabozorgi master install.yaml doesn't point at rc8 yet.

With rc8 I am able to download logs, though all other artifacts are downloaded as
The "named cookie" error should be fixed in rc8. |
Yay! |
Hi, I found another solution if it helps. This is my config-map used in the controller to allow the init container to push logs to s3.
This is the conf of the argo-server deployment which mounts the ca.crt and allows the UI to get logs from s3 after the pod is killed:
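A minimal sketch of what that deployment change can look like, assuming the CA certificate lives in a ConfigMap named s3-ca (all names here are illustrative):

spec:
  template:
    spec:
      containers:
      - name: argo-server
        volumeMounts:
        - name: s3-ca
          mountPath: /etc/ssl/certs/ca.crt   # one of the default locations Go scans for CAs
          subPath: ca.crt
          readOnly: true
      volumes:
      - name: s3-ca
        configMap:
          name: s3-ca                        # hypothetical ConfigMap holding the ca.crt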
What happened:
Installed the latest 2.5.0-rc7 via install.yaml on EKS 1.14, and added to install.yaml the diff shown below so that archiveLogs and the s3 config are enabled (workflow-controller-configmap).
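A minimal sketch of the kind of diff meant here, modeled on the config posted earlier in this thread (bucket, endpoint, and secret names are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  config: |
    artifactRepository:
      archiveLogs: true
      s3:
        bucket: my-artifacts               # placeholder
        endpoint: s3.amazonaws.com
        region: us-east-1
        accessKeySecret:
          name: my-s3-credentials          # placeholder secret
          key: accesskey
        secretKeySecret:
          name: my-s3-credentials
          key: secretkey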
To get local access to the UI in Kubernetes:
kubectl port-forward svc/argo-server 2746:2746 -n argo
Run a basic hello world workflow via the argo CLI; the workflow completes as expected, and clicking on the artifacts link in the UI shows the main-logs object as expected, but when you click to download the actual artifact in the UI, the browser eventually returns a "timed out waiting on condition".
What you expected to happen:
I expect clicking on the link to download the requested artifact.
How to reproduce it (as minimally and precisely as possible):
Install via install.yaml with an s3 config similar to the above, run any workflow, and then try to download the resulting main-logs artifact.
Logs
argo-server log shows:
Message from the maintainers:
If you are impacted by this bug please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.