This repository has been archived by the owner on May 6, 2021. It is now read-only.

Update cluster token upon 401 response from OSO #331

Open

hrishin opened this issue Jan 30, 2019 · 4 comments

Comments

@hrishin
Member

hrishin commented Jan 30, 2019

After some interval, or after certain incidents, the devtools-sre token gets updated. However, the Idler is not able to detect the change in the OSO token, so all watch requests to the cluster start failing.

Currently, the only way to pick up the updated token is to restart the Idler.

The Idler needs to sync the new cluster token as soon as it receives a 401 HTTP response (fetching it on demand, similar to demand paging).
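A minimal sketch of that demand-driven refresh in Go (not the Idler's actual code; `fetchClusterToken` is a hypothetical stand-in for the auth-service call):

```go
package idler

import (
	"fmt"
	"net/http"
)

// fetchClusterToken is a hypothetical helper that asks the auth service for a
// fresh token for the given cluster API URL, e.g. via
// GET https://auth.openshift.io/api/token?for={apiURL}.
func fetchClusterToken(apiURL string) (string, error) {
	// ... call the auth service here ...
	return "fresh-token", nil
}

// getWithTokenRefresh performs an authenticated GET against the cluster and,
// on a 401 Unauthorized, syncs a new token once and retries the request.
func getWithTokenRefresh(client *http.Client, apiURL, token string) (*http.Response, error) {
	do := func(tok string) (*http.Response, error) {
		req, err := http.NewRequest(http.MethodGet, apiURL, nil)
		if err != nil {
			return nil, err
		}
		req.Header.Set("Authorization", "Bearer "+tok)
		return client.Do(req)
	}

	resp, err := do(token)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != http.StatusUnauthorized {
		return resp, nil
	}
	resp.Body.Close()

	// The cached token is stale: fetch a new one and retry once.
	fresh, err := fetchClusterToken(apiURL)
	if err != nil {
		return nil, fmt.Errorf("refreshing cluster token: %v", err)
	}
	return do(fresh)
}
```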

@chmouel
Contributor

chmouel commented Jan 31, 2019

This needs to use the same strategy as the other fabric8-services: using the fabric8-cluster API you can watch for changes and refresh, as done in fabric8-tenant:

https://github.com/fabric8-services/fabric8-tenant/blob/b3df98f84747d39e0a5ac220cb9c1f41cc7b18df/cluster/service.go#L69-L86
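Roughly, that strategy keeps a cached cluster list and refreshes it on an interval. A sketch of the idea, where the `Cluster` type and `loadClusters` helper are illustrative rather than the actual fabric8-tenant types:

```go
package idler

import (
	"context"
	"sync"
	"time"
)

// Cluster is an illustrative record; the fabric8-cluster API returns more fields.
type Cluster struct {
	APIURL string
	Token  string
}

// loadClusters is a hypothetical helper that calls the fabric8-cluster API.
func loadClusters(ctx context.Context) ([]Cluster, error) {
	// ... GET the cluster list from the cluster service ...
	return nil, nil
}

// clusterCache holds the cluster list and refreshes it periodically,
// in the spirit of the watch-and-refresh loop linked above.
type clusterCache struct {
	mu       sync.RWMutex
	clusters []Cluster
}

func (c *clusterCache) start(ctx context.Context, interval time.Duration) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				if fresh, err := loadClusters(ctx); err == nil {
					c.mu.Lock()
					c.clusters = fresh
					c.mu.Unlock()
				}
			}
		}
	}()
}
```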

I think we should rephrase this issue description along the lines of my comment instead.

@alexeykazakov
Contributor

Well, I think we should not cache clusters at all. Let's just load clusters by URL when we need them:
GET https://cluster.openshift.io/api/clusters?cluster-url={apiURL}

See http://swagger.goa.design/?url=github.com%2Ffabric8-services%2Ffabric8-cluster%2Fdesign#!/clusters/clusters_list

And obtain the token when you need it via the auth token endpoint (which you already do), but without caching it:
GET https://auth.openshift.io/api/token?for={apiURL}
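A cache-free lookup along those lines might look like this; the endpoint is the one quoted above, but the `tokenResponse` shape is a simplification, and the real call also needs the service's own authentication:

```go
package idler

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// tokenResponse is an illustrative shape for the auth-service reply.
type tokenResponse struct {
	AccessToken string `json:"access_token"`
}

// clusterToken fetches the token for a cluster every time it is needed,
// without caching it locally.
func clusterToken(apiURL string) (string, error) {
	endpoint := "https://auth.openshift.io/api/token?for=" + url.QueryEscape(apiURL)
	resp, err := http.Get(endpoint)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("auth service returned %s", resp.Status)
	}
	var tr tokenResponse
	if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil {
		return "", err
	}
	return tr.AccessToken, nil
}
```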

@chmouel
Contributor

chmouel commented Jan 31, 2019

@alexeykazakov how do you detect a change and refresh that token?

@alexeykazakov
Contributor

The cluster service watches the cluster configuration secret, so when the secret with the cluster configuration is changed, the cluster service immediately updates its DB.
The auth service pulls the cluster tokens every 5 minutes to update its local cache. We planned to drop the cache as part of our OSD support and obtain the token every time we need it, but we will probably suspend this work for now.
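As a sketch of that secret-watch mechanism, assuming it goes through the Kubernetes API with client-go (the namespace and secret name here are hypothetical, and the real cluster service may watch the secret differently):

```go
package cluster

import (
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchClusterConfig blocks, reacting to updates of the configuration secret.
func watchClusterConfig(clientset kubernetes.Interface) error {
	// "fabric8-cluster" and "cluster-config" are hypothetical names.
	w, err := clientset.CoreV1().Secrets("fabric8-cluster").Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=cluster-config",
	})
	if err != nil {
		return err
	}
	for event := range w.ResultChan() {
		log.Printf("cluster config %s: updating cluster records in the DB", event.Type)
		// ... parse the secret and update the DB ...
	}
	return nil
}
```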

So, my suggestion would be that all other clients do not implement or use any cache at all. Just pull the cluster information when you need it from the cluster service, and the cluster token when you need it from the auth service. In practice this means that if a new cluster is added to the system, it is available to every service immediately. If there is some change to an existing cluster's configuration (like an updated token), the change becomes available within 5 minutes or less. No service restarts are required.
