
Load-balancing MapFishPrint in kubernetes and shared-state #2970

Closed
remyguillaume opened this issue Jun 27, 2023 · 0 comments · Fixed by #2984

Comments

@remyguillaume
Context

  • MapFish print version: 3.30.3
  • Java version: (the one in your docker container)
  • OS: (the one in your docker container)

Describe the bug

We deployed MapFishPrint in our Kubernetes cluster. It works fine with one pod.
But we would like proper load balancing, with N worker pods behind the same service.
This does not work today, because the MapFishPrint containers are not stateless: they do not share the status of generated reports.

Currently, when we start an asynchronous report, the response is a JSON containing the URL to query for the status of our report. But that URL is routed to a random pod, which may know nothing about our report.
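To make the failure mode concrete, here is a minimal simulation (in Python, with hypothetical names; MapFishPrint itself is Java) of per-pod in-memory report state versus a shared registry:

```python
import uuid

class Pod:
    """Hypothetical model of one MapFishPrint pod: report status lives
    in per-instance memory unless a shared registry is injected."""
    def __init__(self, registry=None):
        self.registry = registry if registry is not None else {}

    def create_report(self):
        ref = str(uuid.uuid4())
        self.registry[ref] = "running"
        return ref

    def status(self, ref):
        if ref not in self.registry:
            return "invalid reference"  # the error we observe
        return self.registry[ref]

# Per-pod state: the POST lands on pod_a, the status poll on pod_b.
pod_a, pod_b = Pod(), Pod()
ref = pod_a.create_report()
print(pod_b.status(ref))   # "invalid reference"

# Shared state (e.g. a common database or Redis): both pods see the report.
shared = {}
pod_a, pod_b = Pod(shared), Pod(shared)
ref = pod_a.create_report()
print(pod_b.status(ref))   # "running"
```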

How to reproduce

  • Deploy several MFP pods behind the same service in any Kubernetes cluster.
  • Start an asynchronous report (POST).
  • Use the /report/status URL to query the report status.

=> Sometimes it works; sometimes you get invalid reference 'xxxxxx' because the pod handling the request does not know the report id.

Question

Is there a way to configure MapFishPrint today so that the report state is shared among all running pods?
Presumably you have already solved this for the MapFishPrint SaaS version?
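As a stopgap, we could imagine pinning each client to one pod with session affinity on the Service (sketch below; the names are ours, and this is an assumption on our side, not a documented MapFishPrint feature). It avoids the "invalid reference" error but is not real shared state: if the pod dies, its reports are lost.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mapfishprint      # hypothetical service name
spec:
  selector:
    app: mapfishprint
  ports:
    - port: 8080
      targetPort: 8080
  # Route all requests from one client IP to the same pod,
  # so the status poll reaches the pod that accepted the POST.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # 3 hours
```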

Thanks,
Guillaume.
