
uwsgi config "lazy-apps: yes" "should_start_http_server" return False #31

Closed
xiecang opened this issue Aug 6, 2019 · 9 comments

@xiecang

xiecang commented Aug 6, 2019

"UWsgiPrometheusMetrics().should_start_http_server()" returns False when I set uwsgi's startup parameter "lazy-apps:yes".

How can I use it when setting "lazy-apps:yes" ?
uwsgi config:

uwsgi:
  wsgi: wsgi:app
  http-socket: 0.0.0.0:11010
  processes: 4
  threads: 16
  master: yes
  ignore-write-errors: yes
  ignore-sigpipe: yes
  die-on-term: yes
  wsgi-disable-file-wrapper: yes
  max-requests: 65535
  max-requests-delta: 1024
  log-prefix: uWSGI
  log-date: yes
  log-slow: 10000
  disable-logging: yes
  need-app: true
  reload-mercy: 1
#  lazy-apps: yes

code:

    from flask import Flask
    from prometheus_flask_exporter.multiprocess import UWsgiPrometheusMetrics

    app = Flask(__name__)
    metrics = UWsgiPrometheusMetrics(app)
    metrics.start_http_server(9200)
@rycus86
Owner

rycus86 commented Aug 6, 2019

Hm, I'm not familiar with the setting, let me have a quick look.

@rycus86
Owner

rycus86 commented Aug 6, 2019

One workaround that seems to work is exposing the metrics endpoint on the main application:

from flask import Flask
from prometheus_flask_exporter.multiprocess import UWsgiPrometheusMetrics

app = Flask(__name__)
metrics = UWsgiPrometheusMetrics(app)
metrics.register_endpoint('/metrics')

I'm still testing if the endpoint can be started on a new port with lazy-apps.

@rycus86
Owner

rycus86 commented Aug 6, 2019

See an example in https://github.com/rycus86/prometheus_flask_exporter/blob/master/examples/uwsgi-lazy-apps/server.py and the setup in https://github.com/rycus86/prometheus_flask_exporter/blob/master/examples/uwsgi-lazy-apps/Dockerfile#L14-L16

@rycus86
Owner

rycus86 commented Aug 6, 2019

I haven't found a way to start the endpoint on a separate HTTP server - the master process doesn't seem to run the module. If you know a way, please let me know!
Otherwise, if you're happy with the workaround, please close the issue.

Thanks!

@xiecang
Author

xiecang commented Aug 7, 2019

That works very well!
Thanks!

xiecang closed this as completed Aug 7, 2019
@tlinhart

tlinhart commented Aug 7, 2019

I'm a bit late here, but anyway. Exposing the metrics endpoint on the main app is fine with lazy-apps. However, when for some reason you need to serve the metrics on a different port (e.g. you don't want to expose them to the world), you can't do that. That's exactly my case. In my setup, the app is deployed as a Docker image (nginx -> uWSGI -> Flask app) and I need to serve the metrics on a different port so that they are not exposed to the internet but only to Prometheus running on the internal network. The solution I took is to use multiprocess.MultiProcessCollector from the official Python client in my original Flask app. The other app that exposes the metrics uses the same setting for the prometheus_multiproc_dir environment variable and hence accesses the metrics from my app.
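
A minimal sketch of what that second, metrics-only app could look like, assuming prometheus_multiproc_dir points at the same directory the main app's workers write to (the module name metrics_app and the route are illustrative, not taken from this thread):

    # metrics_app.py -- illustrative name for the metrics-only sidecar app
    from flask import Flask, Response
    from prometheus_client import CollectorRegistry, generate_latest, CONTENT_TYPE_LATEST
    from prometheus_client import multiprocess

    app = Flask(__name__)

    @app.route('/metrics')
    def metrics():
        # build a fresh registry and collect whatever the main app's workers
        # have written into the shared prometheus_multiproc_dir
        registry = CollectorRegistry()
        multiprocess.MultiProcessCollector(registry)
        return Response(generate_latest(registry), mimetype=CONTENT_TYPE_LATEST)

This app can then be served on an internal-only port while the main app stays behind nginx.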

@rycus86
Owner

rycus86 commented Aug 7, 2019

That sounds like a good setup. How's the second app running to expose the metrics endpoint? Is it running in the same container?

@tlinhart

tlinhart commented Aug 7, 2019

Yeah. I use Supervisord to run nginx and uWSGI, so basically I added another program to the mix -- a Flask app exposing the metrics. Another advantage is that this way you can scale your app and the metrics app independently.
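
For reference, a supervisord configuration along those lines might look roughly like this; the program names, ports, paths and the metrics_app module are assumptions for illustration only:

    [program:nginx]
    command=nginx -g "daemon off;"

    [program:app]
    command=uwsgi --yaml /app/uwsgi.yml
    environment=prometheus_multiproc_dir="/tmp/prometheus"

    [program:metrics]
    command=uwsgi --http-socket 0.0.0.0:9200 --module metrics_app:app
    environment=prometheus_multiproc_dir="/tmp/prometheus"

Prometheus would then scrape the metrics program's port over the internal network, while only the main app is reachable through nginx.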

@rycus86
Owner

rycus86 commented Aug 7, 2019

Thanks for sharing!
