ets table ibrowse_stream grows endlessly #157

Open
gaynetdinov opened this issue Sep 18, 2017 · 10 comments

@gaynetdinov

gaynetdinov commented Sep 18, 2017

Hello.

I have an API gateway Elixir app that streams a lot of requests. I've noticed that memory grows constantly over time, and the main consumer is ETS tables. Most of the RAM is eaten by the ibrowse_stream table (around 650 MB).

So I put IO.inspect :ets.info(:ibrowse_stream, :memory) * :erlang.system_info(:wordsize) inside the receive loop, at the point where the process receives the :ibrowse_async_response_end message. It turns out the table grows after each streaming request.

Is that expected behaviour? Should I remove the request id from ETS manually after each streaming request, e.g. :ets.delete(:ibrowse_stream, {:req_id_pid, id})?
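
For reference, a quick way to confirm that stale entries are accumulating is to dump the table after a few requests have finished (a sketch only; the {:req_id_pid, id} key shape comes from the delete call above and isn't verified against ibrowse internals):

entries = :ets.tab2list(:ibrowse_stream)

# how many objects are left after all streams have completed
IO.inspect(length(entries), label: "ibrowse_stream entries")

# peek at a few of them to see which request ids they belong to
IO.inspect(Enum.take(entries, 5), label: "sample entries")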

Thanks.

@cmullaparthi
Owner

No, you shouldn't have to clear out entries in the ETS table after each streaming request. Sounds like a bug - I will investigate.

@cmullaparthi
Owner

I ran a few different requests but couldn't reproduce this behaviour. Can you give me some more help in diagnosing this? What exactly are the options you are invoking ibrowse with?

@gaynetdinov
Author

gaynetdinov commented Sep 26, 2017

Thanks for looking into this issue.

I call HTTPotion like this:

opts = [headers: <...>, timeout: <...>, ibrowse: [max_sessions: 200, max_pipeline_size: 10, stream_to: {self(), :once}]]

case HTTPotion.get(url, opts) do
  %HTTPotion.AsyncResponse{id: id} ->
    async_response(conn, id)
  %HTTPotion.ErrorResponse{message: msg} ->
    # log error
    send_error_response(conn, 502, msg)
end

defp async_response(conn, id) do
  :ok = :ibrowse.stream_next(id)
  
  receive do
    # here a lot of different cases, like matching :ibrowse_async_headers, :ibrowse_async_response_timeout, :connection_closed_no_retry and so on

    {:ibrowse_async_response, ^id, data} ->
      case chunk(conn, data) do
        {:ok, conn} ->
          async_response(conn, id)
        {:error, _reason} ->
          conn
      end
    {:ibrowse_async_response_end, ^id} ->
      # output ibrowse_stream table memory in bytes
      IO.inspect :ets.info(:ibrowse_stream, :memory) * :erlang.system_info(:wordsize)
      conn
  end
end

Then I run my Elixir app locally, fire requests, and see this:

[info] GET /url
<...>
[info] Chunked 200 in 1263ms
2480
[info] GET /url
<...>
[info] Chunked 200 in 331ms
2560
[info] GET /url
<...>
[info] Chunked 200 in 111ms
2640
[info] GET /url
<...>
[info] Chunked 200 in 447ms
2720
[info] GET /url
<...>
[info] Chunked 200 in 397ms
2800
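
(Each request adds 80 bytes to the table in this trace, e.g. 2560 - 2480, which is 10 words on a 64-bit VM — presumably one small leftover object per completed stream, assuming nothing else writes to the table.)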

@elpaisa

elpaisa commented Oct 5, 2017

Any updates on this issue? It seems related to a problem I have: ibrowse is starting connections endlessly, and ranch ends up killing the application because the max acceptors limit is reached.

@cmullaparthi
Owner

I'm sorry I haven't had much time to look into this. Will try this week.

@gaynetdinov
Author

gaynetdinov commented Mar 13, 2018

It seems this issue is easily solved by using :ibrowse.stream_close/1. I wasn't calling it before (my bad), which is why I saw those 'leaks'. After adding it, memory no longer seems to grow; there are still spikes, but after the spikes memory levels return to normal.
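
Roughly, the change was just closing the stream once the end-of-response message arrives (a sketch of the relevant clause from the loop above; the other cases are unchanged):

    {:ibrowse_async_response_end, ^id} ->
      # tell ibrowse the stream is finished so it can release its per-request state
      :ibrowse.stream_close(id)
      conn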

I'll continue monitoring, but most likely this 'issue' is solved.

Thanks again for your work!

@sumerman

Arguably ibrowse should be monitoring the stream_to process.

@cmullaparthi
Owner

@gaynetdinov thank you for diligently following up on this.

@sumerman good point, I'll try to add this functionality. That said, I'll happily accept a pull request too ;-)
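
Rough shape of what that could look like (a sketch in Elixir to match the rest of this thread — ibrowse itself is Erlang — and the table name and key format below are placeholders, not the actual ibrowse internals): monitor the stream_to pid when the request is registered, and drop its entries when a :DOWN message arrives.

defmodule StreamMonitorSketch do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  def init(nil) do
    # placeholder table standing in for ibrowse's internal one
    :ets.new(:stream_table_sketch, [:named_table, :public])
    {:ok, %{}}
  end

  # register a streaming request and monitor the calling process
  def register(req_id, stream_to_pid) do
    GenServer.call(__MODULE__, {:register, req_id, stream_to_pid})
  end

  def handle_call({:register, req_id, pid}, _from, state) do
    Process.monitor(pid)
    :ets.insert(:stream_table_sketch, {{:req_id_pid, req_id}, pid})
    {:reply, :ok, state}
  end

  # when the stream_to process dies, drop every entry it owned so the
  # table cannot grow without bound
  def handle_info({:DOWN, _ref, :process, pid, _reason}, state) do
    :ets.match_delete(:stream_table_sketch, {{:req_id_pid, :_}, pid})
    {:noreply, state}
  end

  def handle_info(_msg, state), do: {:noreply, state}
end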

@gaynetdinov
Author

I'm afraid I have to reopen this issue; I was too quick to conclude that there is no leak anymore.

[screenshot: screen shot 2018-03-20 at 14 06 03 — memory growing again over time]

I'll check whether the ETS table is the reason.

@gaynetdinov gaynetdinov reopened this Mar 20, 2018
@argl

argl commented Sep 6, 2018

Is this still an issue? I have seen similar behaviour in the past, but haven't dug deeper into it and have switched to a different client without streaming for now. I would love to go back though.
