
Removed ability to disable connection pool #337

Merged 7 commits from the TooManyFileDescriptors branch into apache:master on Apr 17, 2017

Conversation

@jai1 jai1 (Contributor) commented Apr 6, 2017

Motivation

Fix for #331

Modifications

Check whether the connection pool is disabled and close the connection if the connection attempt failed.

Result

The client can no longer open an excessive number of connections to the broker.

@jai1 jai1 self-assigned this Apr 6, 2017
@jai1 jai1 added the type/bug label Apr 6, 2017
@jai1 jai1 added this to the 1.18 milestone Apr 6, 2017
@jai1 jai1 requested a review from merlimat April 6, 2017 23:08
@@ -503,6 +503,14 @@ void connectionOpened(final ClientCnx cnx) {
cnx.channel().close();
return null;
}

if (client.getConfiguration().getConnectionsPerBroker() == 0) {
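The hunk is truncated in this capture. Below is a minimal, self-contained sketch of the idea the added block appears to implement, assuming a failed open simply closes the now-unshared connection; ClientCnxLike and onOpenFailed are illustrative stand-ins, not Pulsar APIs:

class ClientCnxLike {
    void closeChannel() {
        // tear down the underlying socket/channel
    }
}

class ConnectionOpenHandler {
    // Called when setting up a producer/consumer over a freshly opened
    // connection fails. With pooling disabled (connectionsPerBroker == 0)
    // nothing else will ever reuse this connection, so close it now
    // instead of leaking a file descriptor on the broker.
    static void onOpenFailed(ClientCnxLike cnx, int connectionsPerBroker) {
        if (connectionsPerBroker == 0) {
            cnx.closeChannel();
        }
        // with pooling on, the cached connection stays available for reuse
    }
}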
Contributor:
Wouldn't it be better to close the connection once it is no longer used, rather than only after errors?

Contributor:

How would this fix cases in which a producer is created and then closed? Or a connection that is only used to do a topic lookup?

Contributor Author:

You are right - the fix won't work in these scenarios. I will try another approach: call close() in the finalize() method of ClientCnx.

Contributor:

uhm... I really don't like to get into the finalizer business... It can be called very unpredictably, and you shouldn't do anything in it that could possibly block.
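A tiny, self-contained illustration of the unpredictability being objected to: the JVM gives no guarantee that finalize() runs promptly, or at all (and the method has been deprecated since Java 9):

class LeakyCnx {
    @Override
    protected void finalize() {
        // May run long after the object becomes unreachable, on an
        // arbitrary JVM thread, or never (e.g. if the process exits first).
        System.out.println("finalize ran");
    }

    public static void main(String[] args) throws InterruptedException {
        new LeakyCnx();
        System.gc();       // only a hint to the collector
        Thread.sleep(100); // "finalize ran" may or may not print
    }
}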

@merlimat merlimat (Contributor) left a comment

I think there are still other cases in which the connections might be leaked:

  • As your initial fix addressed, a producer might fail to be created
  • Connections used for topic and metadata lookups

@@ -430,16 +432,27 @@ void registerConsumer(final long consumerId, final ConsumerImpl consumer) {
consumers.put(consumerId, consumer);
}

void closeIfNotUsedAgain() {
if (producers.size() == 0 && consumers.size() == 0 && isConnectionPoolingDisabled) {
Contributor:
The cheapest check is the final boolean flag, so test it first and let && short-circuit past the map checks:

if (isConnectionPoolingDisabled && producers.isEmpty() && consumers.isEmpty()) {
    ....
}

@jai1 jai1 (Contributor Author) commented Apr 10, 2017

@merlimat
Scenarios:

  1. Pooling OFF:
    a. Creating a producer/consumer fails - handled in connectionOpened()
    b. A lookup or getPartitionedTopicMetadata call fails - handled in findBroker and getPartitionedTopicMetadata
    c. A reconnect fails - handled by calling removeConsumer/removeProducer explicitly on the old connection
    d. A close fails - handled by removeConsumer/removeProducer
    e. A consumer/producer gets GC'd without being closed - there is no way to handle this right now, but we could implement Closeable and explicitly throw IOException in the constructor to force people to use the try-with-resources idiom (see the sketch below) - it's a big change and we need to think about backward compatibility.

  2. Pooling ON:
    a. We are guaranteed to leak at most X resources.
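For scenario 1e, here is a minimal sketch of the try-with-resources idiom referred to above; CloseableProducer is a hypothetical stand-in, not the real Pulsar Producer API:

class CloseableProducer implements AutoCloseable {
    CloseableProducer() throws java.io.IOException {
        // a checked exception in the constructor makes callers handle
        // creation failures, nudging them toward try-with-resources
    }

    @Override
    public void close() {
        // releases the underlying connection deterministically,
        // with no reliance on GC or finalizers
        System.out.println("connection released");
    }

    public static void main(String[] args) throws java.io.IOException {
        // close() runs even if the body throws, so the connection
        // cannot leak the way a GC'd-but-unclosed producer would
        try (CloseableProducer producer = new CloseableProducer()) {
            System.out.println("producing...");
        }
    }
}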

@jai1 jai1 force-pushed the TooManyFileDescriptors branch from 4baa728 to 1dc10ea on April 10, 2017 19:43
@merlimat merlimat (Contributor) commented

@jai1 The only reason to disable connection pooling was for testing purposes: to simulate multiple connections hitting the broker. I don't see any other good reason to disable pooling.

If it's just for tests, then it may make more sense to enforce that the pool is always there (>= 1); if we want to test with many connections, we can just configure a huge number of connections per host in the pool settings. What do you think?
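A hedged sketch of this suggestion, assuming the clamp happens wherever the pool size is read from configuration; effectivePoolSize is illustrative, not the actual change that was merged:

class PoolSizing {
    // Enforce that the pool always exists: a configured value of 0
    // (the old "disable pooling" switch) becomes a pool of size 1.
    // Tests that want many connections simply configure a large value.
    static int effectivePoolSize(int connectionsPerBroker) {
        return Math.max(1, connectionsPerBroker);
    }

    public static void main(String[] args) {
        System.out.println(effectivePoolSize(0));  // -> 1
        System.out.println(effectivePoolSize(50)); // -> 50
    }
}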

@jai1 jai1 force-pushed the TooManyFileDescriptors branch from 3042f7a to f9e0d37 on April 16, 2017 21:35
@jai1 jai1 (Contributor Author) commented Apr 16, 2017

@merlimat - Does this look OK?

@merlimat merlimat (Contributor) left a comment

👍

@merlimat merlimat merged commit 6099e88 into apache:master Apr 17, 2017
@merlimat merlimat changed the title Too many file descriptors Removed ability to disable connection pool Apr 17, 2017
@jai1 jai1 deleted the TooManyFileDescriptors branch April 17, 2017 17:00
hrsakai pushed a commit to hrsakai/pulsar that referenced this pull request Dec 10, 2020
* Avoid producer deadlock on connection closing

* Fixed constants init

* Avoid creating timer instance each time, if channel is not full

* Added debug statements
hangc0276 pushed a commit to hangc0276/pulsar that referenced this pull request May 26, 2021
Fixes apache#336

Besides applying the new package name and new APIs generated by LightProto, this PR also fixes the test errors caused by apache#9240, which introduced a new method, PulsarService#createLocalMetadataStore.

Some tests were also affected by incorrect parsing of entry data after the Pulsar update; this PR fixes those test failures.