PoolOfflineException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN ... #166
I found a way to make it work. What I changed is below. It looks like, in Dyno, RACK == DC, which is weird IMHO; the previous code should have worked. I had to change the code in two places (a full wiring sketch follows the logs below):
import java.util.List;
import com.netflix.dyno.connectionpool.Host;
import com.netflix.dyno.connectionpool.Host.Status;
import com.netflix.dyno.connectionpool.TokenMapSupplier;

private static Host buildHost(DynomiteNodeInfo node) {
    // Note: the third constructor argument is the rack, so the DC value is
    // passed as the rack here (rack == DC, as noted above).
    Host host = new Host(node.getServer(), 8102, node.getDc());
    host.setStatus(Status.Up);
    return host;
}

private static TokenMapSupplier toTokenMapSupplier(List<DynomiteNodeInfo> nodes) {
    // Build the topology JSON by hand: one {token, hostname, zone} entry per node.
    StringBuilder jsonSB = new StringBuilder("[");
    int count = 0;
    for (DynomiteNodeInfo node : nodes) {
        jsonSB.append(" {\"token\":\"" + node.getTokens()
                + "\",\"hostname\":\"" + node.getServer()
                + "\",\"zone\":\"" + node.getDc()
                + "\"} ");
        count++;
        if (count < nodes.size())
            jsonSB.append(" , ");
    }
    jsonSB.append(" ]"); // no stray quote here, otherwise the JSON is invalid
    // ... (rest of the method not shown)
}

Logs - Working :-)

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/diego/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-simple/1.7.21/be4b3c560a37e69b6c58278116740db28832232c/slf4j-simple-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/diego/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.21/7238b064d1aba20da2ac03217d700d91e02460fa/slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
[main] WARN com.netflix.config.sources.URLConfigurationSource - No URLs will be polled as dynamic configuration sources.
[main] INFO com.netflix.config.sources.URLConfigurationSource - To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
[main] INFO com.netflix.config.DynamicPropertyFactory - DynamicPropertyFactory is initialized with configuration sources: com.netflix.config.ConcurrentCompositeConfiguration@56cbfb61
[main] INFO com.netflix.dyno.contrib.ArchaiusConnectionPoolConfiguration - Dyno configuration: CompressionStrategy = NONE
[main] WARN com.netflix.dyno.jedis.DynoJedisClient - DynoJedisClient for app=[DynomiteClusterChecker] is configured for local rack affinity but cannot determine the local rack! DISABLING rack affinity for this instance. To make the client aware of the local rack either use ConnectionPoolConfigurationImpl.setLocalRack() when constructing the client instance or ensure EC2_AVAILABILTY_ZONE is set as an environment variable, e.g. run with -DEC2_AVAILABILITY_ZONE=us-east-1c
[main] INFO com.netflix.dyno.jedis.DynoJedisClient - Starting connection pool for app DynomiteClusterChecker
[pool-3-thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - Adding host connection pool for host: Host [hostname=127.0.0.1, ipAddress=null, port=8102, rack: local-dc, datacenter: local-d, status: Up]
[pool-3-thread-1] INFO com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Priming connection pool for host:Host [hostname=127.0.0.1, ipAddress=null, port=8102, rack: local-dc, datacenter: local-d, status: Up], with conns:3
[pool-3-thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - Successfully primed 3 of 3 to Host [hostname=127.0.0.1, ipAddress=null, port=8102, rack: local-dc, datacenter: local-d, status: Up]
[main] WARN com.netflix.dyno.connectionpool.impl.lb.AbstractTokenMapSupplier - Local Datacenter was not defined
[main] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - registered mbean com.netflix.dyno.connectionpool.impl:type=MonitorConsole
Z: 200
[Thread-1] INFO com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Shutting down connection pool for host:Host [hostname=127.0.0.1, ipAddress=null, port=8102, rack: local-dc, datacenter: local-d, status: Up]
[Thread-1] WARN com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Failed to close connection for host: Host [hostname=127.0.0.1, ipAddress=null, port=8102, rack: local-dc, datacenter: local-d, status: Up] Unexpected end of stream.
[Thread-1] WARN com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Failed to close connection for host: Host [hostname=127.0.0.1, ipAddress=null, port=8102, rack: local-dc, datacenter: local-d, status: Up] Unexpected end of stream.
[Thread-1] WARN com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Failed to close connection for host: Host [hostname=127.0.0.1, ipAddress=null, port=8102, rack: local-dc, datacenter: local-d, status: Up] Unexpected end of stream.
[Thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - Remove host: Successfully removed host 127.0.0.1 from connection pool
[Thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - deregistered mbean com.netflix.dyno.connectionpool.impl:type=MonitorConsole

Cheers,
Diego
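For reference, a minimal sketch of how the two helpers above could be wired into a DynoJedisClient. This is a sketch under assumptions, not the POC's exact code: the builder methods (withApplicationName, withDynomiteClusterName, withHostSupplier, withCPConfig) are the ones on Dyno's DynoJedisClient.Builder, the app/cluster names are placeholders taken from the logs, and HostSupplier's return type may be Collection<Host> rather than List<Host> in some Dyno versions.

import java.util.ArrayList;
import java.util.List;
import com.netflix.dyno.connectionpool.Host;
import com.netflix.dyno.connectionpool.HostSupplier;
import com.netflix.dyno.connectionpool.TokenMapSupplier;
import com.netflix.dyno.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.dyno.jedis.DynoJedisClient;

private static DynoJedisClient buildClient(final List<DynomiteNodeInfo> nodes) {
    // Supply the hosts built by buildHost(...) above.
    HostSupplier hostSupplier = new HostSupplier() {
        @Override
        public List<Host> getHosts() { // Collection<Host> in some Dyno versions
            List<Host> hosts = new ArrayList<>();
            for (DynomiteNodeInfo node : nodes) {
                hosts.add(buildHost(node));
            }
            return hosts;
        }
    };
    TokenMapSupplier tokenSupplier = toTokenMapSupplier(nodes);
    return new DynoJedisClient.Builder()
            .withApplicationName("DynomiteClusterChecker") // placeholder app name
            .withDynomiteClusterName("dynomiteCluster")    // placeholder cluster name
            .withHostSupplier(hostSupplier)
            .withCPConfig(new ConnectionPoolConfigurationImpl("dynomiteCluster")
                    .withTokenSupplier(tokenSupplier))
            .build();
}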
There is already a way to provide the DC: Dyno reads the environment variable from the instance and uses that to determine the datacenter:

dyno/dyno-core/src/main/java/com/netflix/dyno/connectionpool/impl/utils/ConfigUtils.java Lines 27 to 56 in b9feaa6
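Roughly, what those lines do (a paraphrased sketch, not the actual ConfigUtils source; the method names here are illustrative):

// Illustrative sketch of the lookup ConfigUtils performs (names are not the real ones).
static String getLocalZone() {
    // Dyno checks the EC2_AVAILABILITY_ZONE environment variable / system property.
    String zone = System.getenv("EC2_AVAILABILITY_ZONE");
    return zone != null ? zone : System.getProperty("EC2_AVAILABILITY_ZONE");
}

static String getDataCenterFromZone(String zone) {
    // The datacenter is derived from the zone; judging by the log lines above
    // ("rack: local-dc, datacenter: local-d"), the trailing character is dropped,
    // e.g. "us-east-1c" -> "us-east-1".
    return zone == null ? null : zone.substring(0, zone.length() - 1);
}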
You can see that in the WARN message you are getting:
Please feel free to ask any further questions, or close the issue if your question has been answered.
I see, @ipapapa. This makes sense, and we use this in PROD: you can prefer the local zone, so if your microservice is running in us-west-2a and you have Dynomite in us-west-2a, you should pick us-west-2a instead of another zone or region. The code I sent was just a POC to prove a point. What IMHO still looks wrong to me is this:
I think maybe there is a corner case here. If I have a Karyon microservice it works; however, with a plain Java main class it does not work unless I rebuild the whole Dyno connection. This code shows the issue: https://github.com/diegopacheco/netflixoss-pocs/tree/master/dynomite-client-dyno-notrca. If I shut down one of my Dynomite nodes (assuming a 3-node cluster in Docker), Dyno does not fail over to the other nodes unless I rebuild the connection. Strangely, this works in this simple microservice: https://github.com/diegopacheco/netflixoss-pocs/tree/master/karyon-dyno-microservice

First I get this (right after killing a node):

com.netflix.dyno.connectionpool.exception.FatalConnectionException: FatalConnectionException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN, port=0, rack: UNKNOWN, datacenter: UNKNOW, status: Down], latency=0(0), attempts=1]redis.clients.jedis.exceptions.JedisConnectionException: Unexpected end of stream.

Some time later, I start getting this:

com.netflix.dyno.connectionpool.exception.PoolOfflineException: PoolOfflineException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN, port=0, rack: UNKNOWN, datacenter: UNKNOW, status: Down], latency=0(0), attempts=0]host pool is offline and no Racks available for fallback

I debugged ConnectionPoolImpl,
line 283:
RetryPolicy retry = cpConfiguration.getRetryPolicyFactory().getRetryPolicy();
There, cpConfiguration.getRetryPolicyFactory() yields a RunOnce policy (retry count == 1).
This could be the issue: my .setRetryPolicyFactory(new RetryNTimes.RetryFactory(3, true)) is not being applied, and that is why there is no fallback.
If I do:

ConfigurationManager.getConfigInstance().setProperty("dyno.dynomiteCluster.retryPolicy", "RetryNTimes:3:true");

I get the proper retry count, but it is still not failing over as I expect.
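For comparison, both routes in one sketch. The cluster name "dynomiteCluster" and the property key come from the snippets above; the package locations of these classes are my assumption and may differ across Dyno versions.

import com.netflix.config.ConfigurationManager;
import com.netflix.dyno.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.dyno.connectionpool.impl.RetryNTimes;

static ConnectionPoolConfigurationImpl configureRetries() {
    // Programmatic route (the one that did not seem to take effect here):
    ConnectionPoolConfigurationImpl cpConfig = new ConnectionPoolConfigurationImpl("dynomiteCluster");
    cpConfig.setRetryPolicyFactory(new RetryNTimes.RetryFactory(3, true));

    // Archaius property route (the one that did set the retry count):
    ConfigurationManager.getConfigInstance()
            .setProperty("dyno.dynomiteCluster.retryPolicy", "RetryNTimes:3:true");
    return cpConfig;
}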
You can close this bug. It works for me now: https://github.com/diegopacheco/netflixoss-pocs/tree/master/karyon-dyno-microservice. The problem was that I was not setting LOCAL_RACK, and without the local rack the retry/fallback does not happen, since there are no remote hosts to borrow connections from. As soon as I set the local rack, everything started working just fine. Thanks anyway @ipapapa, you can close this now.
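In code, the fix amounts to something like this (a sketch; "us-west-2a" is a placeholder and must match the rack/zone names used in the token map, and setLocalRack is the method the earlier WARN message points at):

import com.netflix.dyno.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.dyno.connectionpool.impl.RetryNTimes;

static ConnectionPoolConfigurationImpl withLocalRack() {
    ConnectionPoolConfigurationImpl cpConfig = new ConnectionPoolConfigurationImpl("dynomiteCluster");
    // Tell Dyno which rack is local; without it, the pool has no remote racks
    // to borrow connections from, so retry/fallback never happens.
    cpConfig.setLocalRack("us-west-2a"); // placeholder; must match a "zone" in the topology JSON
    cpConfig.setRetryPolicyFactory(new RetryNTimes.RetryFactory(3, true));
    return cpConfig;
}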
Actually, you can also set the environment variables to avoid changing the code between local and Karyon. I use the following on my laptop:
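For example, something along these lines (placeholder values, not necessarily the exact variables used here; per the WARN message earlier in the thread, Dyno looks at EC2_AVAILABILITY_ZONE):

# environment variable, picked up by Dyno at startup
export EC2_AVAILABILITY_ZONE=us-east-1c
# or the equivalent JVM system property
java -DEC2_AVAILABILITY_ZONE=us-east-1c -jar my-app.jar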
@ipapapa
Running Dynomite locally as a single node with this config.
Running dyno 1.5.7 with this code:
It was all working fine with dyno 1.5.1, but when I changed to dyno 1.5.7 I started getting this exception.
LOGS
Cheers,
Diego Pacheco