
ECONNRESET - Session Expired #242

Closed
jremyf opened this issue May 9, 2017 · 16 comments


jremyf commented May 9, 2017

Hi!
I am building a web application based on Node.js and AngularJS. The application is deployed on Azure, and I also have a virtual machine running Neo4j.
My requests through the driver work fine, except that after a certain amount of time I get this error:

[screenshot of the error]

Here is how I instantiate the driver :

var neo4j = require('neo4j-driver').v1;
var driver = neo4j.driver("bolt://server.adress", neo4j.auth.basic("login", "password"));
module.exports = driver;

Here is how I query the server. For each request that is sent, I create a new session and close it right after.

var driver = require('driver_neo4j');

this.method = function() {
  var session = driver.session();
  session.run("REQUEST")
    .then(function(result) {
      session.close();
    });
};

I found nothing weird in the Neo4j server logs.
It looks like it is related to #144.
If you have any ideas to solve this problem, I'll take them!

Update:
I upgraded neo4j-driver from version 1.2 to 1.3.
I now get the exact same error stack, but with the code 'ServiceUnavailable'.
@Lahpman7 This error is happening every 5 to 10 minutes in my application.


ghost commented May 11, 2017

I too am having this issue. I don't really have a solution, but a temporary workaround is to set up a script that restarts your app (dynos in my case; I am working with Heroku) every half an hour.

@lutovich
Contributor

Hi @jremyf, @Lahpman7.

Thanks for reporting this problem. It looks like a read ECONNRESET error can only happen when the server abruptly closes the connection while the client is reading. Could you please attach neo4j.log, debug.log and security.log for the relevant timeframe to this issue?

I think this error might also be caused by a load balancer that closes connections after some period of inactivity. Do you know if there is something like that in your network setup?

If you want this problem to go away, the transaction functions API available in driver 1.2+ might help. It retries the given transaction with exponential backoff on network errors. The relevant API functions are session.readTransaction(tx => {...}) and session.writeTransaction(tx => {...}); the readme and the developer manual contain more info about them. It would be nice to get to the bottom of this issue, though.
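
A rough sketch of what that might look like (the query, parameters and result handling here are only illustrative):

// assumes a driver instance created as usual with neo4j.driver(...)
var session = driver.session();

session.writeTransaction(function (tx) {
  // the whole function is retried with exponential backoff on network errors
  return tx.run('MERGE (p:Person {name: {name}}) RETURN p', {name: 'Alice'});
}).then(function (result) {
  session.close();
  console.log('wrote ' + result.records.length + ' record(s)');
}).catch(function (error) {
  session.close();
  console.error(error);
});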

Looking forward to your reply.


ghost commented May 16, 2017

Hi @lutovich, unfortunately, because I am using Heroku with GrapheneDB, I only have access to messages.log and neo4j.log, and both are free of errors. The only error I am getting (after around 20-30 minutes) is the one below:

2017-05-16T04:20:43.420228+00:00 app[web.1]: { Error: This socket has been ended by the other party
2017-05-16T04:20:43.420246+00:00 app[web.1]:     at new Neo4jError (/app/node_modules/neo4j-driver/lib/v1/error.js:65:132)
2017-05-16T04:20:43.420247+00:00 app[web.1]:     at newError (/app/node_modules/neo4j-driver/lib/v1/error.js:55:10)
2017-05-16T04:20:43.420248+00:00 app[web.1]:     at NodeChannel._handleConnectionError (/app/node_modules/neo4j-driver/lib/v1/internal/ch-node.js:322:41)
2017-05-16T04:20:43.420249+00:00 app[web.1]:     at emitOne (events.js:101:20)
2017-05-16T04:20:43.420249+00:00 app[web.1]:     at TLSSocket.emit (events.js:188:7)
2017-05-16T04:20:43.420250+00:00 app[web.1]:     at TLSSocket.writeAfterFIN [as write] (net.js:297:8)
2017-05-16T04:20:43.420251+00:00 app[web.1]:     at NodeChannel.write (/app/node_modules/neo4j-driver/lib/v1/internal/ch-node.js:355:20)
2017-05-16T04:20:43.420252+00:00 app[web.1]:     at Chunker.flush (/app/node_modules/neo4j-driver/lib/v1/internal/chunking.js:120:18)
2017-05-16T04:20:43.420252+00:00 app[web.1]:     at Connection.sync (/app/node_modules/neo4j-driver/lib/v1/internal/connector.js:523:21)
2017-05-16T04:20:43.420253+00:00 app[web.1]:     at /app/node_modules/neo4j-driver/lib/v1/session.js:123:16 code: 'SessionExpired' }

I guess I could give transaction functions a shot, but I would like to try to resolve this before swapping all the syntax in my project.
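
That said, a small wrapper might keep most call sites unchanged; something along these lines (purely illustrative and untested):

// assumes a driver instance created elsewhere with neo4j.driver(...)
function runWrite(query, params) {
  var session = driver.session();
  // writeTransaction retries the work function on network errors,
  // and the session is closed whether the query succeeds or fails
  return session.writeTransaction(function (tx) {
    return tx.run(query, params);
  }).then(function (result) {
    session.close();
    return result;
  }).catch(function (error) {
    session.close();
    throw error;
  });
}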

Thanks for the response

@lutovich
Contributor

Hi @Lahpman7,

The database can close driver connections on protocol errors or unsuccessful authentication, and it writes log entries about such events. So if there is nothing like that in debug.log, something else must be closing those connections.

What kind of dynos do you use? The docs mention dyno sleeping that happens after 30 minutes of inactivity. Could that be what you are experiencing?


jremyf commented May 16, 2017

Hello @lutovich ,

I didn't install any load balancing system on my Azure VM, and there doesn't seem to be one by default.

To capture the logs, I did the following steps:

  • Restarting the node.js server
  • Restarting the server
  • Sending a MATCH MERGE request
  • Waiting 5 minutes
  • Sending it again

The first one always works, and the second one crashes, as expected.
I am still using the Community Edition, so I don't have security.log and I can't set any roles. Nothing relevant appears in the other logs.
neo4j.log

[screenshot of neo4j.log]

debug.log

[screenshot of debug.log]

I did the same process with a request sent through a transaction function. It actually works, but takes more than 20 seconds, and I get the following error in debug.log (it is the same output as if I had restarted my node.js server):

[screenshot of debug.log output]

Maybe you want me to test this with an enterprise edition to get more information?
Thanks for helping!

Update:
After your response I found this:

[screenshot: Azure idle timeout]

I will check the Azure configuration on my side.


ghost commented May 17, 2017

@lutovich
I am paying for the hobby features on Heroku (so basically it doesn't sleep), so it probably isn't a sleeping issue.
Also, the GrapheneDB team kindly released the debug.log to me, and this was the only error I could find (this is one occurrence; there are about four others scattered around the log):

2017-05-14 20:00:50.278+0000 ERROR [o.n.b.v.t.BoltProtocolV1] Failed to write response to driver Cannot write to buffer when closed
java.io.IOException: Cannot write to buffer when closed
  at org.neo4j.bolt.v1.transport.ChunkedOutput.ensure(ChunkedOutput.java:163)
  at org.neo4j.bolt.v1.transport.ChunkedOutput.writeShort(ChunkedOutput.java:94)
  at org.neo4j.bolt.v1.packstream.PackStream$Packer.packStructHeader(PackStream.java:330)
  at org.neo4j.bolt.v1.messaging.PackStreamMessageFormatV1$Writer.handleSuccessMessage(PackStreamMessageFormatV1.java:150)
  at org.neo4j.bolt.v1.messaging.msgprocess.MessageProcessingCallback.completed(MessageProcessingCallback.java:91)
  at org.neo4j.bolt.v1.messaging.msgprocess.MessageProcessingCallback.completed(MessageProcessingCallback.java:31)
  at org.neo4j.bolt.v1.runtime.internal.SessionStateMachine.after(SessionStateMachine.java:907)
  at org.neo4j.bolt.v1.runtime.internal.SessionStateMachine.run(SessionStateMachine.java:727)
  at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorkerFacade.lambda$run$1(SessionWorkerFacade.java:69)
  at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorker.execute(SessionWorker.java:116)
  at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorker.run(SessionWorker.java:77)
  at java.lang.Thread.run(Thread.java:748)
2017-05-14 20:00:50.279+0000 ERROR [o.n.b.v.t.BoltProtocolV1] Failed to write response to driver Cannot write to buffer when closed
java.io.IOException: Cannot write to buffer when closed
  at org.neo4j.bolt.v1.transport.ChunkedOutput.ensure(ChunkedOutput.java:163)
  at org.neo4j.bolt.v1.transport.ChunkedOutput.writeShort(ChunkedOutput.java:94)
  at org.neo4j.bolt.v1.packstream.PackStream$Packer.packStructHeader(PackStream.java:330)
  at org.neo4j.bolt.v1.messaging.PackStreamMessageFormatV1$Writer.handleSuccessMessage(PackStreamMessageFormatV1.java:150)
  at org.neo4j.bolt.v1.messaging.msgprocess.MessageProcessingCallback.completed(MessageProcessingCallback.java:91)
  at org.neo4j.bolt.v1.messaging.msgprocess.MessageProcessingCallback.completed(MessageProcessingCallback.java:31)
  at org.neo4j.bolt.v1.runtime.internal.SessionStateMachine.after(SessionStateMachine.java:907)
  at org.neo4j.bolt.v1.runtime.internal.SessionStateMachine.pullAll(SessionStateMachine.java:738)
  at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorkerFacade.lambda$pullAll$2(SessionWorkerFacade.java:75)
  at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorker.execute(SessionWorker.java:116)
  at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorker.executeBatch(SessionWorker.java:102)
  at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorker.run(SessionWorker.java:82)
  at java.lang.Thread.run(Thread.java:748)


jremyf commented May 19, 2017

Hello @lutovich ,

On Azure, I can only have an idle timeout of 30 minutes maximum. After this delay, the connection will be closed.

So I'm wondering whether a keep-alive mechanism in the driver could be a solution, because we have no control over this timeout apart from some database configuration.

For now, I am sending a small request every 3 minutes to work around this issue.
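
Roughly like this (the module path, query and interval are only illustrative):

var driver = require('driver_neo4j');

// run a trivial query on an interval so the connection never sits idle long
// enough for the Azure idle timeout to kick in
setInterval(function () {
  var session = driver.session();
  session.run('RETURN 1')
    .then(function () { session.close(); })
    .catch(function (err) {
      session.close();
      console.warn('keep-alive query failed: ' + err.code);
    });
}, 3 * 60 * 1000); // every 3 minutes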

Thanks for helping.

@lutovich
Contributor

@jremyf @Lahpman7 it does sound like some keep-alive or ping functionality could be useful here. It would probably also be possible to do something simpler and just close connections that have been idle for more than a configurable amount of time. That approach also avoids the TCP retransmissions that a ping can run into.

We will investigate how to add such a feature in upcoming releases.


agalazis commented Jul 13, 2017

Same issue with the AWS Marketplace version. Maybe something is under-tuned in the cloud versions? Is there any configuration we can do on the server? It used to work fine in the Docker container.


viz commented Aug 6, 2017

Same issue using neo4j-driver 1.4.0 in an AWS Lambda function running Node 6.10.
The error is:

error creating new order: { Neo4jError: read ECONNRESET
at Neo4jError.Error (native)
at new Neo4jError (/var/task/node_modules/neo4j-driver/lib/v1/error.js:76:132)
at newError (/var/task/node_modules/neo4j-driver/lib/v1/error.js:66:10)
at NodeChannel._handleConnectionError (/var/task/node_modules/neo4j-driver/lib/v1/internal/ch-node.js:328:41)
at emitOne (events.js:101:20)
at TLSSocket.emit (events.js:188:7)
at emitErrorNT (net.js:1277:8)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickDomainCallback (internal/process/next_tick.js:128:9) code: 'ServiceUnavailable', name: 'Neo4jError' }

@arboreal84 on Stack Overflow suggested this may be related to Node.js event loop starvation:
https://stackoverflow.com/questions/44251168/neo4j-javascript-bolt-driver-econnreset-buffer-closed,
https://stackoverflow.com/questions/29812692/node-js-server-timeout-problems-ec2-express-pm2/43806215#43806215

Not sure if this helps to narrow down the problem...


viz commented Aug 6, 2017

With some further investigation into the possibility of event loop starvation, I noticed that the only place in the driver that uses process.nextTick is in the promise-related code, so I'm wondering whether the use of the promise approach (session.run().then(...)) is a common thread among those experiencing this problem. It certainly applies to me...

Update: I've now seen this error in both promise-style and event-emitter-style code, so disregard this thought :-)


viz commented Aug 7, 2017

I just saw it again after setting NODE_DEBUG=tls,net.
Here are the more detailed log messages:

START RequestId: 1a1379f8-7b0b-11e7-a16a-bf4cea59a665 Version: $LATEST
2017-08-07T00:55:27.059Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: emit close
2017-08-07T00:55:27.140Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: afterWrite 0
2017-08-07T00:55:27.160Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: afterWrite call cb
2017-08-07T00:55:27.160Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: onread -104
2017-08-07T00:55:27.160Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: destroy
2017-08-07T00:55:27.160Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: close
2017-08-07T00:55:27.160Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: close handle
2017-08-07T00:55:27.161Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: destroy
2017-08-07T00:55:27.161Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: already destroyed, fire error callbacks
2017-08-07T00:55:27.161Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: onSocketFinish
2017-08-07T00:55:27.162Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: oSF: ended, destroy ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: BufferList { head: null, tail: null, length: 0 },
length: 0,
pipes: null,
pipesCount: 0,
flowing: true,
ended: false,
endEmitted: false,
reading: true,
sync: false,
needReadable: true,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
defaultEncoding: 'utf8',
ranOut: false,
awaitDrain: 0,
readingMore: false,
decoder: null,
encoding: null }
2017-08-07T00:55:27.162Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: destroy undefined
2017-08-07T00:55:27.162Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: destroy
2017-08-07T00:55:27.162Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: already destroyed, fire error callbacks
2017-08-07T00:55:27.163Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	error creating new order: { Neo4jError: read ECONNRESET
at Neo4jError.Error (native)
at new Neo4jError (/var/task/node_modules/neo4j-driver/lib/v1/error.js:76:132)
at newError (/var/task/node_modules/neo4j-driver/lib/v1/error.js:66:10)
at NodeChannel._handleConnectionError (/var/task/node_modules/neo4j-driver/lib/v1/internal/ch-node.js:328:41)
at emitOne (events.js:101:20)
at TLSSocket.emit (events.js:188:7)
at emitErrorNT (net.js:1277:8)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickDomainCallback (internal/process/next_tick.js:128:9) code: 'ServiceUnavailable', name: 'Neo4jError' }
END RequestId: 1a1379f8-7b0b-11e7-a16a-bf4cea59a665
REPORT RequestId: 1a1379f8-7b0b-11e7-a16a-bf4cea59a665	Duration: 160.79 ms	Billed Duration: 200 ms Memory Size: 128 MB	Max Memory Used: 60 MB	

compared to a successful request:

START RequestId: 11ded360-7b08-11e7-a4b5-f3b03c692e88 Version: $LATEST
2017-08-07T00:33:46.441Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: pipe false undefined
2017-08-07T00:33:46.441Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: connect: find host hobby-xxx.dbs.graphenedb.com
2017-08-07T00:33:46.442Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: connect: dns options { family: undefined, hints: 32 }
2017-08-07T00:33:46.541Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:46.541Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read wait for connection
2017-08-07T00:33:46.885Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterConnect
2017-08-07T00:33:46.886Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	TLS 1: start
2017-08-07T00:33:46.886Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:46.886Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: Socket._read readStart
2017-08-07T00:33:47.303Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	TLS 1: secure established
2017-08-07T00:33:47.306Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite 0
2017-08-07T00:33:47.306Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite call cb
2017-08-07T00:33:47.307Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite 0
2017-08-07T00:33:47.307Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite call cb
2017-08-07T00:33:47.512Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: onread 4
2017-08-07T00:33:47.512Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: got data
2017-08-07T00:33:47.513Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:47.727Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: onread 26
2017-08-07T00:33:47.727Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: got data
2017-08-07T00:33:47.729Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:47.730Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite 0
2017-08-07T00:33:47.730Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite call cb
2017-08-07T00:33:48.807Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: onread 378
2017-08-07T00:33:48.807Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: got data
2017-08-07T00:33:48.810Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:49.061Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: pipe false undefined
2017-08-07T00:33:49.062Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: connect: find host sns.ap-southeast-2.amazonaws.com
2017-08-07T00:33:49.062Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: connect: dns options { family: undefined, hints: 32 }
2017-08-07T00:33:49.102Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:49.102Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read wait for connection
2017-08-07T00:33:49.160Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite 0
2017-08-07T00:33:49.160Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite call cb
2017-08-07T00:33:49.160Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: onread 7
2017-08-07T00:33:49.160Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: got data
2017-08-07T00:33:49.160Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:49.180Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterConnect
2017-08-07T00:33:49.180Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	TLS 1: start
2017-08-07T00:33:49.180Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:49.180Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: Socket._read readStart
2017-08-07T00:33:49.204Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	TLS 1: secure established
2017-08-07T00:33:49.205Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite 0
2017-08-07T00:33:49.205Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: afterWrite call cb
2017-08-07T00:33:49.293Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: onread 157
2017-08-07T00:33:49.293Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: got data
2017-08-07T00:33:49.295Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:49.295Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: onread 294
2017-08-07T00:33:49.295Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: got data
2017-08-07T00:33:49.340Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: _read
2017-08-07T00:33:49.462Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: destroy undefined
2017-08-07T00:33:49.462Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: destroy
2017-08-07T00:33:49.462Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: close
2017-08-07T00:33:49.462Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	NET 1: close handle
2017-08-07T00:33:49.462Z	11ded360-7b08-11e7-a4b5-f3b03c692e88	published notification of order: { instrumentId: '5930d3d705d3f0110512960e',
id: 'f09fe003-b6fd-49dc-ba49-746fae3e9699',
principalAmount: '45500',
status: 'Submitted' }
END RequestId: 11ded360-7b08-11e7-a4b5-f3b03c692e88
REPORT RequestId: 11ded360-7b08-11e7-a4b5-f3b03c692e88	Duration: 3320.82 ms	Billed Duration: 3400 ms Memory Size: 128 MB	Max Memory Used: 55 MB	

Note that in the successful request the Neo4j query is followed by a message being published to an SNS topic.

Hopefully this will help someone work out what's going on - it's beyond me :-(


ghost commented Aug 7, 2017

2017-08-07T00:55:27.160Z	1a1379f8-7b0b-11e7-a16a-bf4cea59a665	NET 1: onread -104

That debug message shows the number of bytes returned by read.
In this case it is a negative number, and a negative number means an error occurred.
errno 104 is Connection reset by peer.

#define ECONNRESET  104 /* Connection reset by peer */

This means the remote peer is closing the connection. Everything after that is cleanup.

Now, why is this happening? I would follow the TCP stream with a tool like Wireshark or tcpdump. You can use a Wireshark filter to monitor a specific port (tcp.port == <port number>) and look for the packet that resets the connection, i.e. a packet with the RST flag set (tcp.flags.reset == 1).
Once you have found that packet on that port, follow the TCP stream and see what data was sent.


viz commented Aug 8, 2017

Thanks @arboreal84 - the challenge is that this is happening in an AWS Lambda function, so there is no opportunity to use Wireshark :-(

I'll look into trying to replicate it locally, but I'm not sure whether that will be possible or whether it will tell me much given the difference in infrastructure configuration.

I'll try attacking it from the other end and see if GrapheneDB can help.


agalazis commented Aug 8, 2017

In my case the issue was on the database side and I could increase the memory settings. I guess cloud setups (including the official Community Edition on the AWS Marketplace) are not optimised to take full advantage of the host machine unless you manually configure them to achieve it, which somewhat defeats the purpose of installing from the marketplace.

@zhenlineo
Contributor

Hi,
This issue has been sitting here for a while. For the initially suggested cause, i.e. the TCP connection timeout problem, we have already added connection pool management (https://neo4j.com/docs/developer-manual/current/drivers/client-applications/#driver-config-connection-pool-management) to address it.
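
As a rough sketch of that configuration (the values are only illustrative, and the option requires a driver version that supports the connection pool settings, such as maxConnectionLifetime):

var neo4j = require('neo4j-driver').v1;

// connections older than maxConnectionLifetime are closed and replaced instead
// of being reused after an intermediary (e.g. a load balancer or an idle
// timeout) may have silently dropped them
var driver = neo4j.driver('bolt://server.address', neo4j.auth.basic('login', 'password'), {
  maxConnectionLifetime: 30 * 60 * 1000 // 30 minutes, in milliseconds
});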

I am closing this issue as the initial cause has been addressed. If you have other problems, please feel free to open new issues.
Cheers,
Zhen
