Error during socket read: End of file #3

Closed

Gauravshah opened this issue Jun 13, 2015 · 8 comments

Comments

@Gauravshah

I am trying to use the jar as a library, and I am getting the following exceptions:

E0612 22:21:52.243887 120803328 io_service_socket.h:196] Error during socket read: End of file; 0 bytes read so far

E0612 22:23:36.134788 120803328 io_service_socket.h:169] Error during socket write: Broken pipe; 0 bytes out of 816 written

Environment:

  • JRuby 1.7.12, 1.9 mode
  • JDK "1.8.0_40-ea"
  • Mac OSX 10.10.3
  • amazon-kinesis-producer-0.9.0.jar
  • protobuf-java-2.6.1.jar
  • guava-18.0.jar
  • commons-lang-2.6.jar
  • commons-compress-1.9.jar
  • commons-io-2.4.jar
  • slf4j-simple-1.7.12.jar
  • slf4j-api-1.7.12.jar

Code:

    # Configure and create the KPL producer, then put a single record.
    require 'json'  # needed for data.to_json below

    java_import 'com.amazonaws.kinesis.producer.Configuration'
    java_import 'com.amazonaws.kinesis.producer.KinesisProducer'
    java_import 'java.nio.ByteBuffer'
    java_import 'com.google.common.util.concurrent.FutureCallback'
    java_import 'com.google.common.util.concurrent.Futures'

    kinesis_config = Configuration.new
    kinesis_config.setRegion('us-east-1')
    kinesis_config.setAwsAccessKeyId("xxxx")
    kinesis_config.setAwsSecretKey("xxxx")
    kinesis_config.setMaxConnections(1)
    kinesis_config.setRequestTimeout(60000)
    kinesis_config.setRecordMaxBufferedTime(101)
    @kinesis_producer = KinesisProducer.new(kinesis_config)

    data = {}
    data[:foo] = :bar
    b = ByteBuffer.wrap(data.to_json.to_java_bytes)
    # event_id (used as the partition key) is defined elsewhere in the application
    f = @kinesis_producer.addUserRecord("stream_name", event_id.to_s, b)

@kevincdeng
Contributor

Hi Gaurav,

These are usually caused by the server closing sockets. The requests will be retried, so there should be no loss of data.

We are aware of the issue and will be working on a fix.
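
Since the snippet above already imports Futures and FutureCallback, one way to confirm that retried puts eventually succeed is to attach a callback to the future returned by addUserRecord. The sketch below is only illustrative: it assumes the 0.10.x-style API, where addUserRecord returns a Guava ListenableFuture carrying a UserRecordResult (getShardId, getAttempts), and PutCallback is a made-up helper class, not part of the KPL.

    # Sketch: observe per-record results via the Guava future (assumes 0.10.x KPL API).
    class PutCallback
      # JRuby lets a plain Ruby class implement a Java interface by including it.
      include FutureCallback

      def onSuccess(result)
        # result is a UserRecordResult; getAttempts shows how many tries were needed.
        puts "Put succeeded on shard #{result.getShardId} after #{result.getAttempts.size} attempt(s)"
      end

      def onFailure(throwable)
        # Only records that exhausted their retries end up here.
        puts "Put failed: #{throwable.getMessage}"
      end
    end

    f = @kinesis_producer.addUserRecord("stream_name", event_id.to_s, b)
    Futures.addCallback(f, PutCallback.new)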

@Gauravshah
Author

Hi Kevin,

Sure, thanks for the update.

@andrepintorj

Hi Kevin, any update regarding this issue? I'm also facing this error message.

@kevincdeng
Contributor

Hi Andre,

This will be addressed in the upcoming release (unfortunately we can't give dates, but it will be in the near future).

@kevincdeng
Contributor

The 0.10.0 release is now available and fixes this issue.

@pmogren

pmogren commented Aug 20, 2015

I see this error on 0.10.1.

@kevincdeng
Contributor

Hi Paul,

That would mean that the socket was actually closed unexpectedly. This could happen due to various networking issues. Are they frequent or just occasional? Occasional connection drops should not be an issue since there are automatic retries.

@pmogren

pmogren commented Aug 20, 2015

I see it frequently, but at this point it's an untuned proof-of-concept application that is sending far more records to the KPL than the shard will accept, which may have something to do with it. I'll follow up in the future if it seems like a real problem. Thanks.
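
For a proof of concept that intentionally overdrives a single shard, a simple form of back-pressure can keep the KPL's internal buffer from growing without bound. The sketch below assumes the 0.10.x getOutstandingRecordsCount() accessor on KinesisProducer; put_with_backpressure and MAX_OUTSTANDING are illustrative names, and the threshold is an arbitrary example, not a recommended value.

    # Sketch: block before adding more records once too many are buffered in the KPL.
    MAX_OUTSTANDING = 5000  # arbitrary example threshold

    def put_with_backpressure(producer, stream, partition_key, bytes)
      # getOutstandingRecordsCount reports records the KPL has accepted but not yet completed.
      while producer.getOutstandingRecordsCount > MAX_OUTSTANDING
        sleep 0.1  # give the KPL time to drain before queueing more
      end
      producer.addUserRecord(stream, partition_key, ByteBuffer.wrap(bytes))
    end

    # Usage, reusing the objects from the snippet at the top of the thread:
    put_with_backpressure(@kinesis_producer, "stream_name", event_id.to_s, data.to_json.to_java_bytes)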
