Remove the Version Range for Dependencies #106

Merged

pfifer merged 1 commit into awslabs:master on May 16, 2017

Conversation

pfifer (Contributor) commented on May 16, 2017

Removed the version range specifier for aws-java-sdk-core.

See PR #84
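For context, this is the kind of pom.xml change the PR describes. The "before" range below is purely illustrative (the actual range lived in the project's pom.xml), while 1.11.128 is the pinned version named in the 0.12.4 changelog:

```xml
<!-- Before (illustrative range): an open-ended range lets the resolved SDK version drift between builds -->
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-core</artifactId>
  <version>[1.11.0, 2.0.0)</version>
</dependency>

<!-- After: a single pinned version, per the 0.12.4 changelog below -->
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-core</artifactId>
  <version>1.11.128</version>
</dependency>
```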

pfifer added this to the v0.12.4 milestone on May 16, 2017
pfifer merged commit fae974f into awslabs:master on May 16, 2017
pfifer added a commit that referenced this pull request on May 17, 2017
=== 0.12.4

==== Java

* Upgraded the dependency on aws-java-sdk-core to 1.11.128 and removed the version range.
  * [PR #84](#84)
  * [PR #106](#106)
* Use an explicit lock file to manage access to the native KPL binaries (see the file-lock sketch after this list).
  * [Issue #91](#91)
  * [PR #92](#92)
* Log reader threads are now shut down when the native process exits (see the shutdown sketch after this list).
  * [Issue #93](#93)
  * [PR #94](#94)
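The lock-file item above (Issue #91, PR #92) concerns coordinating access to the extracted native binaries across processes. The following is only a minimal sketch of the general java.nio file-lock technique, not the KPL's actual code; the lock-file path is made up:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class NativeBinaryLockSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical lock file guarding extraction/use of a native binary.
        try (RandomAccessFile file = new RandomAccessFile("/tmp/kpl-native.lock", "rw");
             FileChannel channel = file.getChannel();
             FileLock lock = channel.lock()) { // blocks until an exclusive lock is held
            // Safe to extract or launch the native binary here; other processes
            // locking the same file will wait until this block exits.
            System.out.println("holding lock: " + lock.isValid());
        } // lock released and file closed automatically
    }
}
```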
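The reader-thread item (Issue #93, PR #94) is about not leaking the threads that drain the child process's output after it exits. A hedged sketch of that general pattern, independent of the KPL's actual classes (the child command and thread name are illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class LogReaderShutdownSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // "cat" is only a stand-in for the native child process.
        Process child = new ProcessBuilder("cat").start();

        Thread stdoutReader = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(child.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println("[child] " + line);
                }
            } catch (IOException ignored) {
                // stream closes when the child exits
            }
        }, "child-stdout-reader");
        stdoutReader.setDaemon(true);
        stdoutReader.start();

        child.getOutputStream().close(); // let the child see EOF and exit
        int code = child.waitFor();      // once the child exits, the reader's stream ends
        stdoutReader.join(5000);         // the reader thread terminates shortly after
        System.out.println("child exited with " + code);
    }
}
```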

==== C++ Core

* Add support for using a thread pool, instead of a thread per request (see the configuration sketch after this list).
  The thread pool model guarantees a fixed number of threads, but may have trouble catching up if the KPL is overloaded.
  * [PR #100](#100)
* Add log messages and statistics about sending data to Kinesis.
  * Added flush statistics that record the count of events that trigger flushes of data destined for Kinesis.
  * Added a log message that indicates the average time it takes for a PutRecords request to be completed.

      This time is recorded from when the request is enqueued to when it is completed.
  * Log a warning if the average request time rises above five times the configured flush interval.

      If you see this warning, it normally indicates that the KPL is having trouble keeping up. The most likely
      cause is too many requests being generated; investigate the flush triggers to determine why flushes
      are being triggered.
  * [PR #102](#102)
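The pooled model described above is selected from the Java-side producer configuration. A minimal sketch of that usage; the setter and enum names here are assumptions based on the changelog entry, so verify them against the released KinesisProducerConfiguration and its sample properties file:

```java
import com.amazonaws.services.kinesis.producer.KinesisProducer;
import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;

public class ThreadPoolConfigSketch {
    public static void main(String[] args) {
        // NOTE: setThreadingModel/setThreadPoolSize (and the nested ThreadingModel
        // enum) are assumptions based on the feature description above; check the
        // released KinesisProducerConfiguration for the exact names.
        KinesisProducerConfiguration config = new KinesisProducerConfiguration()
                .setRegion("us-west-2")
                .setThreadingModel(KinesisProducerConfiguration.ThreadingModel.POOLED)
                .setThreadPoolSize(32); // fixed upper bound on request threads

        KinesisProducer producer = new KinesisProducer(config);
        // ... addUserRecord(...) calls as usual ...
        producer.flushSync();
        producer.destroy();
    }
}
```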