InvalidSignatureException: Signature expired #527

Closed
evansolomon opened this issue Mar 9, 2015 · 26 comments · Fixed by #706
Labels
guidance Question that needs advice or information.

Comments

@evansolomon
Contributor

First, I know there have been several similar issues in the past. I've tried to read through them all, but they all seem to be either already closed or about different problems.

I have a bunch of apps that run on AWS + Docker and at some point will randomly start throwing these errors. I think the error usually comes up when I haven't worked on the app for a couple days, so the "expiration" is pretty extreme. For example, the one I just got is "expired" by 2 days.

Signature expired: 20150307T194740Z is now earlier than 20150309T203545Z

Some details that are probably relevant:

  • These apps are using static API keys in ~/.aws/
  • The docker container links the .aws directory along the lines of docker run --volume "${HOME}/.aws":/home/app/.aws whatever-image-name
  • Eventually through some random combination of re-installing dependencies, running the app outside of docker, and reading chicken entrails, things do start working again — unfortunately, I can never figure out exactly why

I've dug through the internals of the SDK a bit, but I don't quite know what I'm looking for and hoping you can point me in a more productive direction.

Currently I'm using v2.1.14 (I know there are a couple of newer patch releases from the last couple of days). Here is a recent stack trace.

InvalidSignatureException: Signature expired: 20150307T194740Z is now earlier than 20150309T203545Z (20150309T204045Z - 5 min.)
  at Request.extractError (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/protocol/json.js:43:27)
  at Request.callListeners (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/sequential_executor.js:100:18)
  at Request.emit (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
  at Request.emit (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/request.js:604:14)
  at Request.transition (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/request.js:21:12)
  at AcceptorStateMachine.runTo (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/state_machine.js:14:12)
  at /home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/state_machine.js:26:10
  at Request.<anonymous> (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/request.js:22:9)
  at Request.<anonymous> (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/request.js:606:12)
  at Request.callListeners (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/sequential_executor.js:104:18)
  at Request.emit (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
  at Request.emit (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/request.js:604:14)
  at Request.transition (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/request.js:21:12)
  at AcceptorStateMachine.runTo (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/state_machine.js:14:12)
  at /home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/state_machine.js:26:10
  at Request.<anonymous> (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/request.js:22:9)
  at Request.<anonymous> (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/request.js:606:12)
  at Request.callListeners (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/sequential_executor.js:104:18)
  at callNextListener (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/sequential_executor.js:90:14)
  at IncomingMessage.onEnd (/home/app/contents/node_modules/galileofive-common/node_modules/aws-sdk/lib/event_listeners.js:183:11)
  at IncomingMessage.emit (events.js:117:20)
  at _stream_readable.js:943:16
  at process._tickDomainCallback (node.js:463:13)
@odeke-em

Just curious: are you able to synchronize your NTP server/clock with AWS's? Several months ago I encountered this error a couple of times when creating signatures on my local machine. Hopefully this doc will be relevant: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html

@evansolomon
Contributor Author

I've looked into the clock thing before but don't think that's it. In the stack trace above, the date is off by like 48 hours, so that would be a lot of clock skew.

@lsegal
Contributor

lsegal commented Mar 10, 2015

@evansolomon I'm not sure why clock skew would be ruled out; it seems to coincide precisely with the usage pattern you described above:

I think the error usually comes up when I haven't worked on the app for a couple days,

If Docker images only resync NTP when they are re-activated through commands and they have not been activated in multiple days, that would explain the skew pretty accurately. The SDK definitely relies on "current system time" when making requests, so it needs to be synced prior to sending requests. You might see the exact same behavior on a development machine right after waking the machine up from sleep (prior to clock sync), for example.

I would recommend checking the output of the date command in your Docker image the next time you run into this issue. Knowing the current system time would be very helpful in diagnosing the issue.
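If it helps to capture that automatically, here is a minimal sketch (assuming the aws-sdk v2 request API; the DynamoDB listTables call is just a placeholder operation) that logs the local clock next to the Date header returned by AWS, so any skew shows up in the application logs:

var AWS = require('aws-sdk');

var dynamodb = new AWS.DynamoDB();
var request = dynamodb.listTables({});

// 'complete' fires for successful and failed requests alike, so the
// comparison is logged even when the call fails with InvalidSignatureException.
request.on('complete', function (response) {
  console.log('local time :', new Date().toISOString());
  console.log('server time:', response.httpResponse.headers['date']);
});

request.send();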

@evansolomon
Contributor Author

I have tried running date in the past when it's happened and always had correct results. I've also rebuilt the docker image without cache and had the same problem.

I don't know how the internals of docker image clocks work, but I just took a couple of old (and not recently used) docker images on my machine and ran docker run old-image-name date for each one. They all came out with correct dates.

I agree it does seem like a good theory, but I haven't been able to find any evidence of it.

@AdityaManohar
Contributor

@evansolomon
Circling back to the error stack, it looks like the skew is actually 5 min and not 48 hours. Would you be able to re-run the date command on your Docker image and let me know what you find?

Do you also encounter the error when you run the same code outside your Docker image?

Alternatively, would you be able to share your Docker image? This would help debug the issue.

@evansolomon
Contributor Author

it looks like the skew is actually 5 min and not 48 hours

I assume you mean this line

20150307T194740Z is now earlier than 20150309T203545Z (20150309T204045Z - 5 min.)

I parsed that as March 7, 2015 19:47 is earlier than March 9, 2015 20:40. I don't really know what the 5-minute thing on the end means; I was thinking maybe that was the threshold.

Would you be able to re-run the date command on your Docker image and let me know what you find?

Just tried again on an image I haven't used since yesterday and got "Tue Mar 10 18:22:48 UTC 2015" (which is correct at the time I'm writing this).

Do you also encounter the error when you run the same code outside your Docker image?

I'm not sure. Our workflow is pretty heavily Docker-centric, so this isn't much of a use case for me.

would you be able to share your Docker image?

It is based on https://github.com/phusion/passenger-docker. The only differences are some nginx configs, the shared ~/.aws directory, and my apps.

@kyriesent

@evansolomon did you ever find the cause/solution for this? I've got the same issue on a docker machine running on debian:jessie. Pretty much the exact same stack trace as yours.

@evansolomon
Contributor Author

@kyriesent Sorry, never really figured it out

@kyriesent

Turns out mine was a clock syncing issue on my VM. Running a sync between the VM and the host cleared it up.

@AdityaManohar
Contributor

@evansolomon I've just pushed a patch that offsets the SDK clock when a clock skew error is detected. Simply set the correctClockSkew option when constructing a service client.

Here's an example:

var dynamodb = new AWS.DynamoDB({correctClockSkew: true});

Let me know if this works for you. Thanks for your patience.
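For reference, the option can also be applied globally so that every client created afterwards picks it up (a minimal sketch, assuming the v2 global config API):

var AWS = require('aws-sdk');

// Enable the clock-skew correction for all clients created after this point.
AWS.config.update({correctClockSkew: true});

var dynamodb = new AWS.DynamoDB(); // inherits correctClockSkew from the global config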

@evansolomon
Contributor Author

Thanks for the update. Is there a reason this shouldn't be on by default?

@AdityaManohar
Contributor

@evansolomon This is turned off by default because there may be other customers who are applying a clock skew correction using the AWS.config.systemClockOffset property.
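For anyone who hasn't used that property, here is a rough sketch of how a manual offset is applied (assuming the v2 config API; the 90-second value is only an example):

var AWS = require('aws-sdk');

// systemClockOffset is an offset in milliseconds that the SDK adds to the
// local clock when it computes request signing times.
// Example: the machine's clock is known to run about 90 seconds slow.
AWS.config.systemClockOffset = 90 * 1000;

// The same option can also be passed per client.
var dynamodb = new AWS.DynamoDB({systemClockOffset: 90 * 1000});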

@evansolomon
Contributor Author

Are there cases where systemClockOffset is set to 0 and the user would not want the skew correction to be applied? It seems like that would be a sensible default, but maybe I'm missing something.

@lraulier

lraulier commented Feb 8, 2016

Hi all,

Same issue ('Signature expired') on the AWS command line. Resolved by running ntpd (the Network Time Protocol daemon, which synchronizes the computer's clock).

Hope it helps.
Regards

@creeefs

creeefs commented Aug 9, 2016

I encountered this issue locally, and restarting my docker daemon fixed it.

@tonyjiang

tonyjiang commented Feb 22, 2017

@Clee681 thank you for sharing your experience and solution! Restarting the Docker daemon fixed my problem as well; it could have taken much longer to solve without seeing your comment.

@mankins

mankins commented Mar 7, 2017

None of the above-mentioned workarounds worked for me, even though the environment was similar.

One difference may be that I was uploading via the --zip-file method of the aws-cli over a slow network. The upload was actually taking more than 5 minutes and consequently giving the SignatureDoesNotMatch error.

I was able to work around the slow network by uploading to S3 first, and then using the --s3-bucket and --s3-key options instead of the --zip-file command line option. Perhaps this is an unrelated issue, but it took me long enough to solve that I thought I'd document it here just in case.

@afeld

afeld commented Mar 27, 2017

@sbeam

sbeam commented May 11, 2017

If you are running Docker on OS X, then this is probably the cause: a long-standing issue with Docker not resyncing the clock after sleep. A fix is coming in the next release.

docker/for-mac#17

@harish-swamy

Go to VM settings -> Date & Time -> enable date and time with internet access ... this issue should be fixed.

@varenc

varenc commented Jul 18, 2017

Would love to see correctClockSkew default to true in the future, since this fixes a problem that virtually everyone will eventually run into. Having it default to false is like embedding a non-deterministic bug that causes problems for ~0.1% of users in everyone's AWS implementation. Any site with lots of diverse users will eventually have to deal with this, likely after their implementation has gone into production. I've opened an issue regarding this for discussion: #1632

@saikiran91

I have exactly the same issue. I'm running an Ubuntu terminal on Windows 10. Any solution for this?

@Kovaloff

Kovaloff commented Sep 1, 2018

A restart of the VM helped me.

@srchase added the guidance label (Question that needs advice or information) and removed the Question label on Jan 4, 2019
@zeeshanjamal16

Run this command to sync the clock:
ntpdate pool.ntp.org

@abidulrmdn

Restarting the Docker daemon fixed it:

sudo systemctl start docker
or
sudo service docker start

@lock

lock bot commented Sep 28, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.

@lock lock bot locked as resolved and limited conversation to collaborators Sep 28, 2019