
TypeError: undefined is not a function #196

Closed
shaharmor opened this issue Nov 18, 2015 · 10 comments

Comments

@shaharmor
Collaborator

Hi,
Using ioredis 1.7.6, we observed the following issue.
It probably happens when, for some reason, the response received from Redis is malformed (maybe it timed out).
Could you add a check that the response exists and is well-formed?

TypeError: undefined is not a function
    at /var/www/XXX/node_modules/YYY/node_modules/ioredis/lib/cluster.js:471:34
    at run (/var/www/XXX/node_modules/YYY/node_modules/ioredis/lib/utils/index.js:151:16)
    at tryCatcher (/var/www/XXX/node_modules/YYY/node_modules/ioredis/node_modules/bluebird/js/main/util.js:26:23)
    at Promise.successAdapter (/var/www/XXX/node_modules/YYY/node_modules/ioredis/node_modules/bluebird/js/main/nodeify.js:23:30)
    at Promise._settlePromiseAt (/var/www/XXX/node_modules/YYY/node_modules/ioredis/node_modules/bluebird/js/main/promise.js:575:21)
    at Promise._settlePromises (/var/www/XXX/node_modules/YYY/node_modules/ioredis/node_modules/bluebird/js/main/promise.js:693:14)
    at Async._drainQueue (/var/www/XXX/node_modules/YYY/node_modules/ioredis/node_modules/bluebird/js/main/async.js:123:16)
    at Async._drainQueues (/var/www/XXX/node_modules/YYY/node_modules/ioredis/node_modules/bluebird/js/main/async.js:133:10)
    at Immediate.Async.drainQueues [as _onImmediate] (/var/www/XXX/node_modules/YYY/node_modules/ioredis/node_modules/bluebird/js/main/async.js:15:14)
    at processImmediate [as _immediateCallback] (timers.js:367:17)
@luin
Collaborator

luin commented Nov 18, 2015

Could you please post the result of the command cluster slots?

@shaharmor
Collaborator Author

There you go:

127.0.0.1:6379> cluster slots
1) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "127.0.0.1"
      2) (integer) 6379
   4) 1) "10.240.56.139"
      2) (integer) 6380
2) 1) (integer) 0
   2) (integer) 5460
   3) 1) "10.240.29.254"
      2) (integer) 6379
   4) 1) "10.240.78.46"
      2) (integer) 6380
3) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "10.240.56.139"
      2) (integer) 6379
   4) 1) "10.240.29.254"
      2) (integer) 6380

I think the issue happened when the cluster was not in its OK state, maybe during a promotion or some other transitional state.

@AVVS
Contributor

AVVS commented Nov 18, 2015

This looks like the issue I had during my tests, and it was already fixed in recent versions of ioredis.

@shaharmor
Collaborator Author

But the issue is at this line:
https://github.com/luin/ioredis/blob/1.7.6/lib/cluster.js#L471
meaning there was no error, and the result simply wasn't what it should have been.

I don't see any check of the response in the master branch.
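The missing guard could look something like the sketch below. This is not the actual ioredis code; the function name and the exact shape checks are my own assumptions, based on the documented CLUSTER SLOTS reply format (`[startSlot, endSlot, masterNode, ...replicaNodes]`, where each node is `[host, port]`). The idea is simply to validate the reply before indexing into it, so that a partial reply during a failover fails cleanly instead of throwing `TypeError: undefined is not a function`.

```javascript
// Hypothetical guard (not the actual ioredis implementation): check that a
// CLUSTER SLOTS reply has the documented nested-array shape before parsing it.
function isValidClusterSlotsReply(reply) {
  if (!Array.isArray(reply)) {
    return false;
  }
  return reply.every(function (range) {
    // Each range is [startSlot, endSlot, masterNode, ...replicaNodes],
    // and each node entry is [host, port].
    return Array.isArray(range) &&
      range.length >= 3 &&
      typeof range[0] === 'number' &&
      typeof range[1] === 'number' &&
      range.slice(2).every(function (node) {
        return Array.isArray(node) &&
          typeof node[0] === 'string' &&
          typeof node[1] === 'number';
      });
  });
}

// A caller could then surface a descriptive error instead of crashing:
// if (!isValidClusterSlotsReply(result)) {
//   return callback(new Error('Invalid CLUSTER SLOTS response'));
// }
```

During a failover or while slots are still being allocated, a reply can be empty or truncated, which is exactly the case such a check would catch.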

@luin
Collaborator

luin commented Nov 18, 2015

That's strange. Redis guarantees that cluster slots returns a nested array, so items should be an array. Could you print the result of cluster slots that ioredis receives when this error happens?

@shaharmor
Collaborator Author

It doesn't happen anymore; it only happened last night during a cluster failover.

@luin
Collaborator

luin commented Nov 18, 2015

Just checked the source code of the cluster slots command: https://github.com/antirez/redis/blob/unstable/src/cluster.c#L3737. I haven't found out why items is not an array. Anyway, let me know if the error happens again.

@AVVS Are you able to reproduce the error (using the 1.7.6 version)?

@AVVS
Contributor

AVVS commented Nov 18, 2015

I thought it was related to #56, but it doesn't seem to be. Either way, that happened when the cluster wasn't yet initialized and not all slots were allocated yet. In recent versions I haven't seen it once, and I can't really try 1.7.6 at this point - those test sets are long gone 👎

@shaharmor
Collaborator Author

I'm not sure it's related to Redis.
I think there was something wrong with the connection: maybe load on the server or on Redis, a timeout, a network issue, or something like that.

@shaharmor
Collaborator Author

I believe this can be closed, as I have yet to witness it again using the latest version.
