TypeError: Cannot read property 'name' of null #1105

Closed
greghart opened this issue Aug 9, 2016 · 65 comments · Fixed by #2367

@greghart

greghart commented Aug 9, 2016

First off, thanks for the library! Overall it's been incredibly performant and straightforward to use. I'm running into one issue which may be a bug in the library, though the protocol is a bit hard for me to follow, so my usage may be off as well.

I'm running into an uncaught error during the parseComplete handler in client.js (https://github.com/brianc/node-postgres/blob/v6.0.1/lib/client.js#L131).

I am connecting directly against the pool and then using pg-query-stream:

pool.connect().then(function (client) {
  var stream = client.query(new QueryStream(query, null));

  stream.on('error', function (streamError) {
    stream.close(function (closeError) {
      // What to do with a close error? At this point stream is already hosed :shrug:
      client.release(streamError);
    });
  });

  stream.on('close', function () {
    client.release();
  });
});

At some point during my app's lifecycle, I get an uncaught error:

TypeError: Cannot read property 'name' of null
    at null.<anonymous> (/var/app/current/node_modules/pg/lib/client.js:139:26)
    at emit (events.js:107:17)
    at Socket.<anonymous> (/var/app/current/node_modules/pg/lib/connection.js:121:12)
    at Socket.emit (events.js:107:17)
    at readableAddChunk (_stream_readable.js:163:16)
    at Socket.Readable.push (_stream_readable.js:126:10)
    at TCP.onread (net.js:540:20)

So obviously activeQuery is becoming null, and I am trying to narrow down why I would receive a parseComplete event when the active query is null.

According to my logs, the stream is not emitting an error, and seems to be closing normally. Therefore it seems like the connection client is getting a readyForQuery or end event, and then right after that getting the parseComplete event. Any idea why this would be happening, or see any issues with my usage? Thanks for any help you can give! I'll keep looking into it as well.

@vitaly-t
Contributor

I wonder if the same error would occur within pg-promise, if you follow the example...

At least this would help rule out some of the reasons why this is happening...

@brianc
Owner

brianc commented Aug 10, 2016

@greghart interesting error - any way you could include the query & some data so I could try to reproduce it here? Maybe using generate_series or something if you need lots of data?

@dynajoe

dynajoe commented Feb 24, 2017

I'm having a similar issue. For me it seems to be related to parameterized queries.

.query('SELECT $1', ['a'])

Do you have any further findings, @greghart?
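For context, a parameterized query like the one above goes through PostgreSQL's extended query protocol (Parse/Bind/Execute), which is where parseComplete comes from. A minimal standalone sketch (not from this thread; connection details are assumed to come from the usual PG* environment variables):

const { Pool } = require('pg')

const pool = new Pool()

// One-off parameterized query: pg sends Parse/Bind/Execute and the backend
// answers with parseComplete (among other messages) before the rows arrive.
pool.query('SELECT $1::text AS value', ['a'])
  .then(res => console.log(res.rows[0].value))
  .catch(err => console.error(err))
  .then(() => pool.end())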

@dynajoe

dynajoe commented Feb 24, 2017

I may have found something promising. I tried to use a new client object with the original in the prototype chain. I did this because I didn't want client.release() to be called from child queries within a transaction.

I did something like this: (using lodash _.create)

const new_client = _.create(client, {
   release: function () { /* noop */ }
})

new_client.query('SELECT 1')

@charmander
Collaborator

I did this because I didn't want client.release() to be called from child queries within a transaction.

Call client.release() in a predictable manner, then? Bluebird’s resource management might help.
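As a sketch of that idea, using Bluebird's disposer/Promise.using pattern (illustrative only; the pool setup and query are placeholders):

const Promise = require('bluebird')
const { Pool } = require('pg')

const pool = new Pool()

// A disposer ties the client's lifetime to the block passed to Promise.using,
// so release() runs exactly once, whether the block succeeds or throws.
function getClient() {
  return Promise.resolve(pool.connect()).disposer(client => client.release())
}

Promise.using(getClient(), client => {
  return client.query('SELECT $1::text AS name', ['foo'])
}).then(result => console.log(result.rows))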

@jordandouglaswhite

I've been having the same issue, but only on my live site, once every few hours.
My logs would always show that I received two requests to run the same query within about 5 ms - always one after the other (no other queries in between).
I don't know if the parameters are the same because I don't log them.

One would then get executed and the second would crash.
It would usually happen again within a minute if I attempted to restart that application without restarting all the others that communicated with it. Basically, some combination of requests caused the issue to appear.

I haven't been able to replicate this offline.
At this point, it looks to me like a race condition and maybe some kind of reference error (destroying the activeQuery on the wrong client).

I switched to pg-native and the problem appears to have stopped (over 12 hours with no issues).

@nemo

nemo commented Mar 5, 2017

I'm having this issue. I had switched away from pg-native because it was giving me headaches as well.

It only happens on first start. The second time pg is used, the error goes away. The "race" condition is that it always happens at first.

So part of our production push right now is to hit the servers ourselves before we let users use them, so they don't get this error. Which is a hack of a fix.

Has anyone been able to fix this?

@hvrauhal

hvrauhal commented Apr 7, 2017

Just had this issue occur with connect-pg-simple configured pretty much as described in the advanced example of https://github.com/voxpelli/node-connect-pg-simple#usage.

@brianc
Owner

brianc commented May 24, 2017

Do any of y'all have a small script I can use that will likely reproduce this? Also are you using pg or pg-native? This is really perplexing but really crappy & I want to get it fixed if possible. I've used node-postgres for years on many systems in production and never seen this...so I'm not sure what exactly it could be.

@nemo gah, that fix is gross - sorry you have to do that. You're using normal pg? And what exact version of node are you using?

@andyatmiami

My application (very rarely) has experienced this problem as well. We have unfortunately been unable to distill it down to a simple script/app that can reliably reproduce it, but general observation so far seems to indicate this happens when there is a connection issue with the database.

@brianc: I can completely understand the need to be able to reproduce in order to fix - but what are your thoughts on a short-term fix that at least checks that activeQuery is a valid object before trying to reference the name property? Given that client applications have no way to handle this error... and its drastic consequences - it would be great if some form of incremental improvement could be delivered...

https://github.com/brianc/node-postgres/blob/master/lib/client.js#L241 indicates that accessing the name attribute lets the client avoid re-parsing the query. I would much rather accept extraneous re-parsing when the alternative is an uncaught exception 😇
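To make the suggestion concrete, it would be something along these lines (a hypothetical sketch, not the library's actual code; the handler wiring and identifier names are placeholders):

// Hypothetical guard -- illustrative only, not the real lib/client.js source.
connection.on('parseComplete', function (msg) {
  if (!client.activeQuery) {
    // A parseComplete arrived with no active query: log it (or ignore it)
    // instead of throwing "Cannot read property 'name' of null".
    return
  }
  // ... existing handling of client.activeQuery.name goes here ...
})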

@brianc
Owner

brianc commented Aug 28, 2017

@andyatmiami - what version of pg & pg-pool are you using? Are you using pg-cursor or pg-query-stream?

@andyatmiami

andyatmiami commented Aug 28, 2017

"pg": "6.1.2"
"pg-pool": "1.6.0"
"node": "6.11.1"

We use parameterized queries exclusively.

FWIW:
I have a dev I am working with this afternoon who has had minimal success reproducing this - not yet reliably, and not yet distilled into a simple app to run. At a high level (which I am working on refining): we have one wildly expensive query (it takes seconds to run), and then hammer the database with a high load of other, short-running queries in parallel. Sometimes (and this is with Postgres completely stable) we see the activeQuery error fire...

@brianc
Owner

brianc commented Aug 28, 2017

Yah, that's wretched that it's happening. I really hope you can figure out a reproducible test case. I'd also suggest trying the most recent versions of pg & pg-pool. They both received pretty substantial internals upgrades from 6.x -> 7.x & 1.x -> 2.x respectively.

@andyatmiami

@brianc: Thanks for the responsiveness in this issue - will post back any updates/learnings as they become available!

@vedant15

vedant15 commented Aug 30, 2017

I get the same error, and also sometimes a different stack trace which I am guessing points to the same root cause:

    self.activeQuery.handleCommandComplete(msg, con)
                    ^

TypeError: Cannot read property 'handleCommandComplete' of null
    at Connection.<anonymous> (/home/app/node_modules/pg/lib/client.js:239:21)
    at emitOne (events.js:96:13)
    at Connection.emit (events.js:188:7)
    at Socket.<anonymous> (/home/app/node_modules/pg/lib/connection.js:118:12)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:188:7)
    at readableAddChunk (_stream_readable.js:176:18)
    at Socket.Readable.push (_stream_readable.js:134:10)
    at TCP.onread (net.js:547:20)
events.js:160
      throw er; // Unhandled 'error' event

I see there was a pull request from @spollack (#961) around this. @spollack, @jshepard - it was mentioned in the PR that another library was causing this; do you mind sharing your findings?

@jshepard

@vedant15 I think our issue was caused by a connection leak in pre-2.0 versions of node-any-db-pool.
We ended up backporting the fix from 2.0 in a fork here.

@spollack
Contributor

@vedant15 sorry, I wish I had written down better notes on this. I can tell you, however, that we have been running PR #961 in prod over the past year and, as far as I recall, have had no recurrences of this crash.

@vedant15

@spollack @jshepard thanks for the quick response.

Were you guys also seeing this issue only during high loads and with statement timeouts?

@jshepard

@vedant15 pretty sure we only saw it during statement timeouts.

@vedant15

vedant15 commented Aug 30, 2017

OK @jshepard, I am able to reproduce this issue locally (not always), but only when I put considerable load on the application while running queries with statement timeouts.

Thanks for all the info !

@brianc
Owner

brianc commented Aug 30, 2017

If statement timeouts are causing it I might be able to simulate them locally

@vedant15

vedant15 commented Aug 30, 2017

@brianc I added some log statements locally, and whenever I am able to reproduce either of the two flavors of this issue, the log statements show the following:

  1. activeQuery for a given client is set to null in Client.prototype._pulseQueryQueue
  2. Right after this, either commandComplete or parseComplete is fired for the same client

Quick question: from the comments in the code, it seems parseComplete is fired only for prepared statements - is this assumption correct?

Also, what do you think about checking whether activeQuery is null in commandComplete and parseComplete? Do you see any issues with that? I know you had some reservations about this based on the comment in #961.

@brianc
Owner

brianc commented Aug 30, 2017

Quick question: from the comments in the code, it seems parseComplete is fired for prepared statements - is this assumption correct?

Yah - any query w/ parameters has a parse phase, which the backend returns a parseComplete for.
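For illustration, both query shapes below go through the parse phase; the second gives the prepared statement a name, which is the activeQuery.name the crashing handler dereferences (a sketch assuming pg's promise API and a reachable database; the query text is arbitrary):

const { Client } = require('pg')

async function demo() {
  const client = new Client() // connection details from the usual PG* env vars
  await client.connect()

  // Unnamed parameterized query: parsed, bound, and executed as a one-off.
  await client.query('SELECT $1::int AS n', [1])

  // Named prepared statement: pg remembers the parsed statement under this
  // name, so it can skip re-parsing the same text on later calls.
  await client.query({ name: 'fetch-n', text: 'SELECT $1::int AS n', values: [1] })

  await client.end()
}

demo().catch(console.error)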

Also, what do you think about checking whether activeQuery is null in commandComplete and parseComplete? Do you see any issues with that? I know you had some reservations about this based on the comment in #961.

I feel like that's similar to doing a try { something() } catch(e) { /* no big deal */ } and papers over what is a larger issue. I'd strongly prefer to have a reproducible test case & fix the underlying issue. Otherwise we're hiding an error and, as far as the protocol operation is concerned, possibly putting the connection into a bad/unknown state.

@brianc
Owner

brianc commented Aug 30, 2017

If you can send over a script that smashes a local database w/ lots of load & uses statement timeouts that might be all I need. I've gotten pretty decent at triangulating issues in the connection after all these years. 😝

@brianc
Owner

brianc commented Aug 30, 2017

something like:

const { Pool } = require('pg')
const async = require('async')

const pool = new Pool()

async.times(100000000, function (i, next) {
  pool.query('SELECT pg_sleep(100)')
  pool.query('SELECT $1::text as name', ['foo'], next)
})

or something? I'm missing where to define the statement timeout, though, and whether that reproduces the issue in your environment ⁉️
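One way to fold a statement timeout into a script like this (a sketch, assuming pg 7's promise API and connection details from the environment) is to set statement_timeout with plain SQL on the session and then run something slower than the limit:

const { Pool } = require('pg')

const pool = new Pool()

pool.connect().then(function (client) {
  // Set a server-side statement timeout for this session, then run a query
  // that is guaranteed to exceed it. The backend cancels it with
  // "canceling statement due to statement timeout".
  return client.query("SET statement_timeout = '50ms'")
    .then(function () {
      return client.query('SELECT pg_sleep(1)')
    })
    .catch(function (err) {
      console.error(err.message)
    })
    .then(function () {
      client.release()
      return pool.end()
    })
})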

@vedant15

@brianc let me see if I can come up with a script which can reliably reproduce this (reliably being the keyword 😛). I will keep you posted.

@brianc
Owner

brianc commented Aug 30, 2017

Sweet - even if I need to run the script for a long time, or over and over, to reproduce the issue, I should be able to work with it and narrow down where things are going amiss.

@vedant15

So... my initial attempts to recreate it via a script have failed. I will try a couple of ideas to see if I can make it work... 😞

@dynajoe

dynajoe commented Aug 30, 2017

I believe the code I posted above (6 months ago) reproduced the error. Working backward from that may find the root cause.

@brianc
Owner

brianc commented Aug 31, 2017

@joeandaverde I appreciate what you put above, but it's not quite enough for me to repro easily over here. Can you post a more complete/executable example? Neither of your examples above is executable "out of the box", and I'm not sure I follow what "working backwards" entails.

@kibertoad
Contributor

@daveisfera If you could create a failing test against current master, that would be super helpful.

@pankleks

I have the same problem:
Cannot read property 'handleCommandComplete' of null

pg node: 7.6.1

@charmander
Collaborator

@pankleks when?

@pankleks

@charmander Hard to tell, but probably related to a timeout from the DB.

@timotm

timotm commented Feb 20, 2019

After having stumbled into this in our production code, I tried to write a little script to reproduce it; the script is at https://github.com/timotm/pg-bug. It seems to have something to do with timing, as CPU load affects reproducibility: on a slower machine it reproduces more often than on a faster one (adjusting the number of yes commands running in the background might thus have an effect).

Running it three times in a row on my machine, I got three different errors:

✔ 13:34 ~/pg-bug [master|✚ 1…1294] $ npm start

> pg-bug@1.0.0 start /Users/metsala/pg-bug
> createdb pg-bug-local-test-db 2>/dev/null; yes > /dev/null & yes > /dev/null & yes > /dev/null & yes > /dev/null & yes > /dev/null & node index.js; killall yes

Initialized, now looping..

/Users/metsala/pg-bug/node_modules/bluebird/js/release/using.js:12
        setTimeout(function(){throw e;}, 0);
                              ^
error: canceling statement due to statement timeout
    at Connection.parseE (/Users/metsala/pg-bug/node_modules/pg/lib/connection.js:555:11)
    at Connection.parseMessage (/Users/metsala/pg-bug/node_modules/pg/lib/connection.js:380:19)
    at Socket.<anonymous> (/Users/metsala/pg-bug/node_modules/pg/lib/connection.js:120:22)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)
    at readableAddChunk (_stream_readable.js:250:11)
    at Socket.Readable.push (_stream_readable.js:208:10)
    at TCP.onread (net.js:597:20)
sh: line 1: 47917 Terminated: 15          yes > /dev/null
sh: line 1: 47918 Terminated: 15          yes > /dev/null
sh: line 1: 47919 Terminated: 15          yes > /dev/null
sh: line 1: 47920 Terminated: 15          yes > /dev/null
sh: line 1: 47921 Terminated: 15          yes > /dev/null
✔ 13:34 ~/pg-bug [master|✚ 1…1294] $ npm start

> pg-bug@1.0.0 start /Users/metsala/pg-bug
> createdb pg-bug-local-test-db 2>/dev/null; yes > /dev/null & yes > /dev/null & yes > /dev/null & yes > /dev/null & yes > /dev/null & node index.js; killall yes

Initialized, now looping..
/Users/metsala/pg-bug/node_modules/pg/lib/client.js:278
    if (self.activeQuery.name) {
                         ^

TypeError: Cannot read property 'name' of null
    at Connection.<anonymous> (/Users/metsala/pg-bug/node_modules/pg/lib/client.js:278:26)
    at emitOne (events.js:116:13)
    at Connection.emit (events.js:211:7)
    at Socket.<anonymous> (/Users/metsala/pg-bug/node_modules/pg/lib/connection.js:125:12)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)
    at readableAddChunk (_stream_readable.js:250:11)
    at Socket.Readable.push (_stream_readable.js:208:10)
    at TCP.onread (net.js:597:20)
sh: line 1: 47979 Terminated: 15          yes > /dev/null
sh: line 1: 47980 Terminated: 15          yes > /dev/null
sh: line 1: 47981 Terminated: 15          yes > /dev/null
sh: line 1: 47982 Terminated: 15          yes > /dev/null
sh: line 1: 47983 Terminated: 15          yes > /dev/null
✔ 13:35 ~/pg-bug [master|✚ 1…1294] $ npm start

> pg-bug@1.0.0 start /Users/metsala/pg-bug
> createdb pg-bug-local-test-db 2>/dev/null; yes > /dev/null & yes > /dev/null & yes > /dev/null & yes > /dev/null & yes > /dev/null & node index.js; killall yes

Initialized, now looping..
/Users/metsala/pg-bug/node_modules/pg/lib/client.js:271
    self.activeQuery.handleCommandComplete(msg, con)
                     ^

TypeError: Cannot read property 'handleCommandComplete' of null
    at Connection.<anonymous> (/Users/metsala/pg-bug/node_modules/pg/lib/client.js:271:22)
    at emitOne (events.js:116:13)
    at Connection.emit (events.js:211:7)
    at Socket.<anonymous> (/Users/metsala/pg-bug/node_modules/pg/lib/connection.js:125:12)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)
    at readableAddChunk (_stream_readable.js:250:11)
    at Socket.Readable.push (_stream_readable.js:208:10)
    at TCP.onread (net.js:597:20)
sh: line 1: 48046 Terminated: 15          yes > /dev/null
sh: line 1: 48047 Terminated: 15          yes > /dev/null
sh: line 1: 48048 Terminated: 15          yes > /dev/null
sh: line 1: 48049 Terminated: 15          yes > /dev/null
sh: line 1: 48050 Terminated: 15          yes > /dev/null

@timotm

timotm commented Feb 20, 2019

With console.log debug technology, it seemed that the flow in the crash was something like:

  1. open transaction
  2. set local statement timeout
  3. execute prepared INSERT statement
  4. the above INSERT statement times out
  5. pg receives CommandCompleted
  6. pg sends Sync
  7. pg receives ErrorResponse
  8. pg sends Sync
  9. pg receives ReadyForQuery with status E (for Sync after CommandCompleted)
  10. client dequeues the next statement (in this case ROLLBACK), setting self.activeQuery
  11. pg receives ReadyForQuery with status E (for Sync after ErrorResponse)
  12. client dequeues the next statement, setting self.activeQuery (or sets to null if no next statement)
  13. pg receives CommandCompleted for ROLLBACK but self.activeQuery points to the wrong query or null

I tried ignoring the first ReadyForQuery and only reacting to the second one and could no longer reproduce the null crashes.

EDIT: forgot to include server version: PostgreSQL 9.5.6 on x86_64-apple-darwin16.4.0, compiled by Apple LLVM version 8.0.0 (clang-800.0.42.1), 64-bit on my machine and PostgreSQL 9.6.11 on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit on the production machine
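For reference, a condensed sketch of that statement sequence (not the actual repro script, which lives in the linked repo; it assumes a scratch table t with a single integer column, and per the description above it also needs background CPU load to hit the race):

const { Pool } = require('pg')

const pool = new Pool()

async function once() {
  const client = await pool.connect()
  try {
    await client.query('BEGIN')
    await client.query("SET LOCAL statement_timeout = '1ms'")
    // A parameterized (extended-protocol) INSERT that should exceed the 1 ms
    // timeout; the error/Sync traffic after the timeout is where activeQuery
    // falls out of sync with the backend's replies.
    await client.query('INSERT INTO t SELECT generate_series(1, $1)', [100000])
    await client.query('COMMIT')
  } catch (err) {
    await client.query('ROLLBACK')
  } finally {
    client.release()
  }
}

once().catch(console.error).then(() => pool.end())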

timotm added a commit to digabi/digabi-os that referenced this issue Feb 21, 2019
@timotm

timotm commented Feb 22, 2019

An interesting observation: after having upgraded the production machine to PostgreSQL 11.2 (Debian 11.2-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit, the problem no longer exists.

Also, on a colleague's machine running PostgreSQL 11.1 on x86_64-apple-darwin17.7.0, compiled by Apple LLVM version 10.0.0 (clang-1000.11.45.5), 64-bit we couldn't reproduce this with the script.

@n-clark

n-clark commented Apr 28, 2019

Started getting this error after using the query_timeout config parameter, pg 7.10.0

Repro steps for me, using this config:
{ query_timeout: 10 * 1000, max: 32, min: 8 }

Steps: lock a table in the database and then issue queries against that table, so that they all time out; doesn't have to be very fast, 1-2 per second will do. I get the crash very quickly, somewhere after around 10 to 100 query timeouts.

Other timing information, if it's relevant: the database and node-postgres servers are on separate machines but the ping rtt is very low, 0.5 - 1ms.
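A sketch of those steps (the table name busy_table is a placeholder and is assumed to already exist):

const { Pool } = require('pg')

const pool = new Pool({ query_timeout: 10 * 1000, max: 32, min: 8 })

async function main() {
  // Hold an exclusive lock on the table in an open transaction so that every
  // other query against it blocks until the client-side query_timeout fires.
  const locker = await pool.connect()
  await locker.query('BEGIN')
  await locker.query('LOCK TABLE busy_table IN ACCESS EXCLUSIVE MODE')

  // Issue a slow trickle of queries against the locked table; each one
  // should time out client-side after 10 seconds.
  setInterval(() => {
    pool.query('SELECT * FROM busy_table').catch(err => {
      console.error('query failed:', err.message)
    })
  }, 500)
}

main()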

@juliusza

I'm also using query_timeout:

[2019-07-11 09:36:36] [ERROR] [pid:8718] TypeError: Cannot read property 'name' of null
TypeError: Cannot read property 'name' of null
    at Connection.<anonymous> (/opt/acn/4be1e1fb/api/node_modules/pg/lib/client.js:313:26)
    at emitOne (events.js:116:13)
    at Connection.emit (events.js:211:7)
    at Socket.<anonymous> (/opt/acn/4be1e1fb/api/node_modules/pg/lib/connection.js:126:12)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at addChunk (_stream_readable.js:263:12)
    at readableAddChunk (_stream_readable.js:250:11)
    at Socket.Readable.push (_stream_readable.js:208:10)
    at TCP.onread (net.js:597:20)

@emhagman

emhagman commented Jul 23, 2019

I was experiencing this issue with pg-query-stream and downgrading to pg@7.4.1 fixed it. Not sure what changed but maybe that is something to look into?

@iancamp

iancamp commented Nov 5, 2019

I'm also using pg-query-stream@2.0.1 with pg-promise@9.3.3. I tried downgrading pg-promise to a version that uses pg@7.4.1 based on the previous comment, but it did not fix the issue.

Unfortunately, I only encounter this issue when running some automated tests in a Docker container (CentOS 7.6.1810). Things run noticeably slower in the container vs. locally (macOS 10.14.6), so maybe it's timing-related, as mentioned above. In all cases, the database is running in a separate container (PostgreSQL 11.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5), 64-bit).

If it's at all helpful, here's the test code that fails:

let errorEncountered = false;
let results;
try {
    results = await runQuery('SELECT * FROM nonexistant_table');
} catch (err) {
    errorEncountered = true;
    expect(err).to.exist;
    expect(err.message).to.equal('relation "nonexistant_table" does not exist');
}

expect(errorEncountered).to.be.true;

// Reaches here
results = await runQuery('SELECT * FROM good_table WHERE id >= 18 ORDER BY id DESC;');
// Does not reach here

expect(results).to.have.lengthOf(4);
expect(results[2].id).to.equal(19);

runQuery() sends the query to the server via a web socket request and receives the data in chunks. The function resolves when the query has completed.

Here's a much simplified version of my server code:

// Simplified: `connection` is a pg-promise database/connection object, and
// `queryStream` / `outputStream` are created by the surrounding code.
function executeQuery(query) {
    return new Promise((resolve, reject) => {
        connection.task(async tx => {
            await tx.none('SET <custom_config_key>=<custom_config_value>');

            return tx.stream(queryStream, function (stream) {
                stream.pipe(outputStream);

                stream.on('error', err => {
                    outputStream.emit('error', err);
                });

                // Resolve now that the stream has been initiated.
                resolve();
            });
        }).catch(err => {
            reject(err);
        });
    });
}

function webSocketRequestHandler(query) {
    const outputStream = new PassThrough({ objectMode: true });

    // Add some stream listeners

    executeQuery(query).then(() => {
        // Send data back to the client in chunks as data comes through the stream
    });
}

The failure happens reliably every time I run the tests, so if there are places I can add log statements to better track down this problem, I am happy to assist.

@iancamp

iancamp commented Nov 6, 2019

After reading this comment and some more experimentation, I seem to have gotten around this problem by making sure the stream is closed before releasing the connection back to the pool.

function executeQuery(query) {
    return new Promise((resolve, reject) => {
        connection.task(async tx => {
            await tx.none('SET <custom_config_key>=<custom_config_value>');
    
            return new Promise((connResolve, connReject) => {
                return tx.stream(queryStream, function (stream) {
                    stream.on('close', () => {
                        // Make sure the stream is closed before releasing the connection back to the connection pool
                        connResolve();
                    });
                    stream.on('error', err => {
                        outputStream.emit('error', err);
                    });
                    stream.pipe(outputStream);

                    // Resolve now that the stream has been initiated.
                    resolve();
                });
            });
        }).catch(err => {
            reject(err);
        });
    });
}

I'm not sure where the right place to make the change would be, but it seems like maybe this safeguard could be added to one of the involved libraries (pg-pool or pg-promise?) so a close listener wouldn't be necessary externally...?

Also, I feel like it's worth mentioning that I initially got around this error by commenting out this line in pg-cursor/index.js. Probably not the best idea to do that, but it helped me figure things out.

@pvatterott

For those of you hitting this with statement timeouts: I was able to reproduce this error consistently when statement_timeout and query_timeout were set to the same value. Increasing query_timeout relative to statement_timeout seems to reduce the frequency at which we see this error.
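In configuration terms, that workaround looks something like this (a sketch with example values, assuming both timeouts are passed as client/pool options as in the comments above):

const { Pool } = require('pg')

// Example values only: the point is that the client-side query_timeout has
// headroom over the server-side statement_timeout, so the server's timeout
// error reaches the client before the client gives up on the query itself.
const pool = new Pool({
  statement_timeout: 10 * 1000, // server cancels the statement after 10 s
  query_timeout: 15 * 1000,     // client only times out after 15 s
})

module.exports = pool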

@brianc
Owner

brianc commented Oct 7, 2020

I have a PR up for this: #2367. The fix feels dirty to me, but I think what's happening here might be a race condition within postgres itself that we're working around.

@pvatterott

FYI @brianc we did see this on postgres 12.3, not 9.x

@brianc
Owner

brianc commented Oct 8, 2020

@pvatterott that's good to know, but a bummer, as my hunch was around a race condition in older versions of postgres. I've been going back and forth w/ some postgres maintainers & think this might fix it. I can confirm it fixes the issue in 9.x & 10.x of postgres; it might fix it for 12.3 as well. The change involves pipelining the sync command in with the other extended-query commands all at once. It also improves my benchmarks by up to 10% in some cases, which is a nice side effect.

@pckilgore

Thank you @brianc !

BenBirt added a commit to dataform-co/dataform that referenced this issue Oct 9, 2020
@brianc
Owner

brianc commented Oct 9, 2020

@pckilgore my pleasure! Let me know if it still happens... it was quite a monster to track down, but @timotm is the real hero here. The steps to reproduce & deep analysis helped me know where to focus. I regret it took me so long to circle back to this issue.

@Raphyyy

Raphyyy commented Feb 25, 2021

Hello,
I am still running into this issue randomly when I do a qs.destroy();
I posted a full example of the situation here: https://stackoverflow.com/questions/61323787/pg-promise-cancel-a-query-initiated-with-pg-query-stream
While I posted that 10 months ago, I still have this issue with the latest lib.
I am facing this under PostgreSQL 12. The same error occurred on PostgreSQL 10.
The main issue is that I can't even find a workaround or catch the error; it crashes the whole app.
