ERROR - Failed to parse GQL log line #37

Closed
matsuro-hadouken opened this issue Jan 4, 2023 · 4 comments

@matsuro-hadouken

Latest version run in docker - v0.6.0
experimental-model-builder - not set
mainnet
indexing: ( QmXWbpH76U6TM4teRNMZzog2ismx577CkH7dzn1Nw69FcV )

https://thegraph.com/explorer/subgraphs/EdS5b1Zbw8A5KHs4JRnWDKdM5K2Hnq97UuE8RG9b61Ey

autoagora-processor has been spamming this error since yesterday:

ERROR - Failed to parse GQL log line: {"level":30,"time":1672827648209,"pid":123,"hostname":"ggg-ddd.com","name":"IndexerService","indexer":"0x8f689a83Dd52EAa1D5AD6A40C46189b4A0d70b06","operator":"0x...","indexer":"0x8f689a83Dd52EAa1D5AD6A40C46189b4A0d70b06","operator":"0x...","query":"{\"query\":\"query GetOriginTransfers($_v1_skip: Int, $_v2_first: Int, $_v3_orderBy: OriginTransfer_orderBy, $_v4_orderDirection: OrderDirection, $_v5_where: OriginTransfer_filter, $_v6_subgraphError: _SubgraphErrorPolicy_!) {\\n  __typename(block: {hash: \\\"0x7cddb40d352c55d70bdb9c3a8a2ae359e3af602efd69b8d2bd91947d2cdffd7d\\\"})\\n  xdai_originraphError: $_v6_subgraphError, block: {hash: \\\"0x7cddb40d352c55d70bdb9c3a8a2ae359e3af602efd69b8d2bd91947d2cdffd7d\\\"}) {\\n    id\\n    chainId\\n    transferId\\n    nonce\\n    status\\n    messageHash\\n    originDomain\\n    destinationDomain\\n    canonicalDomain\\n    to\\n    delegate\\n    receiveLocal\\n    callData\\n    slippage\\n    originSendedoptedAsset\\n      localAsset\\n      blockNumber\\n    }\\n    message {\\n      id\\n      leaf\\n      index\\n      root\\n      transferId\\n      destinationDomain\\n      transactionHash\\n      message\\n    }\\n    caller\\n    transactionHash\\n    timestamp\\n    gasPrice\\n    gasLimit\\n    blockNumber\\n  }\\n}\\n\",\"variables\":{\"_v1_skip\":0in\":[6450786,6648936,6778479,1634886255,1869640809,1886350457],\"nonce_gte\":29,\"originDomain\":6778479},\"_v6_subgraphError\":\"deny\"}}","responseTime":11,"msg":"Done executing paid query"}

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/autoagora_processor/main.py", line 231, in callback
    logs_db.log_query(db_log_entry)
  File "/usr/local/lib/python3.9/site-packages/autoagora_processor/logs_db.py", line 46, in log_query
    self.cur.execute(
psycopg2.InterfaceError: cursor already closed
ERROR - Failed to parse GQL log line: {"level":30,"time":1672826508807,"pid":123,"hostname":"ggg-ddd.com","name":"IndexerService","indexer":"0x8f689a83Dd52EAa1D5AD6A40C46189b4A0d70b06","operator":"0x...","indexer":"0x8f689a83Dd52EAa1D5AD6A40C46189b4A0d70b06","operator":"0x...","component":"QueryProcessor","deployment":"QmXWbpH76U6TM4teRNMZzog2ismx577CkH7dzn1Nw69FcV","fees":"78649816097172240","query":"{\"query\":\"query GetBlockNumber {\\n  __typename(block: {hash: \\\"0xfed134349ff394da2736f9fd879f0f0e1abd81b627fb88b6e2b32ef4b502e1bb\\\"})\\n  xdai__meta: _meta(block: {hash: \\\"0xfed134349ff394da2736f9fd879f0f0e1abd81b627fb88b6e2b32ef4b502e1bb\\\"}) {\\n    block {\\n      number\\n    }\\n  }\\n}\\n\",\"variables\":{}}","responseTime":10,"msg":"Done executing paid query"}
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/autoagora_processor/main.py", line 231, in callback
    logs_db.log_query(db_log_entry)
  File "/usr/local/lib/python3.9/site-packages/autoagora_processor/logs_db.py", line 46, in log_query
    self.cur.execute(
psycopg2.InterfaceError: cursor already closed
@aasseman commented Jan 5, 2023

I don't think that ERROR - Failed to parse GQL log line [...] and psycopg2.InterfaceError: cursor already closed are related.
To be more precise, when a GQL log line fails to parse, the processor just logs the error and discards that data.

psycopg2.InterfaceError: cursor already closed is related to a DB problem. Have you checked that the DB that backs AutoAgora is healthy?
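
For context, a minimal health check against the database that backs AutoAgora might look like the sketch below. The connection parameters are placeholders, and the query_logs table name is taken from later in this thread; adjust both to your deployment.

import psycopg2

# Placeholder connection parameters; substitute the actual AutoAgora
# database host, credentials, and database name.
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="autoagora",
    user="autoagora",
    password="***",
)

with conn.cursor() as cur:
    # Trivial round-trip to confirm the server responds.
    cur.execute("SELECT 1")
    print("DB reachable:", cur.fetchone() == (1,))

    # Confirm the table AutoAgora writes query logs to is reachable and growing.
    cur.execute("SELECT count(*) FROM query_logs")
    print("query_logs rows:", cur.fetchone()[0])

conn.close()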

@uchu commented Jan 7, 2023

Actually, after restarting the services a few times, without any changes to the database, it started writing records to the query_logs and query_skeletons tables. The only thing that still looks confusing is that only two rows out of 1.2k+ have a query_timing_ms value other than null. Is that field essential for relative price discovery, and does it impact absolute discovery results somehow? Would appreciate any advice!

@aasseman

query_timing_ms is only used for the --experimental-model-builder. More precisely, it's used for determining the relative costs (in execution time) of different query shapes. Agora (the Gateway component that applies the indexer pricing) actually prices multi-root queries separately, for example:

query myquery {
  tokens([...]){[...]}
  pairs([...]){[...]}
}

is split into 2 separate queries to be priced, and Agora adds up the price models for tokens[...] and pairs[...].
The issue is that on the indexer side the whole query is executed at once, and there is currently no way to determine the separate execution time of each root.
So for now, when we have a multi-root query, AA will just ignore the execution time.
We know that it is very limiting, and that is the reason for the existence of --experimental-model-builder.
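
As a rough illustration of the split-and-sum behavior described above (this is not the actual Agora implementation; the root extraction and the per-root price table are simplified assumptions):

import re

# Hypothetical per-root prices (GRT per root). In Agora these come from the
# indexer's cost models; flat numbers are used here purely for illustration.
PRICE_MODELS = {
    "tokens": 0.00002,
    "pairs": 0.00003,
}

def price_multi_root_query(query: str) -> float:
    """Split a query into its top-level roots and add up the per-root prices.

    Naive root extraction for illustration only: it grabs field names that
    open a selection at the top level of the query body.
    """
    roots = re.findall(r"^\s{2}(\w+)\s*\(", query, flags=re.MULTILINE)
    return sum(PRICE_MODELS.get(root, 0.0) for root in roots)

query = """query myquery {
  tokens(first: 10){ id }
  pairs(first: 10){ id }
}"""

print(price_multi_root_query(query))  # 2e-05 + 3e-05 = 5e-05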

Without the --experimental-model-builder, AA will apply a flat pricing model for all queries of each subgraph, and for each subgraph, will continuously optimize the flat price for maximum revenue (GRT/second).
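
The continuous flat-price optimization can be pictured with the toy sketch below. It is not AutoAgora's actual algorithm; observe_revenue_rate is a hypothetical measurement hook standing in for the indexer's query fee accounting, and the simple hill climbing only illustrates the idea of nudging a single flat price toward higher GRT/second.

import random

def observe_revenue_rate(price_grt: float) -> float:
    """Hypothetical hook: GRT/second earned while serving queries at the
    given flat price over some observation window. Toy stand-in with an
    unknown 'sweet spot' price where revenue peaks."""
    sweet_spot = 0.0003
    demand = max(0.0, 1.0 - abs(price_grt - sweet_spot) / sweet_spot)
    return price_grt * demand * 100 + random.gauss(0, 1e-4)

def optimize_flat_price(initial_price: float, steps: int = 200) -> float:
    """Naive hill climbing over a single flat price: keep whichever small
    perturbation yields a higher observed revenue rate."""
    price = initial_price
    best_rate = observe_revenue_rate(price)
    for _ in range(steps):
        candidate = price * random.uniform(0.9, 1.1)  # small random nudge
        rate = observe_revenue_rate(candidate)
        if rate > best_rate:
            price, best_rate = candidate, rate
    return price

print(optimize_flat_price(0.0001))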

@uchu commented Jan 11, 2023

@aasseman Thanks for explaining that. We still have --experimental-model-builder turned off for now, but are monitoring how the DB is populated to get a better picture of whether we can switch to relative price discovery in the near future. It looks like most of the queries we receive now in the MIPs phase are multi-root, although the compressed skeletons in the DB can be hard to read sometimes :) As of now we have only 5 queries with timing out of over 140k queries.

I believe that is the only reason for the small number of queries with timing, or could there be another cause?

@aasseman closed this as not planned on Jan 11, 2023.