
Fix allow_experimental_projection_optimization with enable_global_with_statement #34650

Merged
merged 1 commit into ClickHouse:master from fix-projection-optimization on Feb 17, 2022

Conversation

azat (Collaborator) commented Feb 16, 2022

Changelog category (leave one):

  • Bug Fix (user-visible misbehaviour in official stable or prestable release)

Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Fix allow_experimental_projection_optimization with enable_global_with_statement (previously the combination could lead to a "Stack size too large" error when the WITH clause contained multiple expressions, and it also executed scalar subqueries again and again; now it is more optimal).
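
As a rough illustration of the affected scenario (the query below is a made-up sketch, not taken from this PR or from scalar.xml; system.one is used only to keep the statement self-contained):

    -- Made-up sketch: both settings enabled, several expressions in the WITH
    -- clause, plus a scalar subquery; this is the shape of query the fix targets.
    SET allow_experimental_projection_optimization = 1;
    SET enable_global_with_statement = 1;

    WITH 1 AS a, 2 AS b
    SELECT a + b + (SELECT count() FROM system.one);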

allow_experimental_projection_optimization requires one more InterpreterSelectQuery, which, with enable_global_with_statement, applies ApplyWithAliasVisitor if the query is not a subquery.

But this should not be done for the queries built by MergeTreeData::getQueryProcessingStage()/getQueryProcessingStageWithAggregateProjections(), since it duplicates the WITH statements over and over.
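
At the SQL level, the effect of that rewrite can be pictured roughly like this (a conceptual sketch only; the actual transformation happens on the AST and may differ in detail):

    -- Top-level WITH aliases referenced from a scalar subquery:
    WITH 1 AS a, 2 AS b
    SELECT a + b + (SELECT a + b FROM system.one);

    -- After the alias propagation, the subquery conceptually carries its own
    -- copy of the WITH list:
    WITH 1 AS a, 2 AS b
    SELECT a + b + (WITH 1 AS a, 2 AS b SELECT a + b FROM system.one);

    -- If the rewrite runs again on the already rewritten query (as it did via
    -- the extra InterpreterSelectQuery), the WITH statements keep being
    -- duplicated, until the query is deep enough to hit "Stack size too large".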

This also fixes the scalar.xml perf test, which currently fails with the following error:

    scalar.query0.prewarm0: DB::Exception: Stack size too large.

And since the log then contains a very long query, this also leads to the following perf test error:

    _csv.Error: field larger than field limit (131072)

Cc: @amosbird
Refs: #34456

@robot-clickhouse robot-clickhouse added the pr-bugfix Pull request with bugfix, not backported by default label Feb 16, 2022
@azat azat marked this pull request as draft February 16, 2022 15:27
@azat azat force-pushed the fix-projection-optimization branch from fcb7a0e to 774744a on February 16, 2022 16:15
@azat azat marked this pull request as ready for review February 16, 2022 16:16
azat (Collaborator, Author) commented Feb 16, 2022

Performance Comparison (actions) [4/4] — Errors while building the report.

The error comes from the server without this patch (upstream, i.e. the left server in the comparison):

$ fgrep scalar.query0.prewarm0 left-server-log.log  | tail -n2 | head -c150; echo
2022.02.16 20:46:17.632470 [ 333 ] {scalar.query0.prewarm0} <Error> TCPHandler: Code: 306. DB::Exception: Stack size too large. Stack address: 0x7fdc2
$ fgrep scalar.query0.prewarm0 right-server-log.log  | tail -n2
2022.02.16 20:46:18.496707 [ 336 ] {scalar.query0.prewarm0} <Information> executeQuery: Read 10000013 rows, 114.47 MiB in 0.852315803 sec., 11732755 rows/sec., 134.31 MiB/sec.
2022.02.16 20:46:18.497345 [ 336 ] {scalar.query0.prewarm0} <Debug> MemoryTracker: Peak memory usage (for query): 23.93 MiB.

@Avogar Avogar self-assigned this Feb 17, 2022
@Avogar Avogar merged commit 7411178 into ClickHouse:master Feb 17, 2022
@azat azat deleted the fix-projection-optimization branch February 17, 2022 10:34