Mariadb connection from pool fails #3240

Open
Bronson-Brown-deVost opened this issue May 22, 2024 · 1 comment

Bronson-Brown-deVost commented May 22, 2024

Bug Description

I am running an HTTP API server that uses SQLx to query a MariaDB database. When I create a new database connection for every incoming request, I have no problems at all. When I use the SQLx pool for my queries, however, one specific query regularly fails with "ERROR error communicating with database: expected to read 4 bytes, got 0 bytes at EOF". Have I missed something I need to do with the pool after using it? I create the connection pool at application startup with MySqlPoolOptions::new()... .connect_with(conn_data), store it in my state object, and pass it to functions as db_pool: &Pool, where it is used as query_file_as!(...).fetch_all(db_pool).await.

The problematic query always works once and then crashes the server the second time it is run; I have checked and this is consistently the case, and changing the value of max_connections() has no effect. All other queries can be called multiple times without any problem. Here is the error I get from the MariaDB server when it crashes; my apologies if this is entirely a MariaDB bug and has nothing to do with SQLx:

240522 12:03:24 [ERROR] mysqld got signal 11 ;
Sorry, we probably made a mistake, and this is a bug.

Your assistance in bug reporting will enable us to fix this for the next release.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.6.16-MariaDB-0ubuntu0.22.04.1 source revision:
key_buffer_size=16384
read_buffer_size=131072
max_used_connections=2
max_threads=152
thread_count=2
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 334747 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7fadb0002088
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7faf38b2fc38 thread_stack 0x49000
/usr/sbin/mariadbd(my_print_stacktrace+0x32)[0x55d190c6ce82]
/usr/sbin/mariadbd(handle_fatal_signal+0x478)[0x55d1907a1298]
libc_sigaction.c:0(__restore_rt)[0x7faf4e387520]
/usr/sbin/mariadbd(_ZNK10Item_field11used_tablesEv+0xb)[0x55d1907cab5b]
/usr/sbin/mariadbd(_ZN27Used_tables_and_const_cache32used_tables_and_const_cache_joinEPK4Item+0x1d)[0x55d1907d14ad]
/usr/sbin/mariadbd(_ZN10Item_field15fix_outer_fieldEP3THDPP5FieldPP4Item+0x48b)[0x55d1907c7f6b]
/usr/sbin/mariadbd(_ZN10Item_field10fix_fieldsEP3THDPP4Item+0x435)[0x55d1907c89e5]
/usr/sbin/mariadbd(_ZN9Item_func10fix_fieldsEP3THDPP4Item+0xf6)[0x55d19080c1c6]
/usr/sbin/mariadbd(_ZN9Item_cond10fix_fieldsEP3THDPP4Item+0x122)[0x55d1907e60c2]
/usr/sbin/mariadbd(_Z11setup_condsP3THDP10TABLE_LISTR4ListIS1_EPP4Item+0x20f)[0x55d1905101af]
/usr/sbin/mariadbd(_ZN4JOIN7prepareEP10TABLE_LISTP4ItemjP8st_orderbS5_S3_S5_P13st_select_lexP18st_select_lex_unit+0x83b)[0x55d1905c780b]
/usr/sbin/mariadbd(+0xaa04f8)[0x55d19086c4f8]
/usr/sbin/mariadbd(_ZN14Item_subselect10fix_fieldsEP3THDPP4Item+0x1ad)[0x55d19086bbed]
/usr/sbin/mariadbd(_ZN9Item_func10fix_fieldsEP3THDPP4Item+0xf6)[0x55d19080c1c6]
/usr/sbin/mariadbd(_ZN9Item_func10fix_fieldsEP3THDPP4Item+0xf6)[0x55d19080c1c6]
/usr/sbin/mariadbd(_Z17create_view_fieldP3THDP10TABLE_LISTPP4ItemP25st_mysql_const_lex_string+0x6f)[0x55d190650aef]
/usr/sbin/mariadbd(_Z23find_field_in_table_refP3THDP10TABLE_LISTPKcmS4_S4_S4_P4ListIS1_EPP4ItembbPtbPS2_+0x21e)[0x55d19050cd0e]
/usr/sbin/mariadbd(_Z20find_field_in_tablesP3THDP10Item_identP10TABLE_LISTS4_P4ListIS3_EPP4Item27find_item_error_report_typebb+0x128)[0x55d19050d568]
/usr/sbin/mariadbd(_ZN10Item_field10fix_fieldsEP3THDPP4Item+0x293)[0x55d1907c8843]
/usr/sbin/mariadbd(_Z12setup_fieldsP3THD20Bounds_checked_arrayIP4ItemER4ListIS2_E17enum_column_usagePS6_S9_b+0x14b)[0x55d19050e37b]
/usr/sbin/mariadbd(_ZN4JOIN7prepareEP10TABLE_LISTP4ItemjP8st_orderbS5_S3_S5_P13st_select_lexP18st_select_lex_unit+0x464)[0x55d1905c7434]
/usr/sbin/mariadbd(_Z12mysql_selectP3THDP10TABLE_LISTR4ListI4ItemEPS4_jP8st_orderS9_S7_S9_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x5eb)[0x55d1905dab0b]
/usr/sbin/mariadbd(_Z13handle_selectP3THDP3LEXP13select_resultm+0x147)[0x55d1905dadd7]
/usr/sbin/mariadbd(+0x79b9dc)[0x55d1905679dc]
/usr/sbin/mariadbd(_Z21mysql_execute_commandP3THDb+0x3cfe)[0x55d1905744ce]
/usr/sbin/mariadbd(_ZN18Prepared_statement7executeEP6Stringb+0x52a)[0x55d19058927a]
/usr/sbin/mariadbd(_ZN18Prepared_statement12execute_loopEP6StringbPhS2_+0xa1)[0x55d190589431]
/usr/sbin/mariadbd(+0x7be3e5)[0x55d19058a3e5]
/usr/sbin/mariadbd(_Z19mysqld_stmt_executeP3THDPcj+0x30)[0x55d19058a570]
/usr/sbin/mariadbd(_Z16dispatch_command19enum_server_commandP3THDPcjb+0x18e1)[0x55d190578241]
/usr/sbin/mariadbd(_Z10do_commandP3THDb+0x131)[0x55d190579031]
/usr/sbin/mariadbd(_Z24do_handle_one_connectionP7CONNECTb+0x3af)[0x55d19068057f]
/usr/sbin/mariadbd(handle_one_connection+0x5d)[0x55d1906808fd]
/usr/sbin/mariadbd(+0xc0fa26)[0x55d1909dba26]
nptl/pthread_create.c:442(start_thread)[0x7faf4e3d9ac3]
x86_64/clone3.S:83(__clone3)[0x7faf4e46b850]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7fadb0b7eb98): SELECT
        `language_sign_cluster_reading_parsing`.`language_lemma_id` AS `lemma_id: u32`,
        `language_lemma_form`.`lemma_form` AS `lemma`,
        `word_class`.`string` AS `word_class`,
        `word_class`.abbreviation AS `word_class_abbreviation`,
        trim(regexp_replace(`language_lemma`.`main_meaning`, '^[# ]+', '')) AS `short_definition`,
        `language_lemma_form`.`supplement` AS `root_designation`,
        `verbal_stem`.`string` AS `verb_stem`,
        `verbal_stem`.`abbreviation` AS `verb_stem_abbreviation`,
        `verbal_tempus`.`string` AS `verb_tense`,
        `verbal_tempus`.`abbreviation` AS `verb_tense_abbreviation`,
        `person`.`string` AS `person`,
        `person`.`abbreviation` AS `person_abbreviation`,
        `gender`.`string` AS `gender`,
        `gender`.`abbreviation` AS `gender_abbreviation`,
        `number`.`string` AS `number`,
        `number`.`abbreviation` AS `number_abbreviation`,
        `status`.`string` AS `state`,
        `status`.`abbreviation` AS `state_abbreviation`,
        `augment`.`string` AS `augment`,
        `augment`.`abbreviation` AS `augment_abbreviation`,
        `suffix_person`.`string` AS `suffix_person`,
        `suffix_person`.`abbreviation` AS `suffix_person_abbreviation`,
        `suffix_gender`.`string` AS `suffix_gender`,
        `suffix_gender`.`abbreviation` AS `suffix_gender_abbreviation`,
        `suffix_number`.`string` AS `suffix_number`,
        `suffix_number`.`abbreviation` AS `suffix_number_abbreviation`,
        `language_sign_cluster_reading_parsing`.`element_sequence` AS `word_part: u8`,
        `language_sign_cluster_reading_parsing`.`variant_sequence` AS `parsing_priority: u8`
FROM `language_sign_cluster_reading_parsing`
    JOIN `language_lemma` ON `language_sign_cluster_reading_parsing`.`language_lemma_id` = `language_lemma`.`language_lemma_id`
    JOIN `language_lemma_form` ON `language_lemma_form`.`language_lemma_id` = `language_lemma`.`language_lemma_id`
            AND `language_lemma_form`.`is_main` = 1
    JOIN `language_word_class` ON `language_lemma`.`language_word_class_id` = `language_word_class`.`language_word_class_id`
    JOIN `i18n_view_localized_string` `word_class` ON `word_class`.`string_id` = `language_word_class`.`i18n_string_id`
            AND `word_class`.`language_id` = COALESCE(
                    (SELECT `language_id`
                    FROM `i18n_view_language`
                    WHERE `i18n_view_language`.`iso_639_1` = ?
                    LIMIT 1),
                    1 # This is the value of our default language "de"
                )

    -- Verbal Stem --
    LEFT JOIN (
        `language_verbal_stem` `lvs`
        JOIN `i18n_localized_string` `verbal_stem` ON `verbal_stem`.`i18n_localized_string_id` = `lvs`.`i18n_string_id`
    ) ON `lvs`.`language_verbal_stem_id` = `language_sign_cluster_reading_parsing`.`language_verbal_stem_id`
        AND `verbal_stem`.`i18n_language_id` = `word_class`.`language_id`

    -- Verb Tense --
    LEFT JOIN (
        `language_verbal_tempus_definition` `ltd`
        JOIN `i18n_localized_string` `verbal_tempus` ON `verbal_tempus`.`i18n_localized_string_id` = `ltd`.`i18n_string_id`
    ) ON `ltd`.`language_verbal_tempus_definition_id` = `language_sign_cluster_reading_parsing`.`verbal_tempus`
        AND `verbal_tempus`.`i18n_language_id` = `word_class`.`language_id`

    -- Person --
    LEFT JOIN (
        `language_person_definition` `lpd`
        JOIN `i18n_localized_string` `person` ON `person`.`i18n_localized_string_id` = `lpd`.`i18n_string_id`
    ) ON `lpd`.`language_person_definition_id` = `language_sign_cluster_reading_parsing`.`person`
        AND `person`.`i18n_language_id` = `word_class`.`language_id`

    -- Gender --
    LEFT JOIN (
        `language_gender_definition` `lgd`
        JOIN `i18n_localized_string` `gender` ON `gender`.`i18n_localized_string_id` = `lgd`.`i18n_string_id`
    ) ON `lgd`.`language_gender_definition_id` = `language_sign_cluster_reading_parsing`.`gender`
        AND `gender`.`i18n_language_id` = `word_class`.`language_id`

    -- Number --
    LEFT JOIN (
        `language_number_definition` `lnd`
        JOIN `i18n_localized_string` `number` ON `number`.`i18n_localized_string_id` = `lnd`.`i18n_string_id`
    ) ON `language_sign_cluster_reading_parsing`.`number` = `lnd`.`language_number_definition_id`
        AND `number`.`i18n_language_id` = `word_class`.`language_id`

    -- State --
    LEFT JOIN (
        `language_status_definition` `lsd`
        JOIN `i18n_localized_string` `status` ON `status`.`i18n_localized_string_id` = `lsd`.`i18n_string_id`
    ) ON `language_sign_cluster_reading_parsing`.`status` = `lsd`.`language_status_definition_id`
        AND `status`.`i18n_language_id` = `word_class`.`language_id`

    -- Augment --
    LEFT JOIN (
        `language_augment_definition` `lad`
        JOIN `i18n_localized_string` `augment` ON `augment`.`i18n_localized_string_id` = `lad`.`i18n_string_id`
    ) ON `language_sign_cluster_reading_parsing`.`augment` = `lad`.`language_augment_definition_id`
        AND `augment`.`i18n_language_id` = `word_class`.`language_id`

    -- Suffix Person --
    LEFT JOIN (
        `language_person_definition` `lpsd`
        JOIN `i18n_localized_string` `suffix_person` ON `suffix_person`.`i18n_localized_string_id` = `lpsd`.`i18n_string_id`
    ) ON `lpsd`.`language_person_definition_id` = `language_sign_cluster_reading_parsing`.`suffix_person`
        AND `suffix_person`.`i18n_language_id` = `word_class`.`language_id`

    -- Suffix Gender --
    LEFT JOIN (
        `language_gender_definition` `lgsd`
        JOIN `i18n_localized_string` `suffix_gender` ON `suffix_gender`.`i18n_localized_string_id` = `lgsd`.`i18n_string_id`
    ) ON `lgsd`.`language_gender_definition_id` = `language_sign_cluster_reading_parsing`.`suffix_gender`
        AND `suffix_gender`.`i18n_language_id` = `word_class`.`language_id`

    -- Suffix Number --
    LEFT JOIN (
        `language_number_definition` `lnsd`
        JOIN `i18n_localized_string` `suffix_number` ON `suffix_number`.`i18n_localized_string_id` = `lnsd`.`i18n_string_id`
    ) ON `lnsd`.`language_number_definition_id` = `language_sign_cluster_reading_parsing`.`suffix_number`
        AND `suffix_number`.`i18n_language_id` = `word_class`.`language_id`

WHERE `language_sign_cluster_reading_parsing`.`manuscript_sign_cluster_reading_id` = ?
ORDER BY `language_sign_cluster_reading_parsing`.`element_sequence` ASC

Connection ID (thread ID): 6
Status: NOT_KILLED

Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on,not_null_range_scan=off,hash_join_cardinality=off,cset_narrowing=off

The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/ contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /var/lib/mysql
Resource Limits:
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15053                15053                processes
Max open files            64183                64183                files
Max locked memory         2097152              2097152              bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15053                15053                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Core pattern: |/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E

Kernel version: Linux version 5.15.0-92-generic (buildd@lcy02-amd64-005) (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #102-Ubuntu SMP Wed Jan 10 09:33:48 UTC 2024

Minimal Reproduction

Sorry, I am not exactly sure how to reproduce this without my specific database and query, but a rough sketch of how the pool is created and used is shown below.
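
The following is only a minimal sketch of the setup described above, not an actual reproduction: the connection details, the ParsingRow type, and fetch_parsings are illustrative stand-ins, and the real code uses query_file_as! with the long query shown in the crash log.

```rust
use sqlx::mysql::{MySqlConnectOptions, MySqlPool, MySqlPoolOptions};

// Illustrative stand-in for the row type produced by query_file_as! in the real code.
#[derive(Debug, sqlx::FromRow)]
struct ParsingRow {
    lemma_id: u32,
    lemma: String,
}

// The pool is created once at application startup and stored in the state object.
async fn make_pool() -> Result<MySqlPool, sqlx::Error> {
    // Connection details here are placeholders.
    let conn_data = MySqlConnectOptions::new()
        .host("localhost")
        .username("user")
        .password("password")
        .database("my_db");

    MySqlPoolOptions::new()
        .max_connections(5) // changing this value had no effect on the crash
        .connect_with(conn_data)
        .await
}

// Handlers borrow the pool from the state; each call checks a connection out of the pool.
async fn fetch_parsings(db_pool: &MySqlPool, reading_id: u32) -> Result<Vec<ParsingRow>, sqlx::Error> {
    // The real code runs the long query from the crash log via query_file_as!;
    // a short runtime query keeps this sketch self-contained.
    sqlx::query_as::<_, ParsingRow>(
        "SELECT language_lemma_id AS lemma_id, 'stub' AS lemma \
         FROM language_sign_cluster_reading_parsing \
         WHERE manuscript_sign_cluster_reading_id = ?",
    )
    .bind(reading_id)
    .fetch_all(db_pool)
    .await
}
```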

Info

  • SQLx version: 0.7.4
  • SQLx features enabled: "runtime-tokio", "tls-native-tls", "mysql", "macros", "time"
  • Database server and version: MariaDB 10.6.16 (10.6.16-MariaDB-0ubuntu0.22.04.1)
  • Operating system: Linux version 5.15.0-92-generic (buildd@lcy02-amd64-005) (gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #102-Ubuntu SMP Wed Jan 10 09:33:48 UTC 2024
  • rustc --version: [REQUIRED]
Bronson-Brown-deVost (Author) commented:

Also, I can run the problematic query from above in my database program (DataGrip) as many times as I want without any problems or database crashes.

maxcountryman referenced this issue May 30, 2024
* Check max lifetime in return_to_pool, not on acquire

* Improve checks in background maintenance task

* add tests

* adjust test to fix
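
The referenced change adjusts where the pool checks a connection's maximum lifetime (when the connection is returned to the pool rather than when it is acquired). For context, a minimal sketch of the lifetime-related pool options involved is below; the durations and flags are illustrative and not taken from the issue.

```rust
use std::time::Duration;
use sqlx::mysql::{MySqlConnectOptions, MySqlPool, MySqlPoolOptions};

// Sketch of the lifetime-related pool options the referenced change concerns;
// the specific values here are illustrative only.
async fn pool_with_lifetime_checks(conn_data: MySqlConnectOptions) -> Result<MySqlPool, sqlx::Error> {
    MySqlPoolOptions::new()
        .max_lifetime(Duration::from_secs(30 * 60)) // retire connections after 30 minutes
        .idle_timeout(Duration::from_secs(10 * 60)) // close connections idle for 10 minutes
        .test_before_acquire(true)                  // ping each connection before handing it out
        .connect_with(conn_data)
        .await
}
```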