Null-merge branch '8.0' (after 8.0.37) into trunk (8.4) #5389
Commits on Feb 20, 2024
Bug#36272336 Routers configure.cmake cannot be reused
Split router/cmake/configure.cmake to allow reuse. Change-Id: I4f4ca13729767e14ff75d0d915b124596ed1e132
Thomas Nielsen committed Feb 20, 2024 · SHA 6a5a4a5
Commits on Feb 21, 2024
Bug#35854362 - Incorrect results when using group by loose
index scan Description: - Indexes are ordered based on their keys. Loose index scan effectively jumps from one unique value (or set of values) to the next based on the index’s prefix keys. - To “jump” values in an index, we use the handler call: ha_index_read_map(). - The first range read sets an end-of-range value to indicate the end of the first range. - The next range read does not clear the previous end-of-range value and applies it to the current range. - Since the end-of-range value has already been crossed in the previous range read, this causes the reads to stop. So the iteration is finished with the current range without moving on to the next range (unique set of values), resulting in an incorrect query result. Fix: - In order to find the next unique value, the old end-of-range value is cleared. Change-Id: I84290fb794db13ec6f0795dd14a92cf85b9dad09
Ayush Gupta committed Feb 21, 2024 · SHA c7e824d
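The stale end-of-range behaviour described in the commit above can be sketched in a few lines. This is a minimal, hypothetical illustration of the bug shape and the fix; IndexCursor and read_next_unique_prefix() are invented names for this sketch, not the actual MySQL handler API.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Hypothetical stand-in for a handler positioned on an ordered index.
struct IndexCursor {
  std::vector<int> keys;           // ordered index keys
  std::size_t pos = 0;             // current position
  std::optional<int> end_range;    // end-of-range bound left by the previous range read
};

// Jump to the next unique value, as loose index scan does.
bool read_next_unique_prefix(IndexCursor &cur, int &out) {
  cur.end_range.reset();  // the fix: clear the bound left over from the previous range
  const int last = cur.keys[cur.pos];
  while (cur.pos < cur.keys.size() && cur.keys[cur.pos] == last) ++cur.pos;
  if (cur.pos == cur.keys.size()) return false;  // no more unique values
  // Without the reset above, a stale end_range from the previous range would
  // trip this check and stop the scan before reaching the next unique value.
  if (cur.end_range && cur.keys[cur.pos] > *cur.end_range) return false;
  out = cur.keys[cur.pos];
  return true;
}
```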
Commits on Feb 22, 2024
Bug#36248967: Issue in mysqldump (mysql dump utility)
Problem: mysqldump does not sanitize the version string obtained from the server, which may allow malicious commands to be injected into the output. Fix: added a function that sanitizes the version string by cutting off the illegal part and issuing a warning. Test: check the server version in the output with and without an injected payload. Change-Id: I1f19e1c90bdb8d444285e427092face3bb16da01
Michal Jankowski committed Feb 22, 2024 · SHA f351ea9
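A sanitizer of the kind described above can be sketched as a helper that keeps only the leading run of characters that may legally appear in a version string and warns about the rest. This is an illustrative sketch only; the function name and the allowed character set are assumptions, not the code actually added to mysqldump.

```cpp
#include <cctype>
#include <cstdio>
#include <string>

// Keep only the leading characters that can appear in a version string such
// as "8.0.37-debug"; cut off everything from the first unexpected character.
static std::string sanitize_version(const std::string &raw) {
  std::size_t i = 0;
  while (i < raw.size() &&
         (std::isalnum(static_cast<unsigned char>(raw[i])) ||
          raw[i] == '.' || raw[i] == '-' || raw[i] == '_' || raw[i] == '+'))
    ++i;
  if (i != raw.size())
    std::fprintf(stderr,
                 "Warning: server version contains unexpected characters; "
                 "truncating it in the dump output.\n");
  return raw.substr(0, i);
}

int main() {
  std::printf("%s\n", sanitize_version("8.0.37-debug").c_str());
  std::printf("%s\n", sanitize_version("8.0.37\n-- injected payload").c_str());
}
```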
Bug#36159678 (1 of 6) JTie: remove some larger macros in jtie_gcalls
In jtie_gcalls.hpp, macros are used to create lists and templates. Many of these could stringify or concatenate up to 20 parameters, but capacity beyond about 13 is not needed. This patch shortens the lists. Clang will sometimes print a "rejecting template" error for each candidate template not chosen, so this helps to reduce the number of error messages generated before the compiler finally gives up. Change-Id: Iddbb25d0a6e5740815b686259c734a0bbf434d8d
SHA acca876
Bug#36159678 (2 of 6) JTie: Restore const variants of gcall templates
An earlier version of jtie_gcalls generated two distinct templates for gcall_mfv() and gcall_mfr() to map the const and non-const variants of otherwise identical method calls. This strategy was abandoned because some very old C++ compilers could not handle the macro invocations that were used, and because Microsoft's compilers considered the two templates to be equivalent. Microsoft compilers still behave the same way, but Clang requires distinct templates. This patch restores the earlier design, but only in non-Microsoft environments. Change-Id: I6694c7cdc096a6675e71bcd843fec99421367c70
SHA 214d235
Bug#36159678 (3 of 6) JTie: Add missing specialization; test myjapi
In jtie_tconv_object_impl.hpp, add a missing specialization of Target<J, C> for const C. Fix an incorrect mapping in myjapi_classes.hpp for C0::check(). Then undefine JTIE_USE_WRAPPED_VARIANT_FOR_FUNCTION, and compile and run the tests (use "ctest -R jtie"). Change-Id: I5911bb6ef9ea149c2242378d39956a4d4afaab84
SHA 55cd7ce
Bug#36159678 (4 of 6) JTie: Fix some ndbjtie function mappings
The jtie mapping for NdbOperation::getNdbErrorLine() should be marked as const-overloaded. The jtie mapping for Ndb_cluster_connection::set_service_uri() was declared as void, but should actually return int. The mappings for three const methods were defined to use the _t variant of the object trait type, but need to use the _ct variant. Change-Id: I4ab9e7a093c74edef72e2141d701e2e6c32b8bb6
SHA 1d5fba1
Bug#36159678 (5 of 6) JTie: no NDBJTIE_USE_WRAPPED_VARIANT_FOR_FUNCTION
In ndbjtie_defs.hpp, remove the "XXXXX temporary, for testing" comment from 2011, and do not define NDBJTIE_USE_WRAPPED_VARIANT_FOR_FUNCTION or NDBJTIE_USE_WRAPPED_VARIANT_FOR_OVERLOADED_FUNCTION. This will enable use of the unwrapped mappings for most of the NDB API on all platforms. Change-Id: Id132500995cbaee62e7c1e5aa40f9272613501bb
SHA 23f947d
Bug#36159678 (6 of 6) JTie: Remove wrappers
Remove most JTie wrappers. They are no longer needed. The only remaining method wrappers are a small set for overloaded methods that have both const and non-const versions. Use "unifdef -UNDBJTIE_USE_WRAPPED_VARIANT_FOR_FUNCTION" and "unifdef -UNDBJTIE_USE_WRAPPED_VARIANT_FOR_OVERLOADED_FUNCTION" to remove wrapped variants from the ndbapi and mysql_utils jtie header files. Change-Id: Ie08c2242aed2bfda02defe76fef40bd0bad6e9e7
SHA 854e46e
Bug#36313427 ndb_redo_log_reader broken options
In MySQL Cluster 8.0.31 several issues were introduced in ndb_redo_log_reader. The following fixes have been made: - For backward compatibility, legacy options with a single minus (for example -lap) are now allowed after the file argument. - One can now provide the mandatory numeric value for the --mbyte/-mbyte, --page/-page, and --pageindex/-pageindex options. - Option --mbyte now allows values up to 1023 instead of 15, since fragment redo log file sizes up to 1GB are supported. - When using the --mbyte option it now starts reading the file from the given MB; previously it started from 4 times the given value. - Data is now read only once; previously the reader stepped back three quarters of a MB for every MB read and read it again. - Remove cast-qual warning in convert_legacy_options in reader.cpp. Added new test ndb.ndb_redo_log_reader. Change-Id: Iadfabcab9532eddf1314aeb618c72c31ab7b4a6f
SHA 5d24f93
Bug#36313259 ndb_redo_log_reader fails with Record type = 0 not imple…
…mented. ndb_redo_log_reader had several issues; for example, it failed on valid files with: Record type = 0 not implemented. Error in redoLogReader(). Exiting! It also warned about wrong checksums. The following changes were made: - Ignore unused (zero-filled) pages. - Allow partial (header-only) pages. - Treat checksum value 37 as checksum disabled, which is the case for release builds; checksums are only computed in debug builds. - Show mbyte up to 1023 rather than up to 15 in help, still with no check; fragment log file sizes up to 1GB are supported. - Spell FfragmentId as FragmentId. - Fix cast-qual warning when using PrepareOperationRecord::m_keyInfo. Change-Id: I40304555488fcfb1b3f0949096c61fdfff6171c0
SHA eb1c1df
Bug#36313259 ndb_redo_log_reader fails with Record type = 0 not imple…
…mented. ndb_redo_log_reader had several issues; for example, it failed on valid files with: Record type = 0 not implemented. Error in redoLogReader(). Exiting! It also warned about wrong checksums. The following changes were made: - Ignore unused (zero-filled) pages. - Allow partial (header-only) pages. - Treat checksum value 37 as checksum disabled, which is the case for release builds; checksums are only computed in debug builds. - Show mbyte up to 1023 rather than up to 15 in help, still with no check; fragment log file sizes up to 1GB are supported. - Spell FfragmentId as FragmentId. - Fix cast-qual warning when using PrepareOperationRecord::m_keyInfo. A new test ndb.ndb_redo_log_reader was added. Change-Id: I40304555488fcfb1b3f0949096c61fdfff6171c0
SHA 03e7efb
Merge branch 'mysql-5.7-cluster-7.5' into mysql-5.7-cluster-7.6
Change-Id: Icc7257a29ebcf899b6809238ca1bcb5ef8fc61fc
SHA c6754bd
Null merge branch 'mysql-5.7-cluster-7.6' into mysql-8.0
Change-Id: I246b76d3bb974734edd426223649e6f25eef5ffc
SHA 89e2ab3
Bug#36313482 ndb_redo_log_reader can not read encrypted files
Support for encryption was added to ndb_redo_log_reader tool. Change-Id: If28dc314c2fab732474447db68e1a338248695e7
SHA 2e02e9b
Commits on Feb 23, 2024
Bug#36323186: gr_clone_from_unsupported_version failing on weekly-tru…
…nk Windows Problem: Slow Windows machines cause unexpected GR errors during tests: "There was an error when connecting to the donor server.." "For details please check.." Tests lacked these error checks, leading to false positives. Solution: Added suppressions for error messages in the test cases. Change-Id: I9d54f2a8190cdd61ecf56209ac567ceb765133ba
Jaideep Karande committed Feb 23, 2024 · SHA 14f89c1
Bug#35277407 InnoDB:trx hangs due to wrong trx->in_innodb value
This commit backports the fix to 8.0 This patch will solve the following duplicates of this bug: Bug #112425: trx_t might be Use-After-Free in innobase_commit_by_xid Bug #99643: innobase_commit_by_xid/innobase_rollback_by_xid is not thread safe Bug #105036: trx would be used after free in `innobase_commit_by_xid` and rollback Background: TrxInInnoDB is a RAII wrapper for trx_t object used to track if the transaction's thread is currently executing within InnoDB code. It is acquired on all entry points, and as Innodb can be entered "recursively", the trx->in_depth is used to track the balance of enters and exits. On the outermost enter, the thread additionally checks if trx->in_innodb has the TRX_FORCE_ROLLBACK (0x8000 0000) flag set, which means a high priority transaction is attempting an asynchronous rollback of this transaction, so to avoid races, this thread should wait for the rollback to complete. Issue: TrxInInnoDB's destructor calls exit which resets in_depth and in_innodb increased by enter. However innobase_commit_by_xid and innobase_rollback_by_xid calls trx_free_for_background which returns the trx back to the pool, before the destructor is called. If this trx is being reused by another thread, it can lead to data-race and corrupted value of in_depth and in_innodb. If in_depth gets the value of -1, subsequent calls to enter and exit will bump in_innodb by one. This can lead to indefinite wait if in_innodb reaches TRX_FORCE_ROLLBACK. Fix: Ensure that TrxInInnoDB calls exit before returning the trx object to the pool. Further add checks to catch corrupt values of in_depth when freeing trx. Trx state validation before free was missed in trx_free_prepared_or_active_recovered Thanks to Shaohua Wang (Alibaba, Ex-Innodb) for the contribution Change-Id: Ibf79bec85ffa0eaf65f565c169db61536bff10a2
SHA 88b0eba
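The ordering problem described in the commit above is essentially a RAII guard whose destructor touches an object that has already been returned to a pool. The sketch below is a simplified, hypothetical model of that shape; Trx, TrxGuard and release_to_pool() are invented names, not InnoDB's actual types, and it only illustrates why the exit bookkeeping must happen before the object goes back to the pool.

```cpp
#include <cassert>

// Simplified stand-in for trx_t: tracks how deep we are inside "InnoDB code".
struct Trx {
  int in_depth = 0;
};

// RAII guard comparable to TrxInInnoDB: enter on construction, exit on destruction.
struct TrxGuard {
  explicit TrxGuard(Trx *t) : trx(t) { ++trx->in_depth; }
  ~TrxGuard() { --trx->in_depth; }   // touches trx after the function body ran
  Trx *trx;
};

void release_to_pool(Trx *t) {
  // The bug shape: if this runs while a TrxGuard is still alive, the guard's
  // destructor later decrements in_depth on an object another thread may
  // already be reusing. Validate the balance before handing it back.
  assert(t->in_depth == 0 && "guard must have exited before the trx is pooled");
}

void commit_by_xid(Trx *t) {
  {
    TrxGuard guard(t);   // enter InnoDB
    // ... do the commit work ...
  }                      // guard exits here, before the trx is recycled
  release_to_pool(t);    // safe: bookkeeping already balanced
}

int main() { Trx t; commit_by_xid(&t); }
```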
Bug#36298069 Missing REPL$mysql/ndb_apply_status line in
ndb.ndbinfo_plans result Problem: ndb.ndbinfo_plans has been seen to fail as below five times since the test part for WL#11968 was introduced in January 2022. Failures have mostly been on Windows but also on Linux with ASAN. CURRENT_TEST: ndb.ndbinfo_plans --- .../mysql-test/suite/ndb/r/ndbinfo_plans.result 2024-02-11 06:00:21.000000000 +0300 +++ "...\log\ndbinfo_plans.reject" 2024-02-11 11:58:09.260695300 +0300 @@ -297,7 +297,6 @@ 2 NDB$BLOBEVENT_REPL$mysql/ndb_schema_3 NDB$BLOB_4_3 3 REPL$mysql/ndb_schema_result ndb_schema_result 4 ndb_index_stat_head_event ndb_index_stat_head -5 REPL$mysql/ndb_apply_status ndb_apply_status ## Query uses primary keys on both tables: EXPLAIN SELECT event_id, e.name, table_name FROM events e Analysis: The test assumes that event_id values are constant, but that can't be assumed, for example when a prior test has recreated tables or events. Solution: Rewrite the test to find the list of event_ids by table name, which is a stable identifier. Change-Id: I2ba83b0a715ea2ca2c5a75dba4e1e8ea635391c1
SHA 2eaf443
Commits on Feb 26, 2024
Bug#36124625 HCS-11410 : Restore Dbsystem Failing with error "at leas…
…t one redo file is missi After calling rename(..) on POSIX platform one should fsync the (destination) directory node to make sure change is persisted to disc. Redo Log module was attempting fsync("data-dir/") instead of fsync("data-dir/#innodb_redo/"). This could manifest as data loss and InnoDB not being able to start up, if a power-outage (or OS crash) would happen right after rename, and before the change to directory got persisted to disc on its own pace. Similar problem could occur if a snapshot of the file system was taken at that time. The way this bug would manifest in practice is that when InnoDB needs a new redo log file with number 280 (because 279th has become full of data), what it does is the following (see log_files_produce_file()): 1. create "#ib_redo280_tmp" (using open(..) syscall) 2. resize "#ib_redo280_tmp" (using fallocate(..) + fsync/fdatasync()) 3. close("#ib_redo280_tmp") 4. open("#ib_redo280_tmp") 5. write the correct headers to "#ib_redo280_tmp" (using pwrite(..)) 6. because we use a RAII Log_file_handle which notices the file was modified by writes, calls fsync()/fdatasync() before calling close() on the handle 7. rename "#ib_redo280_tmp" to "#ib_redo280" (using rename() syscall) 8. fsync() or fdatasync() the parent directory which contains the renamed file (using a handle obtained from open(directory)) 9. mark "#ib_redo279" as full (using open(..), pwrite(...) ) (note the lack of fsync. We are ok with persisting it "later", but it shouldn't be "too early") 10. close("#ib_redo279") The problem is that in step 8. we fsync wrong directory, so during recovery the InnoDB would notice that the redo log file with the largest number, #ib_redo279, already has LOG_HEADER_FLAG_FILE_FULL indicating that it was full, which implies a new file with even larger number, #ib_redo280, should have been created (which involves rename) to accommodate new data, yet this newer file doesn't exist. InnoDB woud fail with an error: [ERROR] MY-013893 [InnoDB] Found existing redo log files, but at least one is missing. It is unknown if recovery could reach physically consistent state. Please consider restoring from backup or providing --innodb-force-recovery > 0. This patch fixes the bug, by calling fsync on the correct directory. Change-Id: I87f4eb23c60b0d82876789069d1a6716734815ac
Jakub Łopuszański committed Feb 26, 2024 · SHA 51a36bd
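The pattern the commit above describes, rename a file and then fsync the directory that actually contains the renamed entry, is a plain POSIX idiom. The sketch below shows it in isolation; the paths are examples and error handling is reduced to perror, so it is an illustration of the idiom rather than InnoDB's log_files_produce_file() itself.

```cpp
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

// Rename src -> dst and persist the rename by fsync'ing the directory that
// holds the destination entry (here "#innodb_redo/", not the datadir above it).
int durable_rename(const char *src, const char *dst, const char *dst_dir) {
  if (rename(src, dst) != 0) {
    perror("rename");
    return -1;
  }
  int dirfd = open(dst_dir, O_RDONLY | O_DIRECTORY);
  if (dirfd < 0) {
    perror("open dir");
    return -1;
  }
  int rc = fsync(dirfd);   // without this, a crash can "lose" the rename
  if (rc != 0) perror("fsync dir");
  close(dirfd);
  return rc;
}

int main() {
  // Example paths only; mirrors renaming #ib_redo280_tmp to #ib_redo280.
  return durable_rename("datadir/#innodb_redo/#ib_redo280_tmp",
                        "datadir/#innodb_redo/#ib_redo280",
                        "datadir/#innodb_redo");
}
```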
Revert "Bug #33970854 Connect timeout too long towards MGMD nodes"
This patch reverts commit c6953232a38895aae60c37563e12e4476e1d8ade. Change-Id: I2c08dbd279958333b86e938cc120b64f5c1c1c5d
SHA df63fc4
Commits on Feb 27, 2024
Bug#36248967: Issue in mysqldump (mysql dump utility) post push fix
Problem: mysqldump_bugs.test failing on release builds Fix: replace --source include/have_debug.inc by mysql_have_debug.inc Change-Id: I151b69dc0da1c5d36714b453ba51349e5057564b
Michal Jankowski committed Feb 27, 2024 · SHA c334b7e
Bug#36089900 Look for gcc-ar and gcc-ranlib when building on Oracle L…
…inux When building with LTO we should use gcc-ar/gcc-ranlib. Look for these versions regardless of LTO flags. Change-Id: I9da5b30bfbd01d4188e8f21721c26fe027e25762 (cherry picked from commit b88d1f86cd768b5555b2da899cbd07e309a156f6)
Tor Didriksen committed Feb 27, 2024 · SHA d4234d7
Bug#36338366 Bump CMAKE_MINIMUM_REQUIRED VERSION in 8.0
We have dropped support for some obsolete Linux distros, so we can bump the CMAKE_MINIMUM_REQUIRED VERSION. This allows us to use more modern cmake features. Change-Id: I7a37a0f25eff72c00457136dc3b5ee3956b3a139
Tor Didriksen committed Feb 27, 2024 · SHA 64edf9a
Bug#36313482 ndb_redo_log_reader can not read encrypted files
Post push fix. The mysqltest command replace_result does not work on mysqltest command echo. Change-Id: Iaf81824f723e0ec2cdf9c306f944ed0285b4db46
SHA 5c64f9b
Commits on Feb 28, 2024
Bug#36343254 Update BuildRequires for cmake and bison
Some BuildRequires: rules have not been maintained properly. Update version required for cmake and bison. In the patch for 8.0, changed to: -BuildRequires: cmake >= 3.14.6 +BuildRequires: cmake >= 3.11.2 Change-Id: I28041799054bfbc3653e73cfc919aacf4e2b7f7f (cherry picked from commit 33f29fdce4fa3fb71373bbfdf7f27a51514b8009) (cherry picked from commit b0700d0e91abab644515826e156b4070f9de899f)
Tor Didriksen committed Feb 28, 2024 · SHA 2880213
Commits on Feb 29, 2024
Bug#36225456 MySQL Router - Failed checking the Router account authen…
…tication plugin When the Router is bootstrapped it checks that the metadata user is not using the unsupported mysql_native_password plugin. The first part of this check is a query of the users table for the host and plugin. If this query fails the procedure is skipped, but the user sees an error message in the bootstrap output. However, this may not be an error scenario; for example, the user that is used to bootstrap may have no privileges to query the users table, which is the case if the user created by the Shell is used. This patch removes the confusing error message from the bootstrap output. Change-Id: Ic9509a57f5886747f2fe401e566b9f0e2a3c6bd5
Andrzej Religa committed Feb 29, 2024 · SHA 631464b
Bug#36247705 Signal 11 in Health Monitor during shutdown
Disallow system variable updates during shutdown. Change-Id: I6f259cd8eda8c6ac662573e68bfb17ddc386d1c3
Christopher Powers committed Feb 29, 2024 · SHA 7064521
Commits on Mar 1, 2024
Bug#34595073 : MEB 8.0.30 BACKUP FAILS WHEN
PERFORMANCE_SCHEMA IS OFF Analysis: --------- When performance_schema is set to off, MEB fails with error "Unexpected number of rows from MySQL query" while taking a backup. The tables innodb_redo_log_files & keyring_component_status are not available when performance schema is disabled on the server. When MEB tries to query these tables, it gets an empty set as the result and hence it fails. Fix: ---- The fix is to make these tables available even when performance_schema is set to OFF or disabled on the server. Change-Id: I0f5fa1b3293733b517f5f19f1dcdebffc651c061
Sai Tharun Ambati committed Mar 1, 2024 · SHA 17ebe2d
BUG#36059098: Stuck group_replication_set_as_primary
It was reported that the SQL function `group_replication_set_as_primary()`, after successfully setting a new primary, remained waiting for the operation to complete. After analysis of the occurrence and the implementation, and after ruling out other hypotheses, the only probable cause for the issue is a missed/unhandled broadcast on a condition wait call. More precisely on:
```
mysql_mutex_lock(&group_thread_end_lock);
while (action_running && !coordinator_terminating) {
  DBUG_PRINT("sleep", ("Waiting for the group action execution process to terminate"));
  mysql_cond_wait(&group_thread_end_cond, &group_thread_end_lock);
}
mysql_mutex_unlock(&group_thread_end_lock);
```
which can cause the coordinator of the SQL function `group_replication_set_as_primary()` to wait indefinitely. To avoid this issue, the `mysql_cond_wait()` call was replaced by `mysql_cond_timedwait()`, which periodically re-checks whether the predicate being waited on still holds or whether the wait can end. Additionally, a superfluous call to `mysql_cond_broadcast()` was deleted. Change-Id: I41fa0f1b3f3ee644e19b916814837ffe5232e265
SHA c7d66f1
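Replacing a bare condition wait with a timed wait inside a predicate loop is a general pattern. The sketch below shows it with standard C++ primitives rather than the server's mysql_cond_* wrappers, so the names and the one-second period are illustrative assumptions.

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

struct Coordinator {
  std::mutex m;
  std::condition_variable cv;
  bool action_running = true;
  bool terminating = false;

  // Instead of a wait() that can sleep forever if a broadcast is missed,
  // wake up periodically and re-check the predicate.
  void wait_for_action_end() {
    std::unique_lock<std::mutex> lock(m);
    while (action_running && !terminating) {
      cv.wait_for(lock, std::chrono::seconds(1));  // periodic re-check
    }
  }

  void action_finished() {
    {
      std::lock_guard<std::mutex> lock(m);
      action_running = false;
    }
    cv.notify_all();
  }
};
```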
Commits on Mar 2, 2024
Bug#36082229 [1/2] Check there are no active error inserts
set after NDBAPI test run Error insertion improvements - Have each block log when an error insertion is set - Indicates previous and new value - Gives visual indication of when error insert is set in log - Info for debugging problems - Add new ERROR 1 code which can be used to check that Error inserts are cleared, e.g. at the end of a test - Improve setting + clearing of error insert and extra value together. Change-Id: I18ee88a9a02cec7349a39a85aaec6d03f8df4002
SHA 9da45e1
Bug#36082229 [2/2] Check there are no active error inserts set after
NDBAPI test run Problem: Some NDBAPI tests do not clear active error inserts injected into data nodes during test execution. This can have an unpredictable impact on test cases run afterward. Solution: A new function, runCheckNoErrorInserted(), is called at the end of every NDBAPI test case to check whether any error inserts are currently set in data nodes. The function uses the special error insert code '1', which makes a data node crash if ERROR_INSERT_VALUE or ERROR_INSERT_EXTRA is set. Change-Id: I977d8881bae2f6d0d3d5090b0302176c69ae01c4
SHA 742e403
Bug#36356479 decouple TRPMAN error insert handling from CMVMI
Problem: Currently there is a mix between CMVMI and TRPMAN error insert handling. Part of error insert values handled in TRPMAN were originally handled in CMVMI but, when TRPMAN was introduced and all transporter stuff was moved to TRPMAN, they were moved from CMVMI to TRPMAN as well. Also, as part of the moving of transporter handling to TRPMAN, part of the CMVMI error code range was reserved to TRPMAN [9500, 9899] causing CMVMI to have, in fact, two different error range [9000, 9499] and [9900, 9999]. This mix of error insert handling in both CMVMI and TRPMAN is undesirable because in some cases signals are, unnecessarily, sent to CMVMI and then, re-sent to TRPMAN and, in addition, it can cause difficulties reading error insert code. Solution: - TRPMAN error insert handling decoupled from CMVMI. - Error range for CMVMI and TRPMAN redefined as: - TRPMAN [9000, 9599] - CMVMI [9600, 9999] - All existing error insert code preserved with same behaviour as before. - Existing error in the [9000, 9599] range, handled in CMVMI, moved to TRPMAN. - Handling of dump 9006 moved to TRPMAN as well. This patch also fixes testNodeRestart -n Bug24717. Test dumps code 9002 to data node. TRPMAN sets error 9002 in self block, but it never hit because handling of error 9002 is done in CMVMI instead TRPMAN. (It was probably missed in the migration of transporter stuff from CMVMI to TRPMAN). Fix: execNODE_START_REP Implemented in TRPMAN block so error 9002 can be handled as expected. Also, TRPMAN block added to all_blocks list in NDBCNTR in order to make NODE_START_REP to be sent to TRPMAN instances. Change-Id: Ibf56def74818f5043b43c40e39ecdaa1e4e02974
SHA b9a4639
Bug#36192351 Error insert 10099 injected by testBackup never hit
Problem: Some testBackup test cases inject error 10099, but it is never hit. 10099 is not handled anywhere in the data node code. This makes tests fail because the error is never cleared. Solution: Do not inject error 10099 since it is useless. Change-Id: I0644c372919bbd3deeeb07fada25d41fc8ca2396
SHA 9510c4e
Bug#TBD test_event -n Bug30780 leaves Error Inserted after running
Test injects error 8064 to delay LQH_TRANSREQ during TC takeover. Error is consumed by one DBTC instance and then cleared. Since there can be more than 1 TC instance test can fail due to error 8064 being left in other DBTC instances. Fix: clears EI from test side to prevent leftovers when using more than 1 TC instance per data node. Change-Id: Ia95513d030b42577b19748ddb5f28cf4b7c42a15
SHA 19af640
Bug#36356532 testBackup -n Bug17882305 leaves Error Inserted after ru…
…nning Test injects error 10046 in Backup instances to force DIH to change the next fragment to scan. Error is injected to Backup Proxy and then proxy resends the error to all workers where it is eventually cleared. But, in Backup proxy instance error is never cleared. Fix: Clear error in test side (Insert Error 0) to force it to be cleared in all Backup instances (including Backup proxy). Change-Id: Ieafdde52bf19c6d3946d6d53dd26eab9dc329526
SHA d48059c
Bug#36356550 testBackup leaves Error 10036 and 10038 Inserted after r…
…unning - testBackup -n FailMaster T1 Test injects 10038 into backup block. Error hits but backup block does not clear it. Fix: clear 10038 from test side. - testBackup -n NFMasterAsSlave T1 - testBackup -n FailSlave T1 Test injects 10036 into backup block. Error hits but backup block does not clear it. Fix: clear 10038 from test side. Change-Id: If74dbcfedd117d460d3c03e46d0a308cbd1cac8f
SHA a7b04a2
Bug#36356580 testNodeRetart -n Bug27466 leaves Error Inserted after r…
…unning Test injects error 8039 into the DBTC block. The error is consumed by one DBTC instance and then cleared. Since there can be more than 1 TC instance, the test can fail due to error 8086 being left in other DBTC instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 TC instance per data node. Change-Id: I3dc7a05f80ff8a2206be7040b176c80ef139c2c3
SHA a8d005b
Bug#36356598 testNodeRestart -n LCP_with_many_parts_drop_table leaves
Error Inserted after running Test injects error 10048 to force number of parts of a partial LCP to 1. Error is consumed by one Backup instance, but it is never cleared. Test can fail due to error 10048 being left in Backup instances. Fix: clears EI from test side to prevent leftovers. Change-Id: I3c12564cd3cab4bd2869a4490cb9166903508c91
SHA 12344f9
Bug#36356615 testNodeRestart -n CommittedRead leaves Error Inserted
after running Test injects error 8048 and 8049 to make TC not choose own node for simple/dirty read. Error is consumed by one DBTC instance, but it is never cleared. Test can fail due to error 8048 being left in Backup instances. Fix: clears EI from test side to prevent leftovers. Change-Id: I6aa3a4dbc0bfa057e50f5dd752b1f574963e8447
SHA d6f5724
Bug#36356700 testLimits leaves Error 8068 Inserted after running
Test cases ExhaustSegmentedSectionPk, ExhaustSegmentedSectionScan and ExhaustSegmentedSectionIx inject error 8068 to free all segments hoarded by errors 8065, 8066, 8067 sent earlier. The error is consumed by one DBTC instance and then cleared. Since there can be more than 1 TC instance, the test can fail due to error 8068 being left in other DBTC instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 TC instance per data node. Change-Id: Ie0d89636b523811a4c54ee3ece06f00cbd237192
SHA c295d64
Bug#36356719 testIndex leaves Error 5020 Inserted after running
Test cases SR1 and SR1_O inject error 5020 to force the system to read pages from file when executing a prepare operation record. The error is consumed by one LQH instance, but it is never cleared. The test can fail due to error 5020 being left in LQH instances. Fix: clears EI from the test side to prevent leftovers. Change-Id: I06baa010b172c0e97f9c9dcbf4e6b882fb58f66d
SHA d3872e9
Bug#36356730 testSystemRestart leaves Error 5020 Inserted after running
Test case SR1 injects error 5020 to force the system to read pages from file when executing a prepare operation record. The error is consumed by one LQH instance, but it is never cleared. The test can fail due to error 5020 being left in LQH instances. Fix: clears EI from the test side to prevent leftovers. Change-Id: Iada0b78d0651c2403a1817500ed5918608f51ce0
SHA f05030a
Bug#36356744 testSystemRestart leaves Error 5055 Inserted after running
Test case Bug54611 injects error 5055 to force LQH to abort a fragment scan during node restart. The error is consumed by one LQH instance and then cleared. Since there can be more than 1 LQH instance, the test can fail due to error 5055 being left in other LQH instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 LQH instance per data node. Change-Id: I0caf8ee277a308e463145f3504f1830b475da9dd
SHA cdf25de
Bug#36356862 testSystemRestart leaves Error 7072 Inserted after running
Test case Bug22696 injects error 7072 to split START_FRAGREQ into several log nodes. Error is consumed by one DIH instance, but it is never cleared. Test can fail due to error 7072 being left in LQH instances. Fix: clears EI from test side to prevent leftovers. Change-Id: Ia8a69ffa1e9caec7ea06071d4876d352d90fa446
SHA 4b1457b
Bug#36357333 testDict leaves Error 4013 Inserted after running
Test cases CreateAndDropAtRandom and CreateAndDropIndexes inject error 4013 to make TUP verify the table descriptor. The error is consumed by one TUP instance, but it is never cleared. The test can fail due to error 4013 being left in LQH instances. Fix: clears EI from the test side to prevent leftovers. Change-Id: I4e9df5d8494a34b1059e36e7675450ec231cf7a7
SHA 06941ae
Bug#36357341 testDict leaves Error 5088 and 5089 Inserted
after running Test cases DropTableConcurrentLCP and DropTableConcurrentLCP2 inject errors 5088 and 5089 to delay a drop table. The error is consumed by one LQH instance, but it is never cleared. Tests can fail due to error 5088 or 5089 being left in LQH instances. Fix: clears EI from the test side to prevent leftovers. Change-Id: I7f47c674cfa1062cbb6c5c8566dda29e7fa772d4
SHA c4265f6
Bug#36357349 testDict leaves Error 4029 Inserted after running
Test case TableAddAttrsDuringError injects error 4029 to make an alter table fail. The error is consumed by one TUP instance and then cleared. Since there can be more than 1 TUP instance, the test can fail due to error 4029 being left in other TUP instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 TUP instance per data node. Change-Id: Ic707e4c81bc7c72c5269cfac181c858fa9c38639
SHA 4cdbbb1
Bug#36357354 testScan leaves Error 5057 Inserted after running
Test case Bug54945 injects error 5057 to force a fragment scan to fail due to too many active scans. The error is consumed by one LQH instance and then cleared. Since there can be more than 1 LQH instance, the test can fail due to error 5057 being left in other LQH instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 LQH instance per data node. Change-Id: Ice3a6fc2dc7a5f1e69e4423e3fabf897bd67c98f
SHA eeacc9f
Bug#36357361 testScan leaves Error 4036 Inserted after running
Test case TupCheckSumError injects error 4036 to simulate tuple corruption detection. The error is consumed by one TUP instance and then cleared. Since there can be more than 1 TUP instance, the test can fail due to error 4036 being left in other TUP instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 TUP instance per data node. Change-Id: Ic1f5f9c6f7092b77c6a60f8b56aac2dedce81955
SHA 6700fb8
Bug#36357373 testDict leaves Error 5076 Inserted after running
Error is injected to LQH Proxy and then proxy resends the error to all workers where it is eventually cleared. But, in LQH proxy instance error is never cleared. Fix: Clear error in test side (Insert Error 0) to force it to be cleared in all LQH instances (including LQH proxy). Change-Id: Icb06ee1b360fa424a1c972ec799749891b2f0f22
SHA e78b187
Bug#36357380 testestNdbApi -n Bug28443 leaves Error 9003 Inserted aft…
…er running Error is injected to TRPMAN Proxy and then proxy resends the error to all workers where it is eventually cleared. But, in TRPMAN proxy instance error is never cleared. Fix: Clear error in test side (Insert Error 0) to force it to be cleared in all TRPMAN instances (including TRPMAN proxy). Change-Id: I7a024dca555c8c684e589c62ea2963722b7531a7
SHA 3e1ab80
Commits on Mar 4, 2024
Bug#36082229 [1/2] Check there are no active error inserts
set after NDBAPI test run Error insertion improvements - Have each block log when an error insertion is set - Indicates previous and new value - Gives visual indication of when error insert is set in log - Info for debugging problems - Add new ERROR 1 code which can be used to check that Error inserts are cleared, e.g. at the end of a test - Improve setting + clearing of error insert and extra value together. Change-Id: I18ee88a9a02cec7349a39a85aaec6d03f8df4002
SHA c9b957a
Bug#35289234 Can't disable encryption once redo log encryption is ena…
…bled PROBLEM -------- 1. Start the mysqld server innodb_redo_log_encrypt=ON; 2. Shutdown the server. 3. Delete the encryption key. 4. Start the server again with innodb_redo_log_encrypt=OFF; 5. The server is unable to start with the error message that the redo log is encrypted. ANALYSIS -------- 1. This is a known limitation which is documented. 2. During the startup the server reads the redo log file header to determine if the redo log file containing the the latest checkpoint is encrypted or not. This redo log file can contain both encrypted and unencrypted data. 3. If the key ring is missing the startup fails since the header says encrypted data present in the file, but we cannot decrypt it without key. 4. It is to be noted that setting innodb_redo_log_encrypt=OFF dynamically or doing the startup doesn't change the header information in the redo log file, only subsequent new redo log file header will be created without encryption. 5. The redo logs are written to the disk in blocks of 512 bytes 6. When redo log encryption is ON, based on the setting of innodb_log_write_ahead_size we may write empty blocks at the end which are encrypted also. 7. When the user sets innodb_redo_log_encrypt=OFF and then does a slow shutdown the block containing checkpoint_lsn is written to disk without encryption, but the empty block after it may be encrypted 8. During recovery we try to read after the logical redo block containing checkpoint lsn we will encounter an encrypted empty block which causes recovery to fail. FIX --- 1. This patch does the following things (i) We will not read the encryption info from the redo log file header at the start of recovery which will allow the server to start. We only read it if we encounter an encrypted block during scanning of redo logs. (ii) We will not encrypt blocks having empty headers which will ensure that there is no encrypted block after the block having the checkpoint lsn when doing a slow shutdown. Change-Id: I4b37f5a1997f3a2882ac64068e18b33898e871a0
Aditya A committed Mar 4, 2024 · SHA e816c2a
Bug#36362495 routing plugin should not support option disabled
The routing plugin declares the "disabled" option as a supported configuration option even though configuring it does not have any effect. It was added to the supported options by mistake as a copy/paste error. This patch removes "disabled" from the list of supported routing configuration options. The Router will now error out if it is configured. Change-Id: Ia53e9d00769ade1156813965cfe7a86c6c8d25c8
Andrzej Religa committed Mar 4, 2024 · SHA 4d365aa
Commits on Mar 5, 2024
Bug#36082229 [2/2] Check there are no active error inserts set after
NDBAPI test run Problem: Some NDBAPI tests do not clear active error inserts injected into data nodes during test execution. This can have an unpredictable impact on test cases run afterward. Solution: A new function, runCheckNoErrorInserted(), is called at the end of every NDBAPI test case to check whether any error inserts are currently set in data nodes. The function uses the special error insert code '1', which makes a data node crash if ERROR_INSERT_VALUE or ERROR_INSERT_EXTRA is set. Change-Id: I977d8881bae2f6d0d3d5090b0302176c69ae01c4
SHA 7bbb88a
Bug#36356479 decouple TRPMAN error insert handling from CMVMI
Problem: Currently there is a mix between CMVMI and TRPMAN error insert handling. Part of error insert values handled in TRPMAN were originally handled in CMVMI but, when TRPMAN was introduced and all transporter stuff was moved to TRPMAN, they were moved from CMVMI to TRPMAN as well. Also, as part of the moving of transporter handling to TRPMAN, part of the CMVMI error code range was reserved to TRPMAN [9500, 9899] causing CMVMI to have, in fact, two different error range [9000, 9499] and [9900, 9999]. This mix of error insert handling in both CMVMI and TRPMAN is undesirable because in some cases signals are, unnecessarily, sent to CMVMI and then, re-sent to TRPMAN and, in addition, it can cause difficulties reading error insert code. Solution: - TRPMAN error insert handling decoupled from CMVMI. - Error range for CMVMI and TRPMAN redefined as: - TRPMAN [9000, 9599] - CMVMI [9600, 9999] - All existing error insert code preserved with same behaviour as before. - Existing error in the [9000, 9599] range, handled in CMVMI, moved to TRPMAN. - Handling of dump 9006 moved to TRPMAN as well. This patch also fixes testNodeRestart -n Bug24717. Test dumps code 9002 to data node. TRPMAN sets error 9002 in self block, but it never hit because handling of error 9002 is done in CMVMI instead TRPMAN. (It was probably missed in the migration of transporter stuff from CMVMI to TRPMAN). Fix: execNODE_START_REP Implemented in TRPMAN block so error 9002 can be handled as expected. Also, TRPMAN block added to all_blocks list in NDBCNTR in order to make NODE_START_REP to be sent to TRPMAN instances. Change-Id: Ibf56def74818f5043b43c40e39ecdaa1e4e02974
SHA 1bcdbf7
Bug#36192351 Error insert 10099 injected by testBackup never hit
Problem: Some testBackup test cases inject error 10099, but it is never hit. 10099 is not handled anywhere in the data node code. This makes tests fail because the error is never cleared. Solution: Do not inject error 10099 since it is useless. Change-Id: I0644c372919bbd3deeeb07fada25d41fc8ca2396
SHA 0627854
Bug#TBD test_event -n Bug30780 leaves Error Inserted after running
Test injects error 8064 to delay LQH_TRANSREQ during TC takeover. Error is consumed by one DBTC instance and then cleared. Since there can be more than 1 TC instance test can fail due to error 8064 being left in other DBTC instances. Fix: clears EI from test side to prevent leftovers when using more than 1 TC instance per data node. Change-Id: Ia95513d030b42577b19748ddb5f28cf4b7c42a15
SHA 9850f22
Bug#36356532 testBackup -n Bug17882305 leaves Error Inserted after ru…
…nning Test injects error 10046 in Backup instances to force DIH to change the next fragment to scan. Error is injected to Backup Proxy and then proxy resends the error to all workers where it is eventually cleared. But, in Backup proxy instance error is never cleared. Fix: Clear error in test side (Insert Error 0) to force it to be cleared in all Backup instances (including Backup proxy). Change-Id: Ieafdde52bf19c6d3946d6d53dd26eab9dc329526
SHA c50dcaa
Bug#36356550 testBackup leaves Error 10036 and 10038 Inserted after r…
…unning - testBackup -n FailMaster T1 Test injects 10038 into backup block. Error hits but backup block does not clear it. Fix: clear 10038 from test side. - testBackup -n NFMasterAsSlave T1 - testBackup -n FailSlave T1 Test injects 10036 into backup block. Error hits but backup block does not clear it. Fix: clear 10038 from test side. Change-Id: If74dbcfedd117d460d3c03e46d0a308cbd1cac8f
SHA 6850cb6
Bug#36356580 testNodeRetart -n Bug27466 leaves Error Inserted after r…
…unning Test injects error 8039 into the DBTC block. The error is consumed by one DBTC instance and then cleared. Since there can be more than 1 TC instance, the test can fail due to error 8086 being left in other DBTC instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 TC instance per data node. Change-Id: I3dc7a05f80ff8a2206be7040b176c80ef139c2c3
SHA 3f0b02d
Bug#36356598 testNodeRestart -n LCP_with_many_parts_drop_table leaves
Error Inserted after running Test injects error 10048 to force number of parts of a partial LCP to 1. Error is consumed by one Backup instance, but it is never cleared. Test can fail due to error 10048 being left in Backup instances. Fix: clears EI from test side to prevent leftovers. Change-Id: I3c12564cd3cab4bd2869a4490cb9166903508c91
SHA f35deb9
Bug#36356615 testNodeRestart -n CommittedRead leaves Error Inserted
after running Test injects error 8048 and 8049 to make TC not choose own node for simple/dirty read. Error is consumed by one DBTC instance, but it is never cleared. Test can fail due to error 8048 being left in Backup instances. Fix: clears EI from test side to prevent leftovers. Change-Id: I6aa3a4dbc0bfa057e50f5dd752b1f574963e8447
SHA f3302c1
Bug#36356700 testLimits leaves Error 8068 Inserted after running
Test cases ExhaustSegmentedSectionPk, ExhaustSegmentedSectionScan and ExhaustSegmentedSectionIx inject error 8068 to free all segments hoarded by errors 8065, 8066, 8067 sent earlier. The error is consumed by one DBTC instance and then cleared. Since there can be more than 1 TC instance, the test can fail due to error 8068 being left in other DBTC instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 TC instance per data node. Change-Id: Ie0d89636b523811a4c54ee3ece06f00cbd237192
SHA 5ead889
Bug#36356719 testIndex leaves Error 5020 Inserted after running
Test cases SR1 and SR1_O inject error 5020 to force the system to read pages from file when executing a prepare operation record. The error is consumed by one LQH instance, but it is never cleared. The test can fail due to error 5020 being left in LQH instances. Fix: clears EI from the test side to prevent leftovers. Change-Id: I06baa010b172c0e97f9c9dcbf4e6b882fb58f66d
SHA 31c6078
Bug#36356730 testSystemRestart leaves Error 5020 Inserted after running
Test case SR1 injects error 5020 to force the system to read pages from file when executing a prepare operation record. The error is consumed by one LQH instance, but it is never cleared. The test can fail due to error 5020 being left in LQH instances. Fix: clears EI from the test side to prevent leftovers. Change-Id: Iada0b78d0651c2403a1817500ed5918608f51ce0
SHA 2af3a98
Bug#36356744 testSystemRestart leaves Error 5055 Inserted after running
Test case Bug54611 injects error 5055 to force LQH to abort a fragment scan during node restart. The error is consumed by one LQH instance and then cleared. Since there can be more than 1 LQH instance, the test can fail due to error 5055 being left in other LQH instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 LQH instance per data node. Change-Id: I0caf8ee277a308e463145f3504f1830b475da9dd
SHA 68431fe
Bug#36356862 testSystemRestart leaves Error 7072 Inserted after running
Test case Bug22696 injects error 7072 to split START_FRAGREQ into several log nodes. Error is consumed by one DIH instance, but it is never cleared. Test can fail due to error 7072 being left in LQH instances. Fix: clears EI from test side to prevent leftovers. Change-Id: Ia8a69ffa1e9caec7ea06071d4876d352d90fa446
SHA 25cb2f1
Bug#36357333 testDict leaves Error 4013 Inserted after running
Test cases CreateAndDropAtRandom and CreateAndDropIndexes inject error 4013 to make TUP verify the table descriptor. The error is consumed by one TUP instance, but it is never cleared. The test can fail due to error 4013 being left in LQH instances. Fix: clears EI from the test side to prevent leftovers. Change-Id: I4e9df5d8494a34b1059e36e7675450ec231cf7a7
SHA 28d20d4
Bug#36357341 testDict leaves Error 5088 and 5089 Inserted
after running Test cases DropTableConcurrentLCP and DropTableConcurrentLCP2 inject errors 5088 and 5089 to delay a drop table. The error is consumed by one LQH instance, but it is never cleared. Tests can fail due to error 5088 or 5089 being left in LQH instances. Fix: clears EI from the test side to prevent leftovers. Change-Id: I7f47c674cfa1062cbb6c5c8566dda29e7fa772d4
SHA 4de0d49
Bug#36357349 testDict leaves Error 4029 Inserted after running
Test case TableAddAttrsDuringError injects error 4029 to make an alter table fail. The error is consumed by one TUP instance and then cleared. Since there can be more than 1 TUP instance, the test can fail due to error 4029 being left in other TUP instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 TUP instance per data node. Change-Id: Ic707e4c81bc7c72c5269cfac181c858fa9c38639
SHA 9454cc5
Bug#36357354 testScan leaves Error 5057 Inserted after running
Test case Bug54945 injects error 5057 to force a fragment scan to fail due to too many active scans. The error is consumed by one LQH instance and then cleared. Since there can be more than 1 LQH instance, the test can fail due to error 5057 being left in other LQH instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 LQH instance per data node. Change-Id: Ice3a6fc2dc7a5f1e69e4423e3fabf897bd67c98f
SHA f921695
Bug#36357361 testScan leaves Error 4036 Inserted after running
Test case TupCheckSumError injects error 4036 to simulate tuple corruption detection. The error is consumed by one TUP instance and then cleared. Since there can be more than 1 TUP instance, the test can fail due to error 4036 being left in other TUP instances. Fix: EI cleared from the test side to prevent leftovers when using more than 1 TUP instance per data node. Change-Id: Ic1f5f9c6f7092b77c6a60f8b56aac2dedce81955
SHA db5d927
Bug#36357373 testDict leaves Error 5076 Inserted after running
Error is injected to LQH Proxy and then proxy resends the error to all workers where it is eventually cleared. But, in LQH proxy instance error is never cleared. Fix: Clear error in test side (Insert Error 0) to force it to be cleared in all LQH instances (including LQH proxy). Change-Id: Icb06ee1b360fa424a1c972ec799749891b2f0f22
SHA 50f594d
Bug#36357380 testestNdbApi -n Bug28443 leaves Error 9003 Inserted aft…
…er running Error is injected to TRPMAN Proxy and then proxy resends the error to all workers where it is eventually cleared. But, in TRPMAN proxy instance error is never cleared. Fix: Clear error in test side (Insert Error 0) to force it to be cleared in all TRPMAN instances (including TRPMAN proxy). Change-Id: I7a024dca555c8c684e589c62ea2963722b7531a7
SHA 99d8b36
Bug#28341329 : COMPONENT OPTIONS WITH --LOOSE PREFIX AREN'T CONSIDERED
AFTER INSTALLATION Description: Server does not load component variables specified in the configuration file when the component is installed after server start up. Cache plugin and component variables specified in the configuration file. Fix: Added functionality to load component variables specified in the configuration file during component installation. Added functionality to cache plugin and component variables specified in the configuration file during server start up. The server loads the cached variable values instead of rereading the configuration file during plugin and component installation. Change-Id: I5c1633b70daee536d9522692bde38909df894a84
Omar Sharieff committed Mar 5, 2024 · SHA 4b58949
Bug#36128335 mysqltest: shutdown_server on Windows is not waiting for…
… the process to end Symptom: MTR tests are failing on Windows on many different tests with many different symptoms after issuing the `shutdown_server` command, directly or indirectly via include files. Most failures complain about no access to files that the killed server uses, in `force-rmdir` and similar, or about the port still being in use, or about files still being in use during the next server instance startup. Root cause: `mysqltest` `kill_process()` is not waiting for the process to fully die. Issuing `TerminateProcess()` is not enough as it is an asynchronous operation. Fix: After `TerminateProcess()` we call `WaitForSingleObject()` to wait for the process to fully close. A new testcase is added to try to force the above scenario. Change-Id: Ie40e4fa7ef2567281d38ab53e1fa75479abbf1d5
Marcin Babij committed Mar 5, 2024 · SHA 2a1ac3d
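On Windows, TerminateProcess() only requests termination; the fix described above blocks until the process object is signaled. A minimal sketch of that sequence, assuming a process HANDLE is already available (this shows the general Win32 pattern, not mysqltest's kill_process() verbatim):

```cpp
#ifdef _WIN32
#include <windows.h>

// Ask the OS to terminate the process, then wait until it has actually gone
// away so the files, ports and directories it held are really released.
bool kill_and_wait(HANDLE process, DWORD timeout_ms) {
  if (!TerminateProcess(process, 1))   // asynchronous: only queues termination
    return false;
  return WaitForSingleObject(process, timeout_ms) == WAIT_OBJECT_0;
}
#endif
```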
Bug#35932118 innodb.redo_log_archive_01 fails with log0write.cc:2115:ib::fatal
The `os_innodb_umask` is a global variable; it cannot be modified by a thread that wants to create a file with a different UNIX access mode, as doing so would modify it for all threads (not to mention the undefined behavior). `os_file_set_umask` is supposed to be called only once, at InnoDB initialization. However, after `Bug #29472125 NEED OS_FILE_GET_UMASK()` it became possible to modify it, and `WL#12009 - Redo-log Archiving` did so. Fix: - `os_file_get_umask()` is removed - `os_file_set_umask()` is modified so it can be called only once - a Unix-only `os_file_create_simple_no_error_handling_with_umask()` is added to allow specifying the umask as a parameter. Change-Id: I78f169cfb99704e031ea1ff758b970cd8c73240d
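The thread-safety argument is easier to see in a small sketch: rather than temporarily changing the process-wide umask (which races with every other thread creating files), the desired access mode is passed straight to the create call. This only illustrates the idea; it is not the actual InnoDB os_file_* implementation.

    // Sketch: create a file with a caller-supplied access mode instead of
    // mutating a process-global umask around the call.
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    // Racy pattern the fix removes (umask() is process-wide):
    //   mode_t old = umask(0077);
    //   int fd = open(path, O_CREAT | O_WRONLY, 0666);
    //   umask(old);

    int create_file_with_mode(const char *path, mode_t mode) {
      // No global state is touched; the effective permissions are still
      // filtered by the (unchanged) process umask.
      return open(path, O_CREAT | O_WRONLY | O_EXCL, mode);
    }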
Marcin Babij committedMar 5, 2024 Configuration menu - View commit details
-
Copy full SHA for e91467d - Browse repository at this point
Copy the full SHA e91467dView commit details -
Bug#34338001 Performance of Temptable is worse than Memory in GROUP B…
…Y scenario Symptom: Some queries with `SELECT ... GROUP BY` can be a few times slower when executed on TempTable than ones executed on Memory temporary table engine. Root cause: The `AllocatorState::current_block` gets allocated and quickly deallocated in case a single `Row` instance is allocated on it and released in a loop. Fix: The `AllocatorState::current_block` is not released when it gets empty. We release it only when `AllocatorState` is deleted, that is when the `Table` gets deleted. Additional fixes: `AllocationScheme` gets a new `block_freed()` method to be able to be fully responsible for managing memory usage reporting to `MemoryMonitor`. `AllocatorState` knows the `AllocationScheme` and can on its own report usage of memory to it. Change-Id: I64ae2387dc23b3f8d027d4050972bf126aa5d004
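The allocate/release churn described above can be illustrated with a tiny block cache: instead of returning the current block the moment its last row is freed, the allocator keeps it and only releases it when the owner (the table in the real code, the cache object here) is destroyed. A simplified sketch, not the TempTable code itself.

    // Sketch: keep the most recent block alive across free/allocate cycles.
    #include <cstddef>
    #include <cstdlib>

    struct BlockCache {
      void  *current_block = nullptr;
      size_t block_size = 0;
      size_t used_rows = 0;

      void *get_block(size_t size) {
        if (current_block == nullptr || block_size < size) {
          std::free(current_block);
          current_block = std::malloc(size);
          block_size = size;
        }
        return current_block;            // reused even if it had become empty
      }

      void row_allocated() { ++used_rows; }
      void row_freed() {
        if (used_rows > 0) --used_rows;  // note: the block is NOT freed here
      }

      ~BlockCache() { std::free(current_block); }  // released with the owner
    };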
Marcin Babij committedMar 5, 2024 Configuration menu - View commit details
-
Copy full SHA for 0df87ec - Browse repository at this point
Copy the full SHA 0df87ecView commit details -
Null-merge from mysql-5.7-cluster-7.6 ..
Change-Id: I906440eaf54d60fb8bf7a3c23859ae7fb27b0cda
Configuration menu - View commit details
-
Copy full SHA for 44122ca - Browse repository at this point
Copy the full SHA 44122caView commit details
Commits on Mar 6, 2024
-
Bug#36367610 Run only ndb tests in PB2 ndbcluster builds
MySQL Cluster no longer uses a customized MySQL server and there is no need for re-running the non NDB server tests. Instead the NDB specific testing is increased. Change-Id: Ia9f9622116ecd057afcb5d8177f44cf51ef7eae5
Configuration menu - View commit details
-
Copy full SHA for 98d8023 - Browse repository at this point
Copy the full SHA 98d8023View commit details -
Bug#36317795 Contribution: Unified behaviour when calling plugin->dei…
…nit for all plugins This patch unifies plugin's deinit function call to pass valid plugin pointer instead of the nullptr for all types of plugin. Change-Id: I482497bbaff28d5cd31d74d694056a4df6693152
Configuration menu - View commit details
-
Copy full SHA for ce03671 - Browse repository at this point
Copy the full SHA ce03671View commit details
Commits on Mar 7, 2024
-
Bug#36246859: Collation issue: ERROR 1253 (42000): COLLATION ''
is not valid for CHARACTER SET Condition pushdown to a view fails with a collation mismatch if the view was created with a different charset than the charset used when querying the view. Problem is seen if the underlying fields in the view are literals with COLLATE statements. The string literal with the COLLATE statement is cloned when replacing expressions in the condition that is being pushed down with the expressions from the view. The string literal is currently parsed with the connection charset which in this case is different from the one that was used when the view was created. Therefore the COLLATE statement fails. Creation context for a view has the connection charset and collation information which was used when the view was created. This is currently used for parsing the view when it is later queried. We use the same now when cloning expressions from a view during condition pushdown. Change-Id: Ib040b9a67ddedd5fb9bf5de6fafafb358226e9d9
Chaithra Gopalareddy committedMar 7, 2024 Configuration menu - View commit details
-
Copy full SHA for 1e6ef4c - Browse repository at this point
Copy the full SHA 1e6ef4cView commit details -
Bug#36082229 Check there are no active error inserts set after
NDBAPI test run Fix compilation error due to C++11 syntax storage/ndb/test/include/NDBT_Test.hpp:329: error: expected ';' before 'override' Change-Id: Ifa4e1c594809a0467d0efd7fce1a5e425f03e112
Configuration menu - View commit details
-
Copy full SHA for 2b05117 - Browse repository at this point
Copy the full SHA 2b05117View commit details -
Bug#30766579 ADDING AN INDEX WITH INPLACE GENERATES
Add test for adding index on part of primary key using inplace alter table. Also backport of test case from 8.0: (cherry picked from commit fa0e0e54d6d8bb772b80aacd04e2886ad85707f3) Change-Id: I8010caedadba618af205726a9b84faad2ffb84d6
Configuration menu - View commit details
-
Copy full SHA for 0d9dfd7 - Browse repository at this point
Copy the full SHA 0d9dfd7View commit details -
Bug#30766579 ADDING AN INDEX WITH INPLACE GENERATES
Add test for adding index on part of primary key using inplace alter table. Change-Id: Id2849c1cf429ca94317dbad89067b9d2c6e850fb
Configuration menu - View commit details
-
Copy full SHA for 109f46a - Browse repository at this point
Copy the full SHA 109f46aView commit details -
Configuration menu - View commit details
-
Copy full SHA for f1e6983 - Browse repository at this point
Copy the full SHA f1e6983View commit details -
Bug#32008963 NDB_76_INPLACE_UPGRADE FAILS TO RESTART SERVER IN VALGRIND
Stop ndb_76_inplace_upgrade test from running in valgrind. This has already been done for all other MySQL Server tests, see Bug29520374 CLEAN UP THE VALGRIND TESTING. Change-Id: I3d4d63b3570875a248aeea97c52c10abc1aebbf5
Configuration menu - View commit details
-
Copy full SHA for e18627e - Browse repository at this point
Copy the full SHA e18627eView commit details -
BUG#36272777 skip test using intentional low timeout valgrind
[ 79%] ndb.server_lifecycle w7 [ fail ] mysqltest: At line 8: Query 'CREATE TABLE t1 ( a INT PRIMARY KEY, b VARCHAR(32) ) engine=ndb' failed. ERROR 1296 (HY000): Got error 4009 'Cluster Failure' from NDBCLUSTER Skip test on valgrind since using low connect wait time intentionally. Change-Id: I4ef55b379f2518121843d4ccc78ac39f7f80def0
Configuration menu - View commit details
-
Copy full SHA for 54114cb - Browse repository at this point
Copy the full SHA 54114cbView commit details -
Bug#36319083 [InnoDB] Merge sort buffer can be too small
Symptom: In some circumstances an index cannot be created; this is configuration-dependent. Root cause: The merge sort file buffer size can in some cases be calculated to be exactly IO_BLOCK_SIZE. The logic for the buffer is that subsequent records of data are added to the buffer, but when adding a row would overflow the buffer, the contents are written to disk in multiples of IO_BLOCK_SIZE and space is freed up. If the buffer is only IO_BLOCK_SIZE, it is likely that at that point the existing contents' length is less than IO_BLOCK_SIZE, in which case nothing gets written, no space can be freed up, and the new record cannot be added. Fix: Since the maximum allowed key length is 3072 and IO_BLOCK_SIZE is 4096 bytes, and given other factors contributing to buffer length, like page size, which also affects the allowed key size, 2 * IO_BLOCK_SIZE is a sufficient minimum length to ensure there are always IO_BLOCK_SIZE bytes in the buffer when the write happens. If this ever changes, the merge will fail gracefully. Change-Id: I7aec373cfed9e364751372cce3746eb7ad75b3b9
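The sizing rule from the fix fits in a couple of lines: whatever size the configuration produces is clamped to at least 2 * IO_BLOCK_SIZE, so a block-aligned flush always has a full IO_BLOCK_SIZE of data to write. The constants below are taken from the description (3072-byte maximum key, 4096-byte IO block); the function itself is illustrative.

    // Sketch: lower-bound the merge sort file buffer so a flush in
    // IO_BLOCK_SIZE multiples can always free space for the next record.
    #include <algorithm>
    #include <cstddef>

    constexpr size_t IO_BLOCK_SIZE = 4096;  // write granularity
    constexpr size_t MAX_KEY_BYTES = 3072;  // largest allowed key

    size_t merge_buffer_size(size_t configured) {
      // With 2 * IO_BLOCK_SIZE, even after keeping a pending record of up to
      // MAX_KEY_BYTES in the buffer there is still >= IO_BLOCK_SIZE to flush.
      return std::max(configured, 2 * IO_BLOCK_SIZE);
    }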
Andrzej Jarzabek committedMar 7, 2024 Configuration menu - View commit details
-
Copy full SHA for ab7d956 - Browse repository at this point
Copy the full SHA ab7d956View commit details -
Bug#36379291 Add more information/attributes to the Windows EXE/DLL f…
…iles Added the attributes CompanyName ProductName LegalCopyright LegalTrademarks Change-Id: I79cb92b90aabc0ca1961559b0b62c36aa5c525ca
Configuration menu - View commit details
-
Copy full SHA for b01ccca - Browse repository at this point
Copy the full SHA b01cccaView commit details -
Bug#36324900 Ignore ENOENT error from unlink() operation
If the NDB data node filesystem is placed on certain distributed filesystems, the data node can fail when it tries to remove a file because the file is reported not to exist. For a local filesystem that should be impossible, but some distributed filesystems may, as part of internal failover handling, retry a removal that had already succeeded before the failover, in which case the second removal fails since the file no longer exists. The data node was changed to allow file and directory removal to fail with a 'file does not exist' error and treat that as a successful removal. Note that this applies only to files under the data node filesystem and to backup files. Removing other files does not have changed semantics, nor does any file removal by other NDB programs. For testing purposes, an extra file deletion call is issued for roughly 1% of file deletion requests; this is only done in debug builds. Change-Id: Ie8f5f587e9e675c2a0705d7e450be0e139b045a8
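The changed semantics amount to a small wrapper around unlink(): a missing file is reported as success, anything else is still an error. The real change is inside the data node's file-system handling and applies only to data node filesystem and backup files; this sketch only shows the shape of the check.

    // Sketch: treat "file already gone" as a successful removal, which is what
    // a retried delete on a distributed filesystem can look like.
    #include <cerrno>
    #include <unistd.h>

    bool remove_file_idempotent(const char *path) {
      if (::unlink(path) == 0) {
        return true;   // removed now
      }
      if (errno == ENOENT) {
        return true;   // already removed, e.g. by a retried operation
      }
      return false;    // genuine failure, still reported
    }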
Configuration menu - View commit details
-
Copy full SHA for 55c08fc - Browse repository at this point
Copy the full SHA 55c08fcView commit details -
Bug#36342792 [InnoDB] IO write to merge file aligned past end of buffer
Symptom: When creating an index on a table containing data, valgrind occasionally reports reads of uninitialized memory from ddl::Builder::bulk_add_row. Root cause: When calculating alignment for the final write of ddl::Key_sort_buffer::serialize, the IO write may be aligned so that it reads from a region partly past the end of the IO buffer. Fix: When such a condition is detected, a portion of the IO buffer is written first to free up space in the buffer. Change-Id: I607ab549712a077cafdc5e067dfd667db40ade4f
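The alignment issue and its fix can be sketched as follows: if rounding the final write up to the IO block size would reach past the end of the allocated buffer, the full blocks are written first, the tail is moved to the front and padded, and only then written. The names and the stubbed write function are illustrative; the real logic sits in ddl::Key_sort_buffer::serialize.

    // Sketch: never issue an aligned write that reads past the buffer's end.
    #include <cstddef>
    #include <cstring>

    constexpr size_t IO_BLOCK = 4096;

    // Stand-in for the real aligned file write.
    static bool write_blocks(const unsigned char *, size_t) { return true; }

    bool flush_tail(unsigned char *buf, size_t capacity, size_t used) {
      const size_t aligned = ((used + IO_BLOCK - 1) / IO_BLOCK) * IO_BLOCK;
      if (aligned <= capacity) {
        return write_blocks(buf, aligned);  // safe: stays inside the buffer
      }
      // Rounding up would reach past the buffer: write the full blocks first,
      const size_t full = (used / IO_BLOCK) * IO_BLOCK;
      if (!write_blocks(buf, full)) return false;
      // then move the remainder to the front and emit one padded block.
      const size_t rest = used - full;
      std::memmove(buf, buf + full, rest);
      std::memset(buf + rest, 0, IO_BLOCK - rest);  // pad, don't read past 'used'
      return write_blocks(buf, IO_BLOCK);
    }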
Andrzej Jarzabek committedMar 7, 2024 Configuration menu - View commit details
-
Copy full SHA for d24a22a - Browse repository at this point
Copy the full SHA d24a22aView commit details
Commits on Mar 8, 2024
-
Bug#35836581 Server crashes when adding a fulltext index
Note: This commit backports the fix to 8.0.
Background: The auxiliary FTS index is updated when adding a fulltext index. This operation uses the Btree_load::insert method to insert tuples into the index. One such table, aux_table, and the corresponding index, FTS_INDEX_TABLE_IND, is used to create the inverted index. This index is a mapping between each word in the table (across all rows) and a vector of all its occurrences. Each occurrence is recorded as a pair of document ID (row where the word was seen) and position (offset in the row). This vector, called the ilist, is stored in the tuple which is inserted into the index using Btree_load::insert(dtuple *tuple, size_t level). In Btree_load::insert, when preparing space for the tuple, we check whether there is enough space in the redo log using log_free_check. To ensure that no latches are held when calling this, Btree_load::release is called prior to log_free_check, followed by Btree_load::latch to acquire the necessary latches. The call sequence is Btree_load::release -> log_free_check -> Btree_load::latch. Btree_load::release, via Page_load::release, buffer-fixes the page and commits the MTR; Btree_load::latch, via Page_load::latch, buffer-unfixes the page and starts the MTR. However, if m_n_recs == 0, meaning that no records have been inserted yet, Btree_load::latch does nothing. The Page_load's MTR is first started in Page_load::init and is committed when Page_load::release is called. It is started again in Page_load::latch. When an index is being rebuilt, the order of the function calls is: init -> latch (does nothing while m_n_recs == 0) -> release -> latch -> release -> latch -> ... -> release -> latch -> finish.
Issue: Btree_load::log_free_check was being called when m_n_recs == 0. This should not happen, since if no records were inserted there is no need to check for free space. Furthermore, Btree_load::insert(dtuple *tuple, size_t level) must increase m_n_recs on a successful insert, instead of Btree_load::builder doing it.
Fix: Ensure that m_n_recs is non-zero when calling log_free_check. Ensure that insert(dtuple, size_t) increases m_n_recs after a successful insert.
Note: It was observed that threads created for FTS::insert and FTS::start_parse_threads were run in the context of std::thread instead of runnable. Added the missing calls to runnable in FTS::insert in order to use DBUG_EXECUTE_IF, and added similar calls in FTS::start_parse_threads. Added asserts to ensure the number of records inserted is calculated correctly both during and at the end of the insert operation. Change-Id: I8ccd55de79b3ec5d2bef0f99a831aecb99a1ca16
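Condensed to its core, the fix is a guard around the release/check/latch sequence plus moving the record counting into the insert itself. The snippet below is only a schematic of that control flow with stubbed helpers, not the actual Btree_load code.

    // Schematic of the guarded sequence in a bulk loader's insert path.
    #include <cstddef>

    class BulkLoader {
     public:
      bool insert_row() {
        if (m_n_recs > 0) {      // nothing inserted yet -> nothing to flush,
          release();             // so skip the release/check/latch round trip
          log_free_check();      // may wait for redo space; no latches held here
          latch();
        }
        if (!do_insert()) return false;
        ++m_n_recs;              // counted right after a successful insert
        return true;
      }

     private:
      void release() {}          // commit the mini-transaction, buffer-fix page
      void latch() {}            // restart the mini-transaction, re-latch page
      void log_free_check() {}   // ensure the redo log has room
      bool do_insert() { return true; }
      size_t m_n_recs = 0;
    };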
Configuration menu - View commit details
-
Copy full SHA for d1a860a - Browse repository at this point
Copy the full SHA d1a860aView commit details -
Bug#34930219 Missing synchronization of access to THD::m_protocol cau…
…sing SIGSEGV A stack allocated protocol instance was popped while i_s.processlist was about to check whether the client connection was still alive. When the protocol instance went out of scope, the call to connection_alive() accessed an invalid pointer, causing SIGSEGV. The fix is to cache the return value from the current protocol's connection_alive() method when pushing, popping or getting the protocol. This might leave results that are slightly out of sync with reality, but a better synchronization is likely to cause performance degradation. Change-Id: I3d512fadaa0df145af3f25d4cc03fa20143c5310 (cherry picked from commit 99c1919e0596bb6fe0292e602299e6da880fa352)
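The shape of that fix is a cached flag refreshed only at points where the protocol stack is known to be stable (push, pop, get), so observers such as the processlist never call into a protocol object whose lifetime has ended. A stripped-down sketch with hypothetical Protocol/Session types standing in for the server's classes:

    // Sketch: cache the liveness answer instead of calling into a protocol
    // object that may already have been popped and destroyed.
    #include <vector>

    struct Protocol {
      virtual bool connection_alive() const = 0;
      virtual ~Protocol() = default;
    };

    class Session {
     public:
      void push_protocol(Protocol *p) {
        m_stack.push_back(p);
        m_alive_cached = p->connection_alive();  // refreshed at a safe point
      }
      void pop_protocol() {
        m_stack.pop_back();
        if (!m_stack.empty())
          m_alive_cached = m_stack.back()->connection_alive();
      }
      // Used by observers; may be slightly stale, but never dereferences a
      // protocol instance that has gone out of scope.
      bool connection_alive() const { return m_alive_cached; }

     private:
      std::vector<Protocol *> m_stack;
      bool m_alive_cached = true;
    };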
Configuration menu - View commit details
-
Copy full SHA for 31c0adf - Browse repository at this point
Copy the full SHA 31c0adfView commit details -
Bug#34929814 Inconsistent FTS state in concurrent scenarios
Bug#36347647 Contribution by Tencent: Resiliency issue in fts_sync_commit Symptoms: During various operations on tables containing FTS indexes, the state of FTS as committed to the database may become inconsistent, affecting the following scenarios: - the server terminates while synchronizing the FTS cache, - synchronization of the FTS cache occurs concurrently with another FTS operation. This inconsistency may lead to various negative effects, including incorrect query results. An example operation which forces the synchronization of the FTS cache is OPTIMIZE TABLE with innodb_optimize_fulltext_only set to ON. Root Cause: The functions 'fts_cmp_set_sync_doc_id' and 'fts_sql_commit' use different trx_t objects in 'fts_sync_commit'. This causes a scenario where 'synced_doc_id' in the config table is already committed but the remaining FTS data is not yet, leading to issues in the scenarios described above - the server terminating between the commits, or concurrent access seeing the intermediate state. Fix: When 'fts_cmp_set_sync_doc_id' is called from 'fts_sync_commit' it uses the transaction provided by the caller. Patch based on contribution by Tencent. Change-Id: I65fa5702db5e7b6b2004a7311a6b0aa97449034f
Andrzej Jarzabek committedMar 8, 2024 Configuration menu - View commit details
-
Copy full SHA for c94f9d8 - Browse repository at this point
Copy the full SHA c94f9d8View commit details
Commits on Mar 9, 2024
-
Bug#34929814 Inconsistent FTS state in concurrent scenarios
Bug#36347647 Contribution by Tencent: Resiliency issue in fts_sync_commit Bug#36342792 IO write to merge file aligned past end of buffer Bug#35237928 When innodb_disable_sort_file_cache=on, create a full-text index will fail Post-push fix: Add doxygen documentation to function parameter. Disable innodb_disable_sort_file_cache test on systems with no O_DIRECT mode support. Fix innodb.fts_sync_commit_resiliency for 8.0 branch. Change-Id: Ie3dbf78b4e84ae22c85d1ce873b58ee926566ff8
Andrzej Jarzabek committedMar 9, 2024 Configuration menu - View commit details
-
Copy full SHA for a036798 - Browse repository at this point
Copy the full SHA a036798View commit details
Commits on Mar 11, 2024
-
Bug #35835864 : Crash during background rollback if both prepare and
active transaction exist SYMPTOM: Assertion failure in InnoDB's background thread when a transaction for which it wants to acquire an MDL lock turns out to be no longer active. ROOT CAUSE: When creating the list of <trx id, table id> pairs for which we will need to acquire MDL in trx_recovery_rollback_thread later, we accidentally include a transaction which not only will not be rolled back by the InnoDB background thread (because it is TRX_STATE_PREPARED) but, worse still, might be rolled back by the binlog logic even before we get to spawn that thread. SOLUTION: InnoDB should never include tables of transactions which are in the TRX_STATE_PREPARED state in the list, because InnoDB is not allowed to roll them back by itself, and they might be rolled back by the binlog logic. Thanks to Genze Wu (Alibaba) for the contribution. Change-Id: I9f366c1e1022464e6dd43de08bed89f0510ad786
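The essence of the fix is a filter while collecting candidates for the background rollback: transactions in the prepared state are skipped, because their fate belongs to the server's binlog/XA logic. A hedged sketch with simplified types standing in for InnoDB's trx_t and the <trx id, table id> list:

    // Sketch: build the work list for the recovery rollback thread while
    // leaving XA-prepared transactions alone.
    #include <cstdint>
    #include <utility>
    #include <vector>

    enum class TrxState { ACTIVE, PREPARED, COMMITTED };

    struct Trx {
      uint64_t id;
      uint64_t table_id;
      TrxState state;
    };

    std::vector<std::pair<uint64_t, uint64_t>> collect_rollback_work(
        const std::vector<Trx> &recovered) {
      std::vector<std::pair<uint64_t, uint64_t>> work;
      for (const Trx &trx : recovered) {
        if (trx.state == TrxState::PREPARED) {
          continue;  // not ours to roll back; the binlog/XA layer decides
        }
        if (trx.state == TrxState::ACTIVE) {
          work.emplace_back(trx.id, trx.table_id);
        }
      }
      return work;
    }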
Mohammad Tafzeel Shams committedMar 11, 2024 Configuration menu - View commit details
-
Copy full SHA for bb2a400 - Browse repository at this point
Copy the full SHA bb2a400View commit details -
Approved by: Erlend Dahl <erlend.dahl@oracle.com>
Configuration menu - View commit details
-
Copy full SHA for 011d8c0 - Browse repository at this point
Copy the full SHA 011d8c0View commit details -
Approved by: Erlend Dahl <erlend.dahl@oracle.com>
Configuration menu - View commit details
-
Copy full SHA for 695eefd - Browse repository at this point
Copy the full SHA 695eefdView commit details
Commits on Mar 14, 2024
-
Bug#36108397 Upgrade to latest protobuf library [patches]
Followup patch: The upgrade to the latest protobuf library has made some changes that are incompatible with the MSVC PGO compiler and linker options on Windows. These incompatibilities are addressed by excluding the protoc executable and Abseil/protobuf DLLs from PGO. Change-Id: I587e2c8729bd07afb6508c748ddf73eca881d2a7
Configuration menu - View commit details
-
Copy full SHA for 81e18a5 - Browse repository at this point
Copy the full SHA 81e18a5View commit details -
Bug#36108397 Upgrade to latest protobuf library [patches]
Followup patch: Suppress linker warnings generated as a consequence of combining CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS with __declspec(dllexport) Change-Id: I7c6b63e6db5c327b582eb8994ee6ef3a64356abc
Configuration menu - View commit details
-
Copy full SHA for f34f780 - Browse repository at this point
Copy the full SHA f34f780View commit details
Commits on Mar 27, 2024
-
Approved-by: Balasubramanian Kandasamy <balasubramanian.kandasamy@oracle.com>
Configuration menu - View commit details
-
Copy full SHA for f96cdad - Browse repository at this point
Copy the full SHA f96cdadView commit details -
Bug#34930219 Missing synchronization of access to THD::m_protocol cau…
…sing SIGSEGV Followup fix: Moving test and result file to internal. Change-Id: Ic9188da7aa08af6c2709219d5d3fb61f5c803913 (cherry picked from commit 9f841c44dd1b128fbd6288c2393f58db89d3e00e)
Configuration menu - View commit details
-
Copy full SHA for 5e9663c - Browse repository at this point
Copy the full SHA 5e9663cView commit details -
Bug#36343647 [ERROR] [MY-013183] [InnoDB] Assertion failure:ibuf0ibuf…
….cc:3833:ib::fatal While investigating above bug, we noticed the test was also failing because of the following bug which is fixed in 8.4. Therefore, cherry-picked following commit#79d678d8219 from 8.4 to 8.0 : Bug#34982348 Assertion failure: mtr0mtr.cc:310:ib::fatal Background: ----------- At the time of inserting a record in the secondary index, innodb tries to find out the position on the tree where to insert the record. During this time if innodb detects that required page is not in the buffer pool then it tries to insert the record in the change buffer (ibuf) while holding an S latch on the tree. On the other hand, once change buffer has buffered that entry in its ibuf tree, it detects if the ibuf index tree needs to be modified then it tries to contract the ibuf tree. In the process, it calls the ibuf_merge_pages() method where we added the `log_free_check()` method before starting an mtr. `log_free_check()` method was added through WL#10310. It has novel intent that is it ensures that there is sufficient space in the redo log buffer and, (debug only) thread is not holding any latch. In this case query thread is holding the S-latch on the index tree. `log_free_check()` must not be called with active latches as that may lead to deadlocks. But merges happen through the background master thread as well. At this time the thread is not already inside a parent mtr. Does that mean we need the log_free_check() in this case ? Notice : * ibuf_merge_in_background() calls ibuf_merge_pages() with sync=false * mtr created in the ibuf_merge_pages() is read-only * With the sync=true the buf_read_ibuf_merge_pages() call does not exit until it reads secondary index leaf pages mentioned in the ibuf pages. These reads may in turn cause the ibuf merge. Now, these ibuf merges may generate more redo logs irrespective of current or io thread. That means the following : - if the current thread is background thread then it doesn't require `log_free_check()` because it does the merges with sync=false. - if the current thread is a query thread that is always inside an mtr, then in this case too it doesn't require `log_free_check()`. This thread may do the merges synchronously as well. Fix: ---- - Removed the ` log_free_check()` call from the `ibuf_merge_pages()` - Introduced a wrapper method that help us detect situation if the thread is already inside an mtr. - Added relevant asserts to validate observations mentioned before. - Developed an mtr test to verify ibuf contraction. Change-Id: I44a1bf7e4605e51e485576807c8578f9c769d993
Configuration menu - View commit details
-
Copy full SHA for 4aa1d53 - Browse repository at this point
Copy the full SHA 4aa1d53View commit details -
Bug#36343647 [ERROR] [MY-013183] [InnoDB] Assertion failure: ibuf0ibuf.cc:3833:ib::fatal
This is a cherry-pick of the following bug fix, which is already fixed in 8.4 through commit#79d67: Bug#35676106 Assertion failure: ibuf0ibuf.cc:3825:ib::fatal triggered thread. Description: When the pages of a secondary index are brought into the buffer pool, either through the ibuf merge background thread or read through the usual I/O thread, cached entries from the change buffer are first applied to the pages. Once the entries are applied to the page, they are removed from the change buffer. It is possible that the table is deleted, or is being deleted, during the change buffer operations described above. The current code handled the situation where the tablespace is already deleted, but not where it is being deleted; the latter situation must be handled similarly. Fix: - Replaced the call fil_space_get_flags() with fil_space_acquire_silent(); the latter method refuses to acquire a tablespace that is being deleted. - Improved the doxygen of the method ibuf_restore_pos(). Change-Id: Ibc5a07c705988282b8b7906d645e2a108f4ada76
Configuration menu - View commit details
-
Copy full SHA for 5d06230 - Browse repository at this point
Copy the full SHA 5d06230View commit details -
Bug#36425219 log_writer_write_buffer must double-check log.write_lsn after reacquiring mutex
log_writer_write_buffer calls various functions which may temporarily release log.writer_mutex, which in the case of --innodb_log_writer_threads=OFF can lead to other threads writing to the redo log in between. In such a case the value of log.write_lsn stored in a local variable would no longer be valid. This patch lets log_writer_write_buffer return (so that the caller can retry) in case log.write_lsn has changed value in the meantime. Change-Id: Ieae034d059a97927e8aaef32d2c119a1295e25c6
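The retry pattern is the classic "recheck shared state after reacquiring the lock": the writer snapshots log.write_lsn, may drop the mutex inside helper calls, and on return bails out (so the caller retries) if another thread advanced the LSN in between. A simplified sketch with std::mutex standing in for log.writer_mutex:

    // Sketch: a cached copy of shared state is only trusted while the mutex
    // has been held continuously; otherwise give up and let the caller retry.
    #include <cstdint>
    #include <mutex>

    struct Log {
      std::mutex writer_mutex;
      uint64_t write_lsn = 0;
    };

    // Returns false when write_lsn moved while the mutex was released,
    // signalling the caller to retry with fresh state.
    bool writer_write_buffer(Log &log) {
      std::unique_lock<std::mutex> lock(log.writer_mutex);
      const uint64_t snapshot = log.write_lsn;

      lock.unlock();  // helper work; other threads may write redo meanwhile
      lock.lock();

      if (log.write_lsn != snapshot) {
        return false;  // stale snapshot: someone else advanced the LSN
      }
      // ... safe to keep using values derived from `snapshot` ...
      return true;
    }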
Configuration menu - View commit details
-
Copy full SHA for 819d1de - Browse repository at this point
Copy the full SHA 819d1deView commit details -
Bug#36394600 MTR: wait_until_disconnected.inc can't be called with --…
…enable_reconnect Symptom: Some MTR tests that execute `wait_until_disconnected.inc` after executing a `--enable_reconnect` command fail randomly. Root-cause: There is a race condition when server is asked to be restarted and `--enable_reconnect` followed by a `wait_until_disconnected.inc` is called. The server may be up again before the `wait_until_disconnected.inc` starts to poll the server, and it will never notice it went away. Fix: `wait_until_disconnected.inc` calls `--disable_reconnect` explicitly. Some tests are fixed to not call `enable_reconnect` before `wait_until_disconnected.inc`. Change-Id: I6da25e99048d7b26526b164bb206df1c772c3713
Configuration menu - View commit details
-
Copy full SHA for 53caac9 - Browse repository at this point
Copy the full SHA 53caac9View commit details
Commits on May 2, 2024
-
Configuration menu - View commit details
-
Copy full SHA for 6dcee9f - Browse repository at this point
Copy the full SHA 6dcee9fView commit details
Commits on May 16, 2024
-
PS-9067 Fix MTR test failures when run with --mem
https://perconadev.atlassian.net/browse/PS-9067 Post-push fix. Masking of variable paths now considers the case when MTR tests are run in parallel with multiple threads.
Configuration menu - View commit details
-
Copy full SHA for 286fd4f - Browse repository at this point
Copy the full SHA 286fd4fView commit details -
PS-9174 Issue in mysqldump (mysql dump utility)
https://perconadev.atlassian.net/browse/PS-9174 Bug#36248967: mysql/mysql-server@f351ea92a5a mysql/mysql-server@c334b7e5f02 Problem: mysqldump not sanitizing the version string obtained from server which may lead to injecting malicious commands to the output. Fix: added function sanitizing the version string by cutting off illegal part and issuing warning. Test: check the server version in the output with and without injected payload. Change-Id: I1f19e1c90bdb8d444285e427092face3bb16da01
Configuration menu - View commit details
-
Copy full SHA for 1e86448 - Browse repository at this point
Copy the full SHA 1e86448View commit details -
PS-9174 Assertion Failure in /mysql-8.0.34/sql/field.cc:7119
https://perconadev.atlassian.net/browse/PS-9174 Bug#35846221 mysql/mysql-server@3cd7cd2066f Problem is due to missing implementation of Item_func_make_set::fix_after_pullout(), which makes this particular MAKE_SET function be regarded as const and may thus be evaluated during resolving. Fixed by implementing a proper fix_after_pullout() function. Change-Id: I7094869588ce4133c4a925e1a237a37866a5bb3c (cherry picked from commit a9f0b388adeef837811fdba2bce2e4ba5b06863b)
Configuration menu - View commit details
-
Copy full SHA for 5deacc5 - Browse repository at this point
Copy the full SHA 5deacc5View commit details -
PS-9174 Failure in Protocol_classic::send_field_metadata
https://perconadev.atlassian.net/browse/PS-9174 Bug#35904044 mysql/mysql-server@271dcf231d0 There may be a failure when returning metadata to the client for certain SQL queries involving dynamic parameters and subqueries in a SELECT clause. The fix is to avoid setting an item name that is a NULL pointer. Change-Id: I1abe206f97060c218de1ae23c63a4da80ffaaae5
Configuration menu - View commit details
-
Copy full SHA for d3a00fe - Browse repository at this point
Copy the full SHA d3a00feView commit details
Commits on May 22, 2024
-
PS-9174 Incorrect results when using group by loose index scan
https://perconadev.atlassian.net/browse/PS-9174 Bug#35854362 mysql/mysql-server@c7e824d18f7 Description: - Indexes are ordered based on their keys. Loose index scan effectively jumps from one unique value (or set of values) to the next based on the index’s prefix keys. - To “jump” values in an index, we use the handler call: ha_index_read_map(). - the first range read sets an end-of-range value to indicate the end of the first range. - The next range read does not clear the previous end-of-range value and applies it to the current range. - Since the end-of-range value has already been crossed in the previous range read, this causes the reads to stop. So the iteration is finished with the current range without moving onto the next range(unique set of values)resulting in an incorrect query result. Fix: - In order to find the next unique value, the old end-of-range value is cleared. Change-Id: I84290fb794db13ec6f0795dd14a92cf85b9dad09
Configuration menu - View commit details
-
Copy full SHA for 78155e7 - Browse repository at this point
Copy the full SHA 78155e7View commit details -
PS-9174 Signal 11 seen in Gtid_set::~Gtid_set
https://perconadev.atlassian.net/browse/PS-9174 BUG#36093405 mysql/mysql-server@6467f70f615 Group Replication maintains a memory structure that keeps track of transactions accepted to commit but not committed on all members yet. This structure, named certification info, is used to detect conflicts and dependencies between transactions. The certification info is cleaned periodically and on Group Replication stop. There was a race identified between these two operations, more precisely: 1) Certifier::garbage_collect() -> while (it != certification_info.end()) { if (it->second->is_subset_not_equals(stable_gtid_set)) { if (it->second->unlink() == 0) delete it->second; 2) Certifier::~Certifier() -> clear_certification_info(); -> for (Certification_info::iterator it = certification_info.begin(); it != certification_info.end(); ++it) { if (it->second->unlink() == 0) delete it->second; `clear_certification_info()` was being called without securing exclusive access to `certification_info` which could cause concurrent access to its items, more precisely `delete it->second`. To solve the above issue, `~Certifier()` (like all other callers) do secure the exclusive access to certification info. Change-Id: I28111d41adb54248d90137ee9d2c17196de045e8
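The race boils down to one missing critical section: the destructor cleared the shared map without taking the lock the garbage collector holds. A minimal sketch of the corrected shape, with std::mutex and a plain map standing in for the plugin's own lock and certification_info:

    // Sketch: the periodic cleaner and the destructor must hold the same lock
    // before touching (and erasing) entries of the shared map.
    #include <map>
    #include <mutex>
    #include <string>

    class Certifier {
     public:
      void garbage_collect() {
        std::lock_guard<std::mutex> guard(m_lock);
        for (auto it = m_info.begin(); it != m_info.end();) {
          it = m_info.erase(it);  // simplified: erase everything eligible
        }
      }

      ~Certifier() {
        std::lock_guard<std::mutex> guard(m_lock);  // the previously missing lock
        m_info.clear();
      }

     private:
      std::mutex m_lock;
      std::map<std::string, int> m_info;  // stands in for certification_info
    };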
Configuration menu - View commit details
-
Copy full SHA for fc97c78 - Browse repository at this point
Copy the full SHA fc97c78View commit details -
PS-9174 InnoDB:trx hangs due to wrong trx->in_innodb value
https://perconadev.atlassian.net/browse/PS-9174 Bug#35277407 mysql/mysql-server@88b0ebafdf6 This patch will solve the following duplicates of this bug: Bug #112425: trx_t might be Use-After-Free in innobase_commit_by_xid Bug #99643: innobase_commit_by_xid/innobase_rollback_by_xid is not thread safe Bug #105036: trx would be used after free in `innobase_commit_by_xid` and rollback Background: TrxInInnoDB is a RAII wrapper for trx_t object used to track if the transaction's thread is currently executing within InnoDB code. It is acquired on all entry points, and as Innodb can be entered "recursively", the trx->in_depth is used to track the balance of enters and exits. On the outermost enter, the thread additionally checks if trx->in_innodb has the TRX_FORCE_ROLLBACK (0x8000 0000) flag set, which means a high priority transaction is attempting an asynchronous rollback of this transaction, so to avoid races, this thread should wait for the rollback to complete. Issue: TrxInInnoDB's destructor calls exit which resets in_depth and in_innodb increased by enter. However innobase_commit_by_xid and innobase_rollback_by_xid calls trx_free_for_background which returns the trx back to the pool, before the destructor is called. If this trx is being reused by another thread, it can lead to data-race and corrupted value of in_depth and in_innodb. If in_depth gets the value of -1, subsequent calls to enter and exit will bump in_innodb by one. This can lead to indefinite wait if in_innodb reaches TRX_FORCE_ROLLBACK. Fix: Ensure that TrxInInnoDB calls exit before returning the trx object to the pool. Further add checks to catch corrupt values of in_depth when freeing trx. Trx state validation before free was missed in trx_free_prepared_or_active_recovered Thanks to Shaohua Wang (Alibaba, Ex-Innodb) for the contribution Change-Id: Ibf79bec85ffa0eaf65f565c169db61536bff10a2
Configuration menu - View commit details
-
Copy full SHA for 5a77b57 - Browse repository at this point
Copy the full SHA 5a77b57View commit details
Commits on May 23, 2024
-
PS-9174 Inconsistent FTS state in concurrent scenarios
https://perconadev.atlassian.net/browse/PS-9174 Bug#34929814 mysql/mysql-server@c94f9d873b1 mysql/mysql-server@a0367984115 Bug#36347647 Contribution by Tencent: Resiliency issue in fts_sync_commit Symptoms: During various operations on tables containing FTS indexes the state of FTS as comitted to database may become inconsistent, affecting the following scenarios: - the server terminates when synchronizing the FTS cache, - synchronization of FTS cache occurs concurrently with another FTS operation This inconsistency may lead to various negative effects including incorrect query results. An example operation which forces the synchronization of FTS cach is OPTIMIZE TABLE with innodb_optimize_fulltext_only set to ON. Root Cause: Function 'fts_cmp_set_sync_doc_id' and 'fts_sql_commit' use different trx_t objects in function 'fts_sync_commit'. This causes a scenario where 'synced_doc_id' in the config table is already committed, but remaining FTS data isn't yet, leading to issues in the scenarios described above - the server terminating between the commits, or concurrent access getting the intermediate state. Fix: When 'fts_cmp_set_sync_doc_id' is called from 'fts_sync_commit' it will use the transaction provided by the caller. Patch based on contribution by Tencent. Change-Id: I65fa5702db5e7b6b2004a7311a6b0aa97449034f
Configuration menu - View commit details
-
Copy full SHA for 5da6a5a - Browse repository at this point
Copy the full SHA 5da6a5aView commit details -
PS-9174 Unified behaviour when calling plugin->deinit for all plugins
https://perconadev.atlassian.net/browse/PS-9174 Bug#36317795 mysql/mysql-server@ce036717cb5 This patch unifies plugin's deinit function call to pass valid plugin pointer instead of the nullptr for all types of plugin. Change-Id: I482497bbaff28d5cd31d74d694056a4df6693152
Configuration menu - View commit details
-
Copy full SHA for 5883b88 - Browse repository at this point
Copy the full SHA 5883b88View commit details -
PS-9092: Data inconsistencies when high rate of pages split/merge
https://perconadev.atlassian.net/browse/PS-9092 Problem: Query over InnoDB table that uses backward scan over the index occasionally might return incorrect/incomplete results when changes to table (for example, DELETEs in other or even the same connection followed by asynchronous purge) cause concurrent B-tree page merges. Cause: The problem occurs when persistent cursor which is used to scan over index in backwards direction stops on infimum record of the page to which it points currently and releases all latches it has, before moving to the previous page. At this point merge from the previous page to cursor's current one can happen (because cursor doesn't hold latch on current or previous page). During this merge records from the previous page are moved over infimum record and placed before any old user records in the current page. When later our persistent cursor resumes its iteration it might use optimistic approach to cursor restoration which won't detect this kind of page update and resumes the iteration right from infimum record, effectively skipping the moved records. Solution: This patch solves the problem by forcing persisted cursor to use pessimistic approach to cursor restoration in such cases. With this approach cursor restoration is performed by looking up and continuing from user record which preceded infimum record when cursor stopped iteration and released the latches. Indeed, in this case records which were moved during the merge will be visited by cursor as they precede this old-post-infimum record in the page. This forcing of pessimistic restore is achieved by increasing page's modify_clock version counter for the page merged into, when merge happens from the previous page (normally this version counter is only incremented when we delete records from the page or the whole page). Theoretically, this might be also done when we are merging into page the page which follows it. But it is not clear if it is really required, as forward scan over the index is not affected by this problem. In forward scan case different approach to latching is used when we switch between B-tree leaf pages - we always acquire latch on the next page before releasing latch on the current one. As result concurrent merges from the next page to the current one are blocked. Note that the same approach to latching can't be used for backward iteration as it will mean that latching happens into opposite order which will lead to deadlocks.
Configuration menu - View commit details
-
Copy full SHA for 9242483 - Browse repository at this point
Copy the full SHA 9242483View commit details
Commits on May 24, 2024
-
Configuration menu - View commit details
-
Copy full SHA for d4bb6d6 - Browse repository at this point
Copy the full SHA d4bb6d6View commit details
Commits on May 28, 2024
-
Configuration menu - View commit details
-
Copy full SHA for 53f6c40 - Browse repository at this point
Copy the full SHA 53f6c40View commit details -
Merge pull request percona#5299 from Percona-Lab/ubuntu24
Add ubuntu noble ps
Configuration menu - View commit details
-
Copy full SHA for e17b371 - Browse repository at this point
Copy the full SHA e17b371View commit details -
Merge pull request percona#5296 from percona/ubuntu24
PS-9231 Add ubuntu24 build
Configuration menu - View commit details
-
Copy full SHA for 0910775 - Browse repository at this point
Copy the full SHA 0910775View commit details
Commits on Jun 4, 2024
-
PS-9132 mysql.gtid_executed persistent GTID info lost when MySQL crash in Gtid_state::save
https://perconadev.atlassian.net/browse/PS-9132 When the server is killed before persisting the GTIDs into mysql.gtid_executed, the subsequent crash recovery process fails to recover GTIDs from the binary logs because it does not find the "Prev_gtid_log_event" in the last binary log. This happens because the last binlog was created during the previous restart, but no Prev_gtid_log_event was written into the file since the server was killed before persisting to the table. The root cause is that the recovery process does not parse the previous binary logs if "Prev_gtid_log_event" was not found in the last binary log created by the server. This issue is fixed by parsing all previous binary logs until a valid "Prev_gtid_log_event" is seen. The test case encrypted_master_unload_keyring is adjusted for additional warnings.
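The recovery change is essentially "keep walking backwards through the binary log index until a file that actually contains a previous-GTIDs event is found", instead of giving up after the newest file. A schematic of that loop with placeholder helpers; it is not the server's actual recovery code:

    // Schematic: GTID recovery walking binlog files from newest to oldest.
    #include <optional>
    #include <string>
    #include <vector>

    struct GtidSet {};  // recovered GTIDs would live here

    // Placeholder: yields a value only if the file holds a previous-GTIDs event.
    std::optional<GtidSet> read_prev_gtids(const std::string & /*file*/) {
      return std::nullopt;
    }

    std::optional<GtidSet> recover_gtids(const std::vector<std::string> &index) {
      // `index` is ordered oldest..newest, like the binlog index file.
      for (auto it = index.rbegin(); it != index.rend(); ++it) {
        if (auto gtids = read_prev_gtids(*it)) {
          return gtids;  // first file from the end with the event wins
        }
        // The newest file may have been created just before the crash and hold
        // no previous-GTIDs event yet -- keep going instead of giving up.
      }
      return std::nullopt;  // nothing usable found
    }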
Configuration menu - View commit details
-
Copy full SHA for 11d4fed - Browse repository at this point
Copy the full SHA 11d4fedView commit details -
PS-9174 [ERROR] [MY-013183] [InnoDB] Assertion failure:ibuf0ibuf.cc:3…
…833:ib::fatal https://perconadev.atlassian.net/browse/PS-9174 Bug#36343647 mysql/mysql-server@5d06230ae2a This is a cherry-pick of the following bug fix which is already fixed in 8.4 through commit#79d67. Bug#35676106 Assertion failure: ibuf0ibuf.cc:3825:ib::fatal triggered thread Description: ------------ When the pages of secondary index are brought to the buffer pool either through the ibuf merge background thread or read through usual io thread, first cached entries from the change buffer are applied to the pages. Once the entries are applied to the page, they are removed from the change buffer. It may possible that the table is deleted or being deleted during change buffer related operations described in the earlier. In the current code we handled the situation of tablespace is deleted but not being deleted. Latter situation must also be handled similarly. Fix: ==== - Replaced the call fil_space_get_flags() with fil_space_acquire_silent(). Later method refuses to acquire the tablespace that is being deleted. - Improved the doxygen of the method ibuf_restore_pos() Change-Id: Ibc5a07c705988282b8b7906d645e2a108f4ada76
Configuration menu - View commit details
-
Copy full SHA for 7785160 - Browse repository at this point
Copy the full SHA 7785160View commit details -
PS-9174 Update the versions numbers
https://perconadev.atlassian.net/browse/PS-9174 Raised MYSQL_VERSION_EXTRA to 50 in MYSQL_VERSION file. Raised PERCONA_INNODB_VERSION to 50 in univ.i file.
Configuration menu - View commit details
-
Copy full SHA for 314844c - Browse repository at this point
Copy the full SHA 314844cView commit details -
Merge pull request percona#5291 from VarunNagaraju/pos-EOL-2
PS-9174 Backport bug fixes from MySQL 8.0.37
Configuration menu - View commit details
-
Copy full SHA for 0fe7650 - Browse repository at this point
Copy the full SHA 0fe7650View commit details -
Configuration menu - View commit details
-
Copy full SHA for 7e2bce5 - Browse repository at this point
Copy the full SHA 7e2bce5View commit details -
Configuration menu - View commit details
-
Copy full SHA for 73a9a9a - Browse repository at this point
Copy the full SHA 73a9a9aView commit details -
Merge pull request percona#5306 from adivinho/remove-ssl-libs-from-ta…
…rballs remove ssl libs from tarballs
Configuration menu - View commit details
-
Copy full SHA for 6199bdd - Browse repository at this point
Copy the full SHA 6199bddView commit details
Commits on Jun 11, 2024
-
Configuration menu - View commit details
-
Copy full SHA for e668e54 - Browse repository at this point
Copy the full SHA e668e54View commit details -
Configuration menu - View commit details
-
Copy full SHA for afa850a - Browse repository at this point
Copy the full SHA afa850aView commit details
Commits on Jun 12, 2024
-
Merge pull request percona#5314 from adivinho/release-5.7.44-50
Release-5.7.44-50
Configuration menu - View commit details
-
Copy full SHA for e968435 - Browse repository at this point
Copy the full SHA e968435View commit details -
Implemented PS-9217 (Merge MySQL 8.0.37) - merge with conflicts
https://perconadev.atlassian.net/browse/PS-9217 Merge tag 'mysql-8.0.37' into release-8.0.37-29
Configuration menu - View commit details
-
Copy full SHA for c3e0790 - Browse repository at this point
Copy the full SHA c3e0790View commit details -
Configuration menu - View commit details
-
Copy full SHA for 857cd47 - Browse repository at this point
Copy the full SHA 857cd47View commit details -
PS-9217 Merge MySQL 8.0.37 - Fix compilation errors
https://perconadev.atlassian.net/browse/PS-9217 * Use std::sort() instead of removed varlen_sort. * Use a getter method to access allocated_mem_counter. Also, updated the version number.
Configuration menu - View commit details
-
Copy full SHA for ba6aa47 - Browse repository at this point
Copy the full SHA ba6aa47View commit details
Commits on Jun 13, 2024
-
Merge pull request percona#5315 from dlenev/PS-8.0.37-29-9092
PS-9092: Data inconsistencies when high rate of pages split/merge (8.0 version)
Configuration menu - View commit details
-
Copy full SHA for 10f2a1a - Browse repository at this point
Copy the full SHA 10f2a1aView commit details
Commits on Jun 14, 2024
-
Configuration menu - View commit details
-
Copy full SHA for 87a3312 - Browse repository at this point
Copy the full SHA 87a3312View commit details -
Configuration menu - View commit details
-
Copy full SHA for cd95a4a - Browse repository at this point
Copy the full SHA cd95a4aView commit details -
https://perconadev.atlassian.net/browse/TEL-46
TEL-46: MySQL telemetry component creates telemetry files with insufficient r/w privileges for the TA Problem: Depending on the OS configuration, the MySQL component can create telemetry files with permissions insufficient for the TA to read their content (e.g. 600). A permissions mask of at least 644 is required. The TA deletes files after processing, but deletion permission is a matter of the directory mask, not the file itself. Solution: The storage component adds the needed permissions after file creation.
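The fix amounts to widening the file mode right after the telemetry file is created, so the separate TA process can read it regardless of the creating process's umask. A small POSIX sketch using the 644 mask from the description; the function name is illustrative:

    // Sketch: make a freshly written telemetry file readable by other users
    // (0644), since the creating process's umask may have produced 0600.
    #include <sys/stat.h>

    bool make_readable_for_agent(const char *path) {
      // rw for the owner, read-only for group and others.
      return ::chmod(path, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH) == 0;
    }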
Configuration menu - View commit details
-
Copy full SHA for e4261bd - Browse repository at this point
Copy the full SHA e4261bdView commit details -
PS-9165: Product Usage Tracking - phase 1
https://perconadev.atlassian.net/browse/PS-9165 Updated copyright info.
Configuration menu - View commit details
-
Copy full SHA for 515468d - Browse repository at this point
Copy the full SHA 515468dView commit details -
Merge pull request percona#5243 from kamil-holubicki/percona-telemetry
Percona telemetry
Configuration menu - View commit details
-
Copy full SHA for 2cdd70f - Browse repository at this point
Copy the full SHA 2cdd70fView commit details
Commits on Jun 17, 2024
-
Merge pull request percona#5294 from venkatesh-prasad-v/PS-9219-8.0
PS-9219: MySQL converts collation of date data type in ibd but data dictionary (8.0)
Configuration menu - View commit details
-
Copy full SHA for 9b88474 - Browse repository at this point
Copy the full SHA 9b88474View commit details
Commits on Jun 18, 2024
-
Merge pull request percona#5319 from VarunNagaraju/PS-9121-8.0.37-29
PS-9121 Innodb fails to update spatial index
Configuration menu - View commit details
-
Copy full SHA for 2d41d23 - Browse repository at this point
Copy the full SHA 2d41d23View commit details
Commits on Jun 19, 2024
-
Configuration menu - View commit details
-
Copy full SHA for 3f63bcb - Browse repository at this point
Copy the full SHA 3f63bcbView commit details -
Configuration menu - View commit details
-
Copy full SHA for 096c306 - Browse repository at this point
Copy the full SHA 096c306View commit details
Commits on Jun 20, 2024
-
Merge pull request percona#5326 from adivinho/release-8.0.37-29
fix el8 packaging
Configuration menu - View commit details
-
Copy full SHA for 3c476b6 - Browse repository at this point
Copy the full SHA 3c476b6View commit details -
Merge pull request percona#5327 from inikep/PS-9219-8.0-cirrus
PS-9240 [8.0]: Add gcc-14 to `.cirrus.yml`
Configuration menu - View commit details
-
Copy full SHA for 54fca96 - Browse repository at this point
Copy the full SHA 54fca96View commit details
Commits on Jun 21, 2024
-
PS-9235: Skip deleted keys in keys list with Vault API v2
https://perconadev.atlassian.net/browse/PS-9235 This is a follow-up fix for percona@7a874b2, which fixed only one case of deleted-key processing. During keyring_vault plugin startup, the plugin fetches a list of known key names via the Vault API and populates the key cache without fetching actual key data at this stage. If the v2 API is used, the fetched list contains names of already deleted keys (with deletion_time set in the key metadata). Later, when the server attempts to create a key in the keyring with some name, its existence is checked in the local cache first. The plugin finds the name of an already deleted key and makes a wrong assumption about the key's existence. To fix the issue, cache population was improved: it now loads complete key data from the Vault server while populating the cache. This lets the plugin check whether a particular key is still valid. Already deleted keys are not added to the local cache during plugin startup.
Configuration menu - View commit details
-
Copy full SHA for 1b2398b - Browse repository at this point
Copy the full SHA 1b2398bView commit details
Commits on Jun 25, 2024
-
Merge pull request percona#5324 from oleksandr-kachan/PS-9235-8.0.37
PS-9235: Skip deleted keys in keys list with Vault API v2
Configuration menu - View commit details
-
Copy full SHA for 2f4f0ba - Browse repository at this point
Copy the full SHA 2f4f0baView commit details -
Merge pull request percona#5285 from percona-ysorokin/dev/PS-8963-8.0…
…-sequence_table_keyword PS-8963 fix 8.0: SEQUENCE_TABLE Issue
Configuration menu - View commit details
-
Copy full SHA for 79c1086 - Browse repository at this point
Copy the full SHA 79c1086View commit details
Commits on Jun 26, 2024
-
PS-9165 Percona telemetry MTR test post-push fix
https://perconadev.atlassian.net/browse/PS-9165 The thread number in the test percona_utility_user is masked since it is different when telemetry is enabled from when it is disabled.
Configuration menu - View commit details
-
Copy full SHA for 81756ef - Browse repository at this point
Copy the full SHA 81756efView commit details -
Merge pull request percona#5335 from VarunNagaraju/PS-9165-MTR-test
PS-9165 Percona telemetry MTR test post-push fix
Configuration menu - View commit details
-
Copy full SHA for 923d936 - Browse repository at this point
Copy the full SHA 923d936View commit details -
Merge pull request percona#5336 from percona-ysorokin/dev/PS-8963-8.0…
…-sequence_table_keyword PS-8963 fix: SEQUENCE_TABLE Issue
Configuration menu - View commit details
-
Copy full SHA for 89a18a5 - Browse repository at this point
Copy the full SHA 89a18a5View commit details
Commits on Jun 27, 2024
-
Configuration menu - View commit details
-
Copy full SHA for 34dd242 - Browse repository at this point
Copy the full SHA 34dd242View commit details -
Merge pull request percona#5329 from VarunNagaraju/PS-9217-gcc-14
PS-9217: Merge MySQL 8.0.37 Fix warnings with gcc-14
Configuration menu - View commit details
-
Copy full SHA for 1d3c1fb - Browse repository at this point
Copy the full SHA 1d3c1fbView commit details -
Configuration menu - View commit details
-
Copy full SHA for e16519a - Browse repository at this point
Copy the full SHA e16519aView commit details -
Configuration menu - View commit details
-
Copy full SHA for 3daa680 - Browse repository at this point
Copy the full SHA 3daa680View commit details -
PKG-38 SElinux blocks PS from writing telemetry if semanage is not pr…
…esent PKG-40 AA profile update (cherry picked from commit b43c182)
Configuration menu - View commit details
-
Copy full SHA for 353e667 - Browse repository at this point
Copy the full SHA 353e667View commit details -
PS-9165 postfix: Product Usage Tracking - phase 1 (MTR fixes)
https://perconadev.atlassian.net/browse/PS-9165 'sys_vars.plugin_dir_basic' and 'clone.plugin_mismatch' MTR test cases modified so that they could be run on a server built both with and without telemetry component ('-DWITH_PERCONA_TELEMETRY=ON' CMake option).
Configuration menu - View commit details
-
Copy full SHA for 897e4f7 - Browse repository at this point
Copy the full SHA 897e4f7View commit details -
Merge pull request percona#5339 from percona-ysorokin/dev/PS-9165-8.0…
…-mtr_postfixes PS-9165 postfix 8.0: Product Usage Tracking - phase 1 (MTR fixes)
Configuration menu - View commit details
-
Copy full SHA for bfeeb75 - Browse repository at this point
Copy the full SHA bfeeb75View commit details -
PS-9165 postfix 8.0: Product Usage Tracking - phase 1
https://perconadev.atlassian.net/browse/PS-9165 When the server is started with the --innodb-read-only flag, the Percona Telemetry Component cannot be installed/uninstalled, because it is prohibited to add/delete the row in the mysql.component table. In such a case the "Skip updating mysql metadata in InnoDB read-only mode." warning was printed to the server's error log, which caused several MTR tests to fail. Solution: Do not call dd::check_if_server_ddse_readonly(); do the check directly and print an information-level message if needed.
Configuration menu - View commit details
-
Copy full SHA for d6057e8 - Browse repository at this point
Copy the full SHA d6057e8View commit details -
Merge pull request percona#5341 from kamil-holubicki/PS-9165-8.0-mtr_…
…postfixes-3 PS-9165 postfix 8.0: Product Usage Tracking - phase 1
Configuration menu - View commit details
-
Copy full SHA for 81619b3 - Browse repository at this point
Copy the full SHA 81619b3View commit details
Commits on Jun 28, 2024
-
PS-9165 postfix 8.0: Product Usage Tracking - phase 1 (MTR fixes) (pe…
…rcona#5340) * PS-9165 postfix 8.0: Product Usage Tracking - phase 1 (MTR fixes) https://perconadev.atlassian.net/browse/PS-9165 1. test_session_info.test duplicated with IDs recorded for the case when Percona Telemetry is built-in. It is better than masking IDs in output, because the test relies on real ID values 2. regression.test - ID masked in test output 3. prep_stmt_sundries - make assertion value dependant on Percona Telemetry being built-in
Configuration menu - View commit details
-
Copy full SHA for 28264d3 - Browse repository at this point
Copy the full SHA 28264d3View commit details -
PS-9222 ALTER TABLE ALGORITHM=INSTANT FIX #1
https://perconadev.atlassian.net/browse/PS-9222 Problem ======= When writing to the redo log, an issue of column order change not being recorded with INSTANT DDL was fixed by creating an array with size equal to the number of fields in the index which kept track of whether the original position of the field was changed or not. Later, that array would be used to make a decision on logging the field. But, this solution didn't take into account the fact that there could be column prefixes because of the primary key. This resulted in inaccurate entries being filled in the fields_with_changed_order[] array. Solution ======== It is fixed by using the method, get_col_phy_pos() which takes into account the existence of column prefix instead of get_phy_pos() while generating fields_with_changed_order[] array.
Configuration menu - View commit details
-
Copy full SHA for d537fd3 - Browse repository at this point
Copy the full SHA d537fd3View commit details -
PS-9222 ALTER TABLE ALGORITHM=INSTANT FIX #2
https://perconadev.atlassian.net/browse/PS-9222 Problem: When writing to the redo log, an issue of column order changes not being recorded with INSTANT DDL was fixed by checking whether the fields are also reordered, then adding the columns to the list. However, when calculating the size of the buffer, this fix does not take into account the extra fields that may be logged, eventually causing the assertion on the buffer size to fail. Solution: To calculate the buffer size correctly, we move the logic for finding reordered fields before the buffer size calculation, then count the number of fields with the same logic used when deciding whether a field needs to be logged.
Configuration menu - View commit details
-
Copy full SHA for 39e9957 - Browse repository at this point
Copy the full SHA 39e9957View commit details -
Merge pull request percona#5321 from VarunNagaraju/PS-9222-8.0
PS-9222 Include reordered fields when calculating mlog buffer size
Configuration menu - View commit details
-
Copy full SHA for 86bde10 - Browse repository at this point
Copy the full SHA 86bde10View commit details -
PS-9165 postfix: Product Usage Tracking - phase 1 (MTR KV fixes) (per…
…cona#5342) https://perconadev.atlassian.net/browse/PS-9165 'keyring_vault.keyring_udf' MTR test case modified so that it could be run on a server built both with and without telemetry component ('-DWITH_PERCONA_TELEMETRY=ON' CMake option).
Configuration menu - View commit details
-
Copy full SHA for f43c48d - Browse repository at this point
Copy the full SHA f43c48dView commit details
Commits on Jul 4, 2024
-
Revert "PS-9222 ALTER TABLE ALGORITHM=INSTANT FIX #2"
This reverts commit 39e9957.
Configuration menu - View commit details
-
Copy full SHA for da3fb7b - Browse repository at this point
Copy the full SHA da3fb7bView commit details -
Revert "PS-9222 ALTER TABLE ALGORITHM=INSTANT FIX #1"
This reverts commit d537fd3.
Configuration menu - View commit details
-
Copy full SHA for f1b4ed4 - Browse repository at this point
Copy the full SHA f1b4ed4View commit details -
Bug#36571091: MySQL server crashes on UPDATE after ALTER TABLE
fields_with_changed_order didn't treat prefix_len key table well. col->get_phy_pos() returns raw data about prefix phy_pos. We need to use field->get_phy_pos() for actual phy_pos here. Change-Id: If13449f9e6e6191cd0f2e7102f62ac28024727f8 (cherry picked from commit 678c193)
Configuration menu - View commit details
-
Copy full SHA for ae97d1a - Browse repository at this point
Copy the full SHA ae97d1aView commit details -
Bug#36526369 MySQL server crashes on UPDATE after ALTER TABLE
Issue: In the bug#35183686 fix, we started logging the fields whose column order changed, but while calculating the size needed in the redo log to log the index information, these columns were missing from the calculation. Fix: Make sure these columns are also considered while calculating the size needed to log the index entry. Change-Id: Ic8752c72a8f5beddfc5739688068b9c32b02a700 (cherry picked from commit e6248f5)
Configuration menu - View commit details
-
Copy full SHA for d277f1d - Browse repository at this point
Copy the full SHA d277f1dView commit details -
Bug#36526369 MySQL server crashes on UPDATE after ALTER TABLE
- Post push fix for memory leak Change-Id: I034bb20c71dfe3ae467e762b2dd7c7f95fb0679b (cherry picked from commit a3561b3)
Configuration menu - View commit details
-
Copy full SHA for 6bab954 - Browse repository at this point
Copy the full SHA 6bab954View commit details -
PS-9222 Test cases for MySQL server crashes on UPDATE after ALTER TABLE
https://perconadev.atlassian.net/browse/PS-9222 Our patches have been reverted in favour of upstream's fixes in 8.0.38. The test cases for the scenario have been preserved.
Configuration menu - View commit details
-
Copy full SHA for 6190452 - Browse repository at this point
Copy the full SHA 6190452View commit details -
Merge pull request percona#5346 from VarunNagaraju/PS-9222-upstream
PS-9222 MySQL server crashes on UPDATE after ALTER TABLE
Configuration menu - View commit details
-
Copy full SHA for a195525 - Browse repository at this point
Copy the full SHA a195525View commit details
Commits on Jul 8, 2024
-
Configuration menu - View commit details
-
Copy full SHA for 3a6841f - Browse repository at this point
Copy the full SHA 3a6841fView commit details -
Merge pull request percona#5350 from adivinho/release-8.0.37-29
PS-9284 Release tasks ticket for PS-8.0.37
Configuration menu - View commit details
-
Copy full SHA for e88bdef - Browse repository at this point
Copy the full SHA e88bdefView commit details -
Configuration menu - View commit details
-
Copy full SHA for 2f053c6 - Browse repository at this point
Copy the full SHA 2f053c6View commit details -
Merge pull request percona#5351 from adivinho/release-8.0.37-29
PS-9284 Release tasks ticket for PS-8.0.37
Configuration menu - View commit details
-
Copy full SHA for e3b0a41 - Browse repository at this point
Copy the full SHA e3b0a41View commit details
Commits on Jul 9, 2024
-
Merge pull request percona#5325 from percona/release-5.7.44-50
Release-5.7.44-50
Configuration menu - View commit details
-
Copy full SHA for 899a9b7 - Browse repository at this point
Copy the full SHA 899a9b7View commit details
Commits on Jul 19, 2024
-
PS-9317 chgrp and chmod errors during PS package installation with PERCONA_TELEMETRY_DISABLE=1
Configuration menu - View commit details
-
Copy full SHA for 14337bb - Browse repository at this point
Copy the full SHA 14337bbView commit details -
Configuration menu - View commit details
-
Copy full SHA for 176bf7b - Browse repository at this point
Copy the full SHA 176bf7bView commit details -
Merge pull request percona#5359 from adivinho/release-8.0.37-29
PS-9320 Please add tarballs for Debian bookworm and Ubuntu Noble for PS
Configuration menu - View commit details
-
Copy full SHA for 5b212ba - Browse repository at this point
Copy the full SHA 5b212baView commit details
Commits on Jul 25, 2024
-
Configuration menu - View commit details
-
Copy full SHA for 4a1b694 - Browse repository at this point
Copy the full SHA 4a1b694View commit details -
Merge pull request percona#5363 from adivinho/release-8.0.37-29
PS-9284 Release tasks ticket for PS-8.0.37
Configuration menu - View commit details
-
Copy full SHA for 97dced4 - Browse repository at this point
Copy the full SHA 97dced4View commit details
Commits on Aug 2, 2024
-
Configuration menu - View commit details
-
Copy full SHA for 30dc4e7 - Browse repository at this point
Copy the full SHA 30dc4e7View commit details
Commits on Aug 7, 2024
-
Merge pull request percona#5375 from percona/release-8.0.37-29
PS-9284 Release tasks ticket for PS 8.0.37
Configuration menu - View commit details
-
Copy full SHA for 5b68f9b - Browse repository at this point
Copy the full SHA 5b68f9bView commit details
Commits on Aug 8, 2024
-
Merge branch 'percona/5.7' at Percona-Server-5.7.44-50 into null-merge-mysql-5.7.44-50
Configuration menu - View commit details
-
Copy full SHA for b765ea9 - Browse repository at this point
Copy the full SHA b765ea9View commit details
Commits on Aug 9, 2024
-
Merge pull request percona#5376 from oleksandr-kachan/null-merge-mysql-5.7.44-50
Null merge PS 5.7.44-50 at percona@899a9b79e71 into 8.0
Configuration menu - View commit details
-
Copy full SHA for b12015e - Browse repository at this point
Copy the full SHA b12015eView commit details
Commits on Aug 12, 2024
-
PS-9233: UUID Boost library parts to support uuid_vx component
https://perconadev.atlassian.net/browse/PS-9233 The boost::uuid lib is taken from the develop branch of the upcoming version 1.86, last commit hash 02c82ce. Small fix for compatibility with the older boost_1_77_0. (cherry picked from commit 0592d68)
Configuration menu - View commit details
-
Copy full SHA for 9fe9e26 - Browse repository at this point
Copy the full SHA 9fe9e26View commit details -
PS-9233: Implementation of UUID v1-v7 functions according to RFC 9562
https://perconadev.atlassian.net/browse/PS-9233 This squashed commit also contains fixes done or suggested by Yura Sorokin. (cherry picked from commit 63952bf)
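For orientation only: the component wraps Boost.UUID generators behind loadable UDFs. The snippet below is a minimal sketch of that kind of call using the long-stable random (version 4) generator; it is not the component's actual code, and the time-based v1/v6/v7 generators come from the newer Boost sources imported in the previous commit.

```cpp
// Illustrative sketch, not Percona's implementation: generate an RFC 9562
// version 4 (random) UUID with Boost.UUID and print its textual form.
#include <boost/uuid/uuid.hpp>
#include <boost/uuid/uuid_generators.hpp>
#include <boost/uuid/uuid_io.hpp>
#include <iostream>

int main() {
  boost::uuids::random_generator gen;   // version 4: random-based UUIDs
  boost::uuids::uuid id = gen();
  std::cout << boost::uuids::to_string(id) << '\n';
  return 0;
}
```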
Configuration menu - View commit details
-
Copy full SHA for 234b17f - Browse repository at this point
Copy the full SHA 234b17fView commit details -
PS-9233 feature: Implementation of UUID v1-v7 functions according to RFC 9562 (packaging) (percona#5360)
https://perconadev.atlassian.net/browse/PS-9233 Updated both DEB and RPM packaging scripts to include the new 'component_uuid_vx_udf.so' shared library. (cherry picked from commit b0efa52)
Configuration menu - View commit details
-
Copy full SHA for 646a7cf - Browse repository at this point
Copy the full SHA 646a7cfView commit details
Commits on Aug 28, 2024
-
Configuration menu - View commit details
-
Copy full SHA for ba98c45 - Browse repository at this point
Copy the full SHA ba98c45View commit details