PS-9284 Release tasks ticket for PS 8.0.37 #5375

Merged: 368 commits, Aug 7, 2024

Conversation

adivinho (Contributor) commented on Aug 6, 2024

No description provided.

zmur and others added 30 commits January 25, 2024 09:48
Add verbose level to ndb_waiter.

Verbosity levels are:

 0 - prints nothing; only the process exit code is used.
 1 - prints final connect status
 2 - prints status every time it is checked.

To get the old ndb_waiter output use --verbose=2; the default is the less
verbose level 1.

Exit codes are:

 0 - wait succeeded
 1 - wait timed out
 2 - parameter error, for example bad node id given
 3 - failed to connect to management server

If the program gets disconnected from the management server during the run,
it now retries the connection using --connect-retries and
--connect-retry-delay only once, not 10 times.
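
Purely as an illustration (not part of this change), the wrapper below shows
how a caller could run ndb_waiter with the new default verbosity and map the
documented exit codes. It assumes a POSIX system with ndb_waiter on PATH; the
--timeout option shown is an assumption, not taken from this patch.

    #include <cstdio>
    #include <cstdlib>
    #include <sys/wait.h>

    int main() {
      // --verbose=1 is the new default: only the final connect status is printed.
      int rc = std::system("ndb_waiter --verbose=1 --timeout=60");
      if (rc == -1 || !WIFEXITED(rc)) {
        std::fprintf(stderr, "failed to run ndb_waiter\n");
        return 1;
      }
      switch (WEXITSTATUS(rc)) {
        case 0: std::puts("wait succeeded"); break;
        case 1: std::puts("wait timed out"); break;
        case 2: std::puts("parameter error, for example a bad node id"); break;
        case 3: std::puts("failed to connect to management server"); break;
        default: std::puts("unexpected exit code"); break;
      }
      return WEXITSTATUS(rc);
    }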

For the public NdbAPI connect function, the default value for verbose is
changed to 1, which behaves as before. Anyone running an old NdbAPI
program that used the old default value 0 for verbose against a new
libndbclient will not see connection failure printouts. They should
recompile the program and change any explicit verbose argument from 0 to 1.

Change-Id: Ic21cb97dbd09b8a7f387e52ca7efc351d1baf762
Change-Id: I183b1d2179f4906a429992bafdee4e3edecc48cc
USE EXECUTE_PROCESS(COMMAND ....) rather than EXEC_PROGRAM(....)

Change-Id: I6adb0bad57c94302578cae86f82b27c533d5d902
(cherry picked from commit 71f4f574e7dc433c9e9064aeecd2055048b6f633)
…wrong INSERT REDO log

A column order change (other than an instant add/drop) was not written to the
REDO log with instant DDL. This could cause incorrect REDO log replay during
recovery, which is very dangerous.

The fix ensures that the column order change is also logged correctly.

Change-Id: I7e617735bb9d327136b18b5165039e1155f1fe50
Added a new variable connect_timeout in the mgmapi data structure
with a default value of 3000 milliseconds.
This variable is used instead of the generic timeout in
ndb_mgm_connect to prevent threads that connect to the cluster
through this node from blocking for 60 seconds.

Change-Id: I30afc099f943265afaabfdd111a0fe3046b7d2f7
Restore the interactive help text in ndb_mgm from before clang-format,
and disable clang-format for that particular block of code.

Change-Id: I30d29d893473ed6c7b388affbcffe8e498e178ed
In the Cluster/J test runner, AllTests::main(), accept the command-line
option -n followed by a list of test names. In this case, only run
the named tests.

Also support a new --print-cases option (short form -l)
to list tests.

In the Cluster/J test runner (TestRunner.java), fix a spelling
error in the output, and if no tests were run print "no tests run"
rather than "all tests succeeded".

Add a new NDBAPI test utility called testClusterJ. This program takes
care of creating a clusterj.properties file, constructing the Java
classpath and command line, and executing the test runner. It takes no
command-line options of its own; all supplied options are passed
verbatim to the Java program. If MTR_CLASSPATH is set in the
environment, this is appended to the classpath.

The clusterj.properties file is created in TMPDIR, and is deleted
when the program exits.

If NDB_CONNECTSTRING is set in the environment, testClusterJ uses it
in clusterj.properties. If not, and a management server is listening
at port 13000 (as in the common case of an MTR environment), it uses
port 13000. Otherwise it falls back to port 1186.

If CLUSTERJ_MYSQLD is set in the environment, testClusterJ uses it
in the JDBC connect string in clusterj.properties. If not, and
a management server was found at port 13000, it uses "localhost:13001".
Otherwise it falls back to "localhost:3306".

Change-Id: Ie42fd37057e8f8afb27639a9216deadf4ab6a2b1
It is a documented limitation of Cluster/J Query that setLimits() cannot
be used with deletePersistentAll(). This patch removes that limitation,
in order to allow a user to perform a large delete in batches, using a
batch size that is smaller than MaxNoOfConcurrentOperations.

This change allows use of a limit size, but not a skip count,
in setLimits(). Use of a non-zero skip count will cause
deletePersistentAll() to fail.

To make use of this feature, the user should delete iteratively until
the count of deleted rows is less than the batch size, as in this
example:

    /* Delete in batches */
    query.setLimits(0, DeleteBatchSize);
    int result = 0;
    do {
        result = query.deletePersistentAll();
        System.out.println("Batch result: " + result);
    } while(result == DeleteBatchSize);

Change-Id: I7b7337ce75e4f7529028dccb5455bd1fcaa3051a
The AppTest.CmdLineExtraConfigNoDefaultFail test of
routertest_router_mysqlrouter_app fails when mysqlrouter.conf exists
in the current directory.

This does not happen in test environments where the current directory
is controlled and empty, but it does in build environments where the
build dir may contain other files.

The existing check in the test for "mysqlrouter.conf in current dir"
is faulty as it replaces {origin} with the program-name.

Change
======

- replace {origin} with the origin-dir, instead of program-name

Change-Id: I522cb199a90219eff4ab3f929a4c9a8232df7885
There may be a failure when returning metadata to the client for certain
SQL queries involving dynamic parameters and subqueries in a SELECT
clause. The fix is to avoid setting an item name that is a NULL pointer.

Change-Id: I1abe206f97060c218de1ae23c63a4da80ffaaae5
               (mysql 8.0.33) get the following error:
               "PHP message: PHP Warning: mysql_connect():
               Server sent charset (255) unknown to the
               client"

Description:
------------
When the character_set_server variable is set using
SET PERSIST or SET GLOBAL, it does not take effect for
new client sessions or for a client establishing a
connection to the server after the server is restarted.
Currently, the only way to achieve this is to set the
variable via the command line when the server is started.

Fix:
----
The fix is to make sure that at the time of server restart,
the data is read in the correct order so that the variable
takes effect as expected.

Change-Id: I8fb468efaa4492d00549dc47b975b4fdfe0ec970
If the router fails to resolve a hostname (e.g. due to DNS problems),
it currently:

- skips the destination
- if there are no other destinations, closes the port.

But as it doesn't add the destination to quarantine, the destination
will not be checked again once the DNS problem resolves, so the port is
never reopened.

Change
======

- add the destination to quarantine if address resolution fails.

Change-Id: I83f5d705aae549a21657f2f97703d663d2c53a8a
General cleanup

 - Use g_eventLogger for events that should be async / are potentially
   generated from threads delivering signals
 - Use fprintf(stderr) / Ndberr for 'pre-crash' states which should be
   flushed before process exit/abort
   Add a Timestamp and 'NDBAPI FATAL ERROR' to these occurrences.
 - Remove unnecessary logging
 - Remove non standard 'user help' logging
 - Add some more context info to async messages to understand e.g.
   - What the message is about
   - Which instance(s) it is referring to (connection id, event buffer)

Not a complete cleanup, more can be done.

Change-Id: I904435608f699401e7431bb5c68c0c5fb0bd8df1
ndb_cluster_connection::connect() logging modifications.

When the connect() method encounters errors from the ConfigRetriever subobject
during connection setup, it logs these using the asynchronous g_eventLogger
object.

This is a case of a method logging errors related to its invocation
which does not really make sense to produce asynchronously.

In this case :
 - The error text is available programmatically using the
   get_latest_error_msg() function.
 - There is a 'verbose' option passed to the method which is
   used to control the verbosity of logging of errors generated
   by the embedded MGMAPI object.
 - Almost all existing Ndb tools use the verbose option.
 - The MySQLD ha_ndbcluster does not use the verbose option.

The fix here involves :
 - Modifying the ndb_cluster_connection::connect() to :
   - Only log in cases where the verbose parameter is set
   - Log directly to stdout.
   The verbose parameter causes the embedded MGMAPI component
   to log errors to stdout, so the MGMAPI and other behaviours
   are aligned here.

 - Modifying 1 Ndb tool to use the verbose option
   (ndb_move_data)

Behavioural change
 - Callers passing verbose = 0 will no longer get any stdout
   logging of ConfigRetriever errors.
   They need to query this using the get_latest_error_msg() API
   if required.

 - Callers passing verbose = 1 will get stdout logging of
   ConfigRetriever errors synchronously, without any timestamp
   (as added by g_eventLogger).
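
A minimal sketch of what a verbose = 0 caller now needs to do, assuming the
public NdbAPI class Ndb_cluster_connection and its connect(retries, delay,
verbose) / get_latest_error_msg() methods; this is illustrative, not code
from the patch.

    #include <NdbApi.hpp>
    #include <cstdio>

    int main() {
      ndb_init();
      Ndb_cluster_connection connection("localhost:1186");

      // verbose = 0: connect() no longer prints ConfigRetriever errors itself,
      // so fetch the error text explicitly on failure.
      if (connection.connect(/*no_retries=*/2, /*retry_delay_in_seconds=*/1,
                             /*verbose=*/0) != 0) {
        std::fprintf(stderr, "connect failed: %s\n",
                     connection.get_latest_error_msg());
        ndb_end(0);
        return 1;
      }

      ndb_end(0);
      return 0;
    }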

Change-Id: Id2d5783a0b5d1612b08437297980ed016ae51c3b
Fix missing include file.
Fix ignored return value from getcwd().

Change-Id: I0bff386ab59344d61cd1bfe814d36ed3310a9eed
Problem:
Implicit rollback when refusing to discover table in an active
transaction causes entire transaction to rollback.

Analysis:
When a table definition mismatch occurs between NDB and the MySQL dictionary,
the table is installed into the MySQL DD from NDB; this is called discovery.
Installing into the DD requires committing the current active transaction, so
discovery is not possible while in an active transaction; it needs to be
refused and an error and warning returned to the user. Unfortunately, the code
checking for an active transaction also checked whether the table already
existed in the DD and thus implicitly rolled back the transaction. This meant
that a subsequent read from the same table in the same transaction would
allow discovery.

Solution:
1. Remove the check for whether the table already exists in the DD; instead,
always refuse discovery while in a transaction. This means that subsequent
attempts to use the same table in the transaction will return the same error.
2. Improve the warning message to tell the user that discovery was refused
because of the active transaction.
3. Use the correct external return code ER_TABLE_DEF_CHANGED rather than the
internal HA_ERR_TABLE_DEF_CHANGED when formatting the error message.
4. Update existing tests to:
 a) accept that it's not possible to discover any table while in an active
    transaction
 b) accept the new error code

Change-Id: Ie1bd877253c9895229691d2c154f73329ee6220d
changed, please retry transaction'

Problem:
Failure occurs when replica encounters a "Table definition changed"
error while applying a row change.

Analysis:
The replica applier is in the middle of a transaction when the "Table
definition changed" error occurs. This means that it can't discover the
table by just installing the new table definition into its data
dictionary and continuing, as that would commit the transaction too early.

Solution:
Change the discovery functionality to roll back the replica applier
transaction, install the table, and then push a warning to make the applier
retry the transaction.

Change-Id: Ia03bf88766e298e2b8b4b8a195300edc63268768
…t order

Symptom:
The names of threads in `SHOW ENGINE INNODB STATUS` in the `FILE I/O` section are wrong - the first name is `(null)`, and the rest are shifted by one.

Root cause:
A shift by one was introduced by Bug#34992157.

Fix:
The names are now filled in in `AIO::start()` in the correct order.

Change-Id: I6e752c75d5778ae524f4494574891e9ec8040739
content mismatch [noclose]

Disable test on Solaris until further notice.

Change-Id: Ia36b69740e600cf67dd2771d9b47d0e2bffc8462
    content mismatch [noclose]

Disable test on Solaris until further notice

Change-Id: I43cc4a75c12a30e8b534e5cd09bd40a7aa36e1a9
[This is a cherry-picked commit of Bug 36037224 from mysql-trunk]

This patch is based on the contribution from Xiaoyang Chen, thank you!

Symptoms:
---------
Running some queries that use a unique hash index with the TempTable engine
takes significantly more time than running the same queries with the
MEMORY engine.

Background:
===========
While running some intersect sub-queries, temporary tables are
materialized. If the temp table is already instantiated during
materialization then all of its records are deleted. In the TempTable
engine, this call translates to temptable::Table::truncate(). This
method clears all the rows and indexes of the temp table.
The TempTable engine uses standard C++ containers to maintain the indexes.
It uses std::unordered_set for the unique hash index,
std::unordered_multiset for the non-unique hash index,
and std::multiset for the unique tree index.

std::unordered_multiset also resets the hash buckets to 0 just as
std::unordered_set does. Therefore, some queries using the duplicate hash
index may show similar symptoms.

Since std::multiset is based on an RB tree, it removes the RB tree
nodes if they exist; calling clear() on an empty container should be mostly a
no-op. Thus it should not suffer from the same problem.

The reason temp table cleanup happens multiple times is that the optimizer
calls ClearForExecution() repeatedly, rather than doing the table
cleanup in the path-specific iterator. Bug#36181085 was filed to fix this
behavior in the optimizer layer.

Now, why is memset called so many times, and why does it take so much time
during the query execution reported in this bug?
- The query inserts 1048576 keys of 16 bytes each into the container, which
  leads to a bucket count of 1337629. The bucket array shrinks only during
  assignment or move operations or destruction of the container.
- When clear() is called the first time, the container deallocates the nodes
  from each bucket (if there were any due to collisions) and does a memset on
  the bucket array, which is around 21402064 bytes (~20 MB).
- Each subsequent truncate call (524286 calls) does the memset on the bucket
  array. Thus around (524286 * 21402064) = ~10 TB of resets happened, which
  is obviously expensive.

Fix
===
- Call clear() only if it is really required, that is, if the
  container is not empty (see the sketch below).
- Extend the fix to Hash_duplicates::truncate().
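
A minimal sketch of the idea (not the actual TempTable code; the class below
is hypothetical): guard clear() so that an already-empty unordered container
does not memset its large bucket array on every truncate.

    #include <string>
    #include <unordered_set>

    class UniqueHashIndexSketch {
     public:
      void insert(const std::string &key) { m_set.insert(key); }

      void truncate() {
        // clear() on std::unordered_set zeroes the whole bucket array even when
        // the container holds no elements; guard it so repeated truncates of an
        // empty index stay cheap.
        if (!m_set.empty()) {
          m_set.clear();
        }
      }

     private:
      std::unordered_set<std::string> m_set;
    };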

Change-Id: Ie2b008c66a6b2792304c44ea885c4fff188bc6fc
Queries using the Multi-Range Read optimization run slower after the
fix for Bug#36017204.

The reason is that the insertion sort used to sort the records in the
MRR buffer does not perform well when the number of records in the
buffer is high.

Fixed by replacing the insertion sort with a call to qsort() in the C
standard library. It performs better on large buffers.
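
As an illustration of the approach (the record layout and key below are
hypothetical, not the actual MRR buffer format), sorting fixed-size records
with qsort() from the C standard library looks like this:

    #include <cstddef>
    #include <cstdint>
    #include <cstdlib>

    struct MrrRecordSketch {
      uint32_t key;      // hypothetical sort key
      char payload[28];  // hypothetical payload
    };

    static int compare_records(const void *a, const void *b) {
      const auto *ra = static_cast<const MrrRecordSketch *>(a);
      const auto *rb = static_cast<const MrrRecordSketch *>(b);
      if (ra->key < rb->key) return -1;
      if (ra->key > rb->key) return 1;
      return 0;
    }

    // qsort scales much better on large buffers than an insertion sort,
    // which is quadratic in the number of buffered records.
    void sort_mrr_buffer(MrrRecordSketch *records, size_t count) {
      std::qsort(records, count, sizeof(MrrRecordSketch), compare_records);
    }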

Change-Id: I39da5db5c329d482bf83b18fb27e52a0c5fcae03
…PLAY_RENAME ON WINDOWS 2

Symptom:

Tests involving RENAME deadlock on Windows

Root cause:

The original fix for the
Bug#32808809 INNODB: DEADLOCK BETWEEN FIL_SHARD::OPEN_FILE, FIL_OP_REPLAY_RENAME ON WINDOWS
missed a call to guard.rollback() before fil_space_read_name_and_filepath(),
which would be needed to actually close() the file handle.
The fil_space_read_name_and_filepath() tries to acquire Fil_shard mutex,
which risks a deadlock if done while keeping the file open, because
usual ordering of these two operations is the opposite: first latch the
Fil_shard mutex, and only then open the file.
In particular page cleaners writing dirty pages to tablespace files do
it in this usual order, and thus can deadlock with a RENAME which does
it in the opposite order.

Fix:

Turns out we don't need a guard at all, because we can simply close the
file handle right after a call to read_first_page().

Also, when trying to prove correctness of the patch, it turned out that
there is some confusion in the code about the lifecycle of the file handle.
The doxygen for read_first_page() says the file handle should be
open before calling it, which would be a useful property to reason about
the code, if really enforced.
Indeed most callers ensured this, yet read_first_page() had defensive
conditional logic to open the file handle if it was closed, just in case.
It turned out that there was one place in the code where we tried to call
validate_first_page() while the file handle was closed, and that place
was itself buggy, in SysTablespace::read_lsn_and_check_flags():

   for (int retry = 0; retry < 2; ++retry) {
     err = it->validate_first_page(it->m_space_id, flushed_lsn, false);

     if (err != DB_SUCCESS &&
         (retry == 1 || it->open_or_create(srv_read_only_mode) != DB_SUCCESS ||
          it->restore_from_doublewrite(0) != DB_SUCCESS)) {
       it->close();

       return (err);
     }
   }

Clearly the intent here was to retry in case of error, yet the buggy
logic retried even if there was no error. Because the contract of
validate_first_page() specified in doxygen clearly says the file handle
is closed after the call, the second call occurred while the handle was closed.
This patch fixes the problem by unrolling this loop (see the sketch below), so
that it is easier to understand and verify its logic.
This also made it possible to remove the conditional opening of the file
handle from read_first_page() to match its contract.
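
The following is only a sketch of the unrolled control flow described above;
DatafileLike and the error codes are stand-ins for the InnoDB types in the
quoted snippet, not the real patch.

    #include <cstdint>

    enum dberr_sketch { DB_OK, DB_FAIL };

    // Stand-ins for the Datafile calls shown in the quoted code.
    struct DatafileLike {
      virtual dberr_sketch validate_first_page(uint32_t expected_space_id) = 0;
      virtual dberr_sketch open_or_create(bool read_only) = 0;
      virtual dberr_sketch restore_from_doublewrite(uint32_t page_no) = 0;
      virtual void close() = 0;
      virtual ~DatafileLike() = default;
    };

    // Validate once, and retry exactly once *only* if the first attempt failed,
    // which is the intent the original loop obscured.
    dberr_sketch validate_with_one_retry(DatafileLike &df,
                                         uint32_t expected_space_id,
                                         bool read_only) {
      dberr_sketch err = df.validate_first_page(expected_space_id);
      if (err == DB_OK) return DB_OK;

      if (df.open_or_create(read_only) != DB_OK ||
          df.restore_from_doublewrite(0) != DB_OK) {
        df.close();
        return err;
      }
      return df.validate_first_page(expected_space_id);
    }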

The rewrite of the loop also made it clear that there was another bug
in code which was not covered by tests:

   /* Make sure the tablespace space ID matches the
   space ID on the first page of the first datafile. */
   if (space_id() != it->m_space_id) {
     ib::error(ER_IB_MSG_444)
         << "The " << name() << " data file '" << it->name()
         << "' has the wrong space ID. It should be " << space_id() << ", but "
         << it->m_space_id << " was found";

     it->close();

     return (err);
   }

The err was provably always DB_SUCCESS, so the relatively serious error
of not having the right space_id was ignored by the caller.
It turns out that this code is not really needed, because the
validate_first_page() we called earlier can verify the space_id for us.
However, there was yet another bug here: instead of passing the expected
space_id() to it, we were passing the it->m_space_id found on disc,
which defeated the purpose of the validation, because it compared the
value read from disc to itself.
This patch fixes it by removing the buggy error handling and passing
space_id() instead of it->m_space_id to validate_first_page().

Change-Id: I05230d01e5bfc39fe97f645100a8db212c1219c8
Wrong results were returned if an equijoin condition referred to a
JSON expression and the chosen join strategy was a hash join.

The reason was that the join condition used string comparison
semantics or floating-point comparison semantics, depending on which
type the JSON expression was compared to. Both approaches are wrong,
as the values should be compared as JSON values.

Fixed by using Json_wrapper::make_hash_key() to create the hash values
when one or both of the sides of the hash join condition is a JSON
expression.

Change-Id: I02dc157a51c433a41225c25b60f8b3569593d0d1
(cherry picked from commit 2a0bb9a47ca6833bf1efd145f0f415c4eb05166a)
…bquery

Bug#35087820: VALUES Statement with dependent subquery is wrong

When a table value constructor has to be read multiple times during a
query, it will only return data the first time it is read. When it's
read the second time, it will return EOF immediately.

It is caused by a bug in TableValueConstructorIterator::Read() where
it returns EOF immediately when the number of examined rows is equal
to the number of rows in the table value constructor. Since this
counter is intentionally not reset when the iterator is
re-initialized, the table value constructor will keep returning EOF
forever once every row has been read, and not start again from the
beginning when it's re-initialized.

Fixed by checking the position of the iterator over the row value list
instead of the number of examined rows.
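
A self-contained sketch of the idea behind the fix (names and types here are
hypothetical, not the server's iterator classes): EOF is derived from the
iterator's own read position, which Init() resets, instead of from a
cumulative examined-rows counter that deliberately survives re-initialization.

    #include <cstddef>
    #include <optional>
    #include <utility>
    #include <vector>

    class RowListIteratorSketch {
     public:
      explicit RowListIteratorSketch(std::vector<int> rows)
          : m_rows(std::move(rows)) {}

      void Init() { m_pos = 0; }  // re-initialization restarts the scan

      std::optional<int> Read() {
        ++m_examined_rows;  // statistics counter: intentionally never reset
        if (m_pos >= m_rows.size()) return std::nullopt;  // EOF from position
        return m_rows[m_pos++];
      }

     private:
      std::vector<int> m_rows;
      std::size_t m_pos = 0;
      std::size_t m_examined_rows = 0;
    };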

Change-Id: I0e828eb0de0360a16cea9f77b578c68205cefbaf
(cherry picked from commit 7dc427011ed66613edf61dd144867b8728918dcb)
An assertion failed when inserting data into a table with a
zero-length column, such as CHAR(0) or BINARY(0).

The purpose of the assertion was to detect if a column was copied into
itself, but it could also be triggered when copying between adjacent
columns if one of the columns had zero length, since the two columns
could have the same position in the record buffer in that case. This
is not a problem, since copying zero bytes is a no-op.

Fixed by making the assertion less strict and only fail if it detects
that a non-zero number of bytes is copied from a source that is
identical to the target.
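
A minimal sketch of the relaxed check described above (illustrative only, not
the server's actual assertion): copying zero bytes between two columns that
happen to share an offset is allowed, and only a real in-place copy of data
trips the assertion.

    #include <cassert>
    #include <cstddef>
    #include <cstring>

    inline void copy_column(unsigned char *dst, const unsigned char *src,
                            size_t length) {
      // Old check: assert(dst != src) fired for CHAR(0)/BINARY(0) columns that
      // share a position in the record buffer. Only object when a non-zero
      // number of bytes would be copied onto itself.
      assert(length == 0 || dst != src);
      if (length > 0) std::memcpy(dst, src, length);
    }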

Change-Id: Ifc07b13a552c3ef6ce99bcc1d491afeef506e9c1
(cherry picked from commit d0463fd8b4657911589ecbf968eaad64983cd672)
Unpack protobuf-24.4.tar.gz
rm -r docs examples objectivec ruby rust
git add protobuf-24.4

Change-Id: I09bd7eec92b3a26d9f28ec82a88974544bc6b5e6
(cherry picked from commit a615bdd4582b8b1b5d053d64ebfd286b7509393b)
New versions of protobuf depend on abseil, so check in the sources.

Unpack abseil-cpp-20230802.1.tar.gz
git add abseil-cpp-20230802.1

Change-Id: Iefd5efe514ce0f02d891cb28192e820c21a1b18c
(cherry picked from commit 51f6f617fdce24d244f742d3c2c91313546fc194)
Local patches:

In several of our own cmake sources:
   TARGET_COMPILE_FEATURES(${TARGET} PUBLIC cxx_std_17)
This will override any cxx_std_14 properties set on
libraries we link in from abseil or protobuf.
Those PUBLIC properties on 3rd party libs broke the build on Mac.

CMakeLists.txt
  For "bundled" protobuf, we need "bundled" abseil also.

  Silence -Wstringop-overflow warnings in libraries generated
  from .proto files.

cmake/protobuf_proto_compile.cmake
  Abseil header file directories must be part of the public interface
  for libraries generated from .proto files.

extra/abseil/abseil-cpp-20230802.1/absl/base/CMakeLists.txt
  Rename cmake variable LIBRT, to avoid name conflicts with our own
  cmake code.

extra/protobuf/CMakeLists.txt
  Silence more warnings.
  Build static protobuf libraries also on Windows.
  TODO: build shared protobuf and abseil libs, and INSTALL them.

extra/protobuf/protobuf-24.4/CMakeLists.txt
  Disable INSTALL and BUILD_TESTS.

extra/protobuf/protobuf-24.4/cmake/libprotobuf-lite.cmake
extra/protobuf/protobuf-24.4/cmake/libprotobuf.cmake
extra/protobuf/protobuf-24.4/cmake/libprotoc.cmake
  Disable the --version-script which is bundled with protobuf sources,
  we need to apply our own version script instead.
  Add our own INSTALL targets for shared libraries.

extra/protobuf/protobuf-24.4/cmake/protobuf-configure-target.cmake
  disable the /MP compiler option for clang on Windows

Misc. CMakeLists.txt files
  Disable -DPROTOBUF_USE_DLLS.
  TODO: build shared proto and absl libs on Windows.

opentelemetry-cpp/opentelemetry-cpp-1.10.0/CMakeLists.txt
  Rename the WITH_ABSEIL cmake option (previously defaulting to OFF) to
  TELEMETRY_WITH_ABSEIL (default ON) so that the build is not
  broken after a 'git pull'.

opentelemetry-cpp/opentelemetry-cpp-1.10.0/api/CMakeLists.txt
  Disable find_package(absl) for "bundled" protobuf/abseil.

storage/innobase/handler/i_s.cc
  Misc gcc versions started to complain about:
  storage/innobase/handler/i_s.cc:722:5: error:
  either all initializer clauses should be designated or none of them should be
    722 |     nullptr,

Change-Id: I2dcec2a3f489d180015b3feeeef966f0bc3af676
(cherry picked from commit fd7598d85dd54545e0819a45235f38df89631d84)
percona-ysorokin and others added 26 commits June 27, 2024 18:09
https://perconadev.atlassian.net/browse/PS-9165

'sys_vars.plugin_dir_basic' and 'clone.plugin_mismatch' MTR test cases were
modified so that they can be run on a server built both with and without the
telemetry component (the '-DWITH_PERCONA_TELEMETRY=ON' CMake option).
…stfixes

PS-9165 postfix 8.0: Product Usage Tracking - phase 1 (MTR fixes)
https://perconadev.atlassian.net/browse/PS-9165

When the server is started with the --innodb-read-only flag,
the Percona Telemetry Component cannot be installed/uninstalled,
because it is prohibited to add/delete rows in the mysql.component
table. In such a case the
"Skip updating mysql metadata in InnoDB read-only mode."
warning was printed into the server's error log, which caused several MTR
tests to fail.

Solution: Do not call dd::check_if_server_ddse_readonly();
instead, do the check directly and print an information-level message if needed.
…es-3

PS-9165 postfix 8.0: Product Usage Tracking - phase 1

* PS-9165 postfix 8.0: Product Usage Tracking - phase 1 (MTR fixes)

https://perconadev.atlassian.net/browse/PS-9165

1. test_session_info.test is duplicated, with IDs recorded for the case
when Percona Telemetry is built in. This is better than masking IDs in the
output, because the test relies on real ID values.

2. regression.test - ID masked in test output

3. prep_stmt_sundries - make the assertion value dependent on Percona
Telemetry being built in
https://perconadev.atlassian.net/browse/PS-9222

Problem
=======
When writing to the redo log, an issue of column order change not
being recorded with INSTANT DDL was fixed by creating an array
with size equal to the number of fields in the index which kept
track of whether the original position of the field was changed
or not. Later, that array would be used to make a decision on
logging the field.
But, this solution didn't take into account the fact that
there could be column prefixes because of the primary key. This
resulted in inaccurate entries being filled in the
fields_with_changed_order[] array.

Solution
========
It is fixed by using the method get_col_phy_pos(), which takes into account
the existence of column prefixes, instead of get_phy_pos(), while generating
the fields_with_changed_order[] array.
https://perconadev.atlassian.net/browse/PS-9222

Problem
=======
When writing to the redo log, an issue of column order change not
being recorded with INSTANT DDL was fixed by checking if the fields
are also reordered, then adding the columns into the list.
However, when calculating the size of the buffer, this fix didn't take into
account the extra fields that may be logged, eventually causing the assertion
on the buffer size to fail.

Solution
========
To calculate the buffer size correctly, we move the logic that finds
reordered fields before the buffer size calculation, then count the number
of fields with the same logic used when deciding if a field needs to be logged.
PS-9222 Include reordered fields when calculating mlog buffer size
https://perconadev.atlassian.net/browse/PS-9165

'keyring_vault.keyring_udf' MTR test case modified so that it can be run
on a server built both with and without the telemetry component
(the '-DWITH_PERCONA_TELEMETRY=ON' CMake option).
fields_with_changed_order didn't handle tables with prefix_len keys well.
col->get_phy_pos() returns raw data about the prefix phy_pos.
We need to use field->get_phy_pos() for the actual phy_pos here.

Change-Id: If13449f9e6e6191cd0f2e7102f62ac28024727f8
(cherry picked from commit 678c193)
Issue:
 In the bug#35183686 fix, we started logging the fields whose column order
 changed. But while calculating the size needed in the redo log to log index
 information, these columns were missing from the calculation.

Fix
 Make sure these columns are also considered while calculating the size
 needed to log the index entry.

Change-Id: Ic8752c72a8f5beddfc5739688068b9c32b02a700
(cherry picked from commit e6248f5)
- Post push fix for memory leak

Change-Id: I034bb20c71dfe3ae467e762b2dd7c7f95fb0679b
(cherry picked from commit a3561b3)
https://perconadev.atlassian.net/browse/PS-9222

Our patches have been reverted in favour of Upstream's fixes in 8.0.38.
The testcases for the scenario have been preserved.
PS-9222 MySQL server crashes on UPDATE after ALTER TABLE
PS-9284 Release tasks ticket for PS-8.0.37
PS-9284 Release tasks ticket for PS-8.0.37
PS-9320 Please add tarballs for Debian bookworm and Ubuntu Noble for PS
PS-9284 Release tasks ticket for PS-8.0.37
@adivinho adivinho requested review from a team and percona-ysorokin August 6, 2024 09:40
@oleksandr-kachan oleksandr-kachan merged commit 5b68f9b into 8.0 Aug 7, 2024
24 of 25 checks passed