- Fuzzer Fixing For Fun! Particularly around random number generation and number sequences.
- Add simulator coverage for `get_account_transfers` and `get_account_balances`.
- Reduce the default `--limit-pipeline-requests` value, dropping RSS memory consumption.
- Build system simplifications.
- #2026, #2020, #2030, #2031, #2008: Tidying up (now) unused symbols and functionality.
- Rename docs section from "Develop" to "Coding".
- Fix a case where an early return could result in a partially inserted transfer persisting.
- Big improvements to allowing TigerBeetle to run with less memory! You can now run TigerBeetle in `--development` mode by default with an RSS of under 1GB. Most of these gains came from #1981, which allows running with a smaller runtime request size.
- Devhub improvements: make it harder to miss failures due to visualization bugs, show the PR author in the fuzzer table, and color canary "failures" as success.
- Add `--account-batch-size` to the benchmark, mirroring `--transfer-batch-size`.
- Rename the Deploy section to Operating, add a new correcting-transfer recipe, and note that `lookup_accounts` shouldn't be used before creating transfers to avoid potential TOCTOUs.
- ⚡ Update Zig from 0.11.0 to 0.13.0! As part of this, replace non-mutated `var`s with `const`.
- Similar to #1991, adds the async `io_uring_prep_statx` syscall to Linux's IO implementation, allowing non-blocking `statx()`s while serving requests, to determine when the binary on disk has changed.
- Refactor an internal iterator to expose a mutable pointer instead of calling `@constCast` on it. There was a comment justifying the operation's safety, but it turned out to be safer to expose it as a mutable pointer (avoiding misuse at the origin) rather than performing an unsound mutation through a constant pointer.
- Implement a random Grid/Scrubber tour origin, where each replica starts scrubbing its local storage in a different place, covering more blocks across the entire cluster.
- Model and calculate the probability of data loss in terms of the Grid/Scrubber cycle interval, allowing us to reduce the read bandwidth dedicated to scrubbing.
- Fix a simulator bug where all the WAL sectors get corrupted when a replica crashes while writing them simultaneously.
- As part of multiversioning binaries, adds the async `io_uring_prep_openat` syscall to Linux's IO implementation, allowing non-blocking `open()`s while serving requests (which will be necessary during upgrade checks).
- Require the `--experimental` flag when starting TigerBeetle with flags that aren't considered stable, that is, flags not explicitly documented in the help message, limiting the surface area for future compatibility.
- Fix a crash when upgrading a solo replica.
- Pin pointers crossing the Go client FFI boundary to prevent memory corruption.
- Build our .NET client for .NET 8, the current LTS version. Thanks @woksin!
- Document recovery case `@L` in VSR.
- We implicitly supported underscores in numerical CLI flags. Add tests to make this explicit.
- Add the size of an empty data file to the devhub, tweak the benchmark to always generate the same-sized batches, and speed up loading the devhub itself.
- Ease a restriction which guarded against unnecessary pulses.
- Docs fixes and cleanup.
- Fix a determinism bug in the test workload checker.
- Expose `ticks_max` as a runtime CLI argument.
- Devhub/benchmark improvements.
- #1918, #1916, #1913, #1921, #1922, #1920, #1945, #1941, #1934, #1927: Lots of CFO enhancements. The CFO can now do simple minimization, fuzz PRs, and orchestrate the VOPR directly. See the output on our devhub!
- Fix a bug in the VOPR, add simple minimization, and remove the voprhub code. Previously, the voprhub is what took care of running the VOPR. Now, it's handled by the CFO and treated much the same as other fuzzers.
- Prevent time-travel in our replica test code.
- Fix a fuzzer bug around checkpoint/commit ratios.
- Add the ability to limit the VSR pipeline size at runtime to save memory.
- Fix path handling on Windows by switching to `NtCreateFile`. Before, TigerBeetle would silently treat all paths as relative on Windows.
- In preparation for multiversion binaries, make `release_client_min` a parameter, set by `release.zig`. This allows us to ensure backwards compatibility with older clients.
- Add some additional asserts around block lifetimes in compaction.
- Fix parsing of multiple CLI positional fields.
- Remove `main_pkg_path = src/` early, to help us be compatible with Zig 0.12.
- Docs organization and link fixes.
- #1906, #1904, #1903, #1901, #1899, #1886: Fixes and performance improvements to fuzzers.
- Reduces the cache size for the `--development` flag, which was originally created to bypass direct I/O requirements but can also aggregate other convenient options for non-production environments.
- Reduction in memory footprint, calculating the maximum number of messages from runtime-known configurations.
- Removes the `bootstrap.{sh,bat}` scripts, replacing them with more transparent instructions for downloading the binary release or building from source.
- Nicely handles "illegal instruction" crashes, printing a friendly message when the CPU running a binary release is too old and does not support some modern instructions such as AES-NI and AVX2.
- Include micro-benchmarks as part of the unit tests, so there's no need for a special case in the CI while we still compile and check them.
- A TigerStyle addition on "why prefer an explicitly sized integer over `usize`".
- Rename "Getting Started" to "Quick Start" for better organization and clarifications.
- While TigerBeetle builds are deterministic, Zip files include a timestamp that makes the build output non-deterministic! This PR sets an explicit timestamp for entirely reproducible releases.
- Extracts the Zig compiler path into a `ZIG_EXE` environment variable, allowing easier sharing of the same compiler across multiple git work trees.
- Move message allocation farther down into the `tigerbeetle start` code path. `tigerbeetle format` is now faster, since it no longer allocates these messages.
- Reduce the connection limit, which was unnecessarily high.
- Implement zig-zag merge join for merging index scans. (Note that this functionality is not yet exposed through TigerBeetle's API.)
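A zig-zag merge join intersects two sorted index scans by leapfrogging: whichever cursor is behind jumps, via binary search, straight to the other cursor's key, skipping whole runs of non-matching entries. A minimal Python sketch of the idea (illustrative only, not TigerBeetle's Zig implementation):

```python
import bisect

def zig_zag_merge(a: list, b: list) -> list:
    """Intersect two ascending key lists by leapfrogging the cursors."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            # Leap: jump a's cursor straight to the first key >= b[j].
            i = bisect.bisect_left(a, b[j], i)
        else:
            j = bisect.bisect_left(b, a[i], j)
    return out
```

Compared with a plain two-pointer merge, the binary-search leap makes intersecting a very selective scan with a large one cost roughly O(small x log large) instead of O(large).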
- Print memory usage more accurately during `tigerbeetle start`.
- Fix the blob-size CI check with respect to shallow clones.
- Add more fuzzers to the CFO (Continuous Fuzzing Orchestrator).
- Improve fuzzer performance.
- On the devhub, show at most one failing seed per fuzzer.
- #1820, #1867, #1877, #1873, #1853, #1872, #1845, #1871: Documentation improvements.
- Implement grid scrubbing: a background job that periodically reads the entire data file, verifies its correctness, and repairs any corrupted blocks.
- Turn on continuous fuzzing and integrate it with the devhub.
- Improve navigation on the docs website.

A very special song from our friend MEGAHIT!

- Incrementally recompute the number of values to compact in the storage engine. This smooths out I/O latency, giving a nice bump to transaction throughput under load.
- Add a `--development` flag to the `format` and `start` commands in production binaries to downgrade the lack of Direct I/O support from a hard error to a warning. TigerBeetle uses Direct I/O for certain safety guarantees, but this feature is not available in all development environments due to varying file systems. This serves as a compromise between providing a separate development release binary and strictly requiring Direct I/O to be present.
- Add a fixed upper bound to the loop in the StorageChecker.
- Orchestrate continuous fuzzing of TigerBeetle components straight from the build system! This gives us some flexibility in configuring the set of machines which test and report errors.
- Styling updates and fixes.
- Fix a case the VOPR found where a replica recovers into `recovering_head` unexpectedly.
- Improve CLI errors around sizing by providing human-readable values (1057MiB vs 1108344832).
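The rendering trick here is to print the largest binary unit that divides the byte count exactly, so an operator sees 1057MiB rather than 1108344832. A minimal Python sketch of that idea (an assumption about the approach, not the actual CLI code; it falls back to plain bytes when nothing divides evenly):

```python
def human_size(n: int) -> str:
    """Largest binary unit dividing n exactly: 1108344832 -> '1057MiB'."""
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    i = 0
    # Promote to the next unit only while the count divides evenly.
    while i + 1 < len(units) and n >= 1024 and n % 1024 == 0:
        n //= 1024
        i += 1
    return f"{n}{units[i]}"
```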
- #1818, #1831, #1829, #1817, #1826, #1825: Documentation improvements.
- Additional LSM compaction comments and assertions.
- Clarify some scan internals and add additional assertions.
- Some of our comments had duplicate words - thanks @divdeploy for for noticing!
- Reject incoming client requests that have an unexpected message length.
- Fix message alignment.
- `StorageChecker` now verifies grid determinism at bar boundaries.
- Fix a VOPR liveness false positive when a standby misses an op.
- Assert that the type-erased LSM block metadata matches the comptime one, specialized over `Tree`.
- Use a FIFO as a block_pool instead of trying to slice arrays during compaction.
- Implement `get_account_transfers` and `get_account_balances` in the REPL.
- #1781, #1784, #1765, #1816, #1808, #1802, #1798, #1793, #1805: Documentation improvements.
- Improve the Docker experience by handling `SIGTERM` through tini.
- For reproducible benchmarks, allow setting `--seed` on the CLI.
- Move `request_queue` outside of `vsr.Client`.
- Extract `CompactionPipeline` into a dedicated function.
- Replace the compaction interface with comptime dispatch.
- Remove the duplicated `CompactionInfo` value stored in `PipelineSlot`, referencing it from the `Compaction` by its coordinates.
- CLI output improvements.
- Improvements in the client libraries CI.
- Metrics adjustments for Devhub and Nyrkiö integration.
- Various bug fixes in the build script and removal of the "Do not use in production" warning.
- Bump version to 0.15.x.
- Starting with 0.15.x, TigerBeetle is ready for production use, preserves durability, and provides a forward upgrade path through storage stability.
- Set TigerBeetle's block size to 512KB. Previously, we used a block size of 1MB to help with approximate pacing. Now that pacing can be tuned independently of block size, reduce this value (but not too much - make the roads wider than you think) to help with read amplification on queries.
- Implement compaction pacing: traditionally, LSM databases run compaction on a background thread. In contrast, compaction in TigerBeetle is deterministically interleaved with the normal execution process, to get predictable latencies and to guarantee that ingress can never outrun compaction. In this PR, this "deterministic scheduling" is greatly improved, slicing compaction work into smaller bites which are more evenly distributed across a bar of batched requests.
- Include information about the TigerBeetle version in the VSR protocol and the data file.
- #1732, #1743, #1742, #1720, #1719, #1705, #1708, #1707, #1723, #1706, #1700, #1696, #1686: Many availability issues found by the simulator fixed!
- Fix a buffer leak when `get_account_balances` is called on an invalid account.
- #1671, #1713, #1709, #1688, #1691, #1690: Many improvements to the documentation!
- Rename `get_account_history` to `get_account_balances`.
- Automatically expire pending transfers.
- Implement in-place upgrades, so that the version of the tigerbeetle binary can be updated without recreating the data file from scratch.
- Consistently use `MiB` rather than `MB` in the CLI interface.
- Mark the `--standby` and `benchmark` CLI arguments as experimental.
- Unify PostedGroove and the index pending_status.
- Include an entire header in the checkpoint state to ease recovery after state sync.
- Fetching account history and transfers now has unit tests, helping detect and fix a reported bug with posting and voiding transfers.
- #1656, #1659, #1666, #1667, #1667: Preparation for in-place upgrade support.
- #1633, #1661, #1652, #1647, #1637, #1638, #1655: Documentation has received some very welcome organizational and clarity changes. Go check them out!
- #1584: Lower our memory usage by removing a redundant stash and not requiring a non-zero object cache size for Grooves. The object cache is designed to help things like Account lookups, where the positive case can skip all the prefetch machinery, but it doesn't make as much sense for other Grooves.
- Hook Nyrkiö up to our CI! You can find our dashboard here in addition to our devhub.
- #1635, #1634, #1623, #1619, #1609, #1608, #1595: Lots of small VSR changes, including a VOPR crash fix.
- Fix a VOPR failure where state sync would cause a break in the hash chain.
- Use Expand-Archive over unzip in PowerShell - thanks @felipevalerio for reporting!
- Implement explicit coverage marks.
- #1621, #1625, #1622, #1600, #1605, #1618, #1606: Minor doc fixups.
- Default the VOPR to a short log, and fix a false assertion in the liveness checker.
- Fix a memory leak in our Java tests.
- Rework the log repair logic to never repair beyond a "confirmed" checkpoint, fixing a liveness issue where it was impossible for the primary to repair its entire log, even with a quorum of replicas at a recent checkpoint.
- Some Java unit tests created native client instances without the proper deinitialization, causing an `OutOfMemoryError` during CI.
- Fix the VOPR's false alarms.
- Document how assertions should be used, especially those with O(n) complexity under the `constants.verify` conditional.
- Harmonize and automate the logging pattern by using the `@src` built-in to retrieve the function name.
- Include the benchmark smoke test as part of the `zig build test` command rather than as a special case during CI.
- Remove unused code coverage metrics from the CI.
- Re-enable Windows CI 🎉.
- DVCs implicitly nack missing prepares from old log-views. (This partially addresses a liveness issue in the view change.)
- When a replica joins a view by receiving an SV message, some of the SV's headers may be too far ahead to insert into the journal. (That is, they are beyond the replica's checkpoint trigger.) During a view change, those headers are now eligible to be DVC headers. (This partially addresses a liveness issue in the view change.)
- Fix a bug in the C client that wasn't handling `error.TooManyOutstanding` correctly.
- Bring back Windows tests for the .Net client in CI.
- Add a script to scaffold changelog updates.
- Improve CI/test error reporting.
- Draw the devhub graph as a line graph.
- Simplify the command to run a single test.
- Add client batching integration tests.
- Format default values into the CLI help message.
- Track the commit timestamp to enable retrospective benchmarking in the devhub.
- Improve CI/test performance.
- Guarantee that the test runner correctly reports "zero tests run" when run with a filter that matches no tests.
- (Hat tip to iofthetiger!)
- Reduce checkpoint latency by checkpointing the grid concurrently with other trailers.
- Fix a logical race condition (which was caught by an assert) when reading and writing client replies concurrently.
- Double-check that both the checksum and the request number match between a request and the corresponding reply.
- Optimize fields with zero value by not adding them to an index.
- Introduce the `get_account_history` operation for querying the historical balances of a given account.
- Add a helper function for generating approximately monotonic IDs to various language clients.
- Harden VSR against edge cases.
- Allow VSR to perform checkpoint steps concurrently to reduce latency spikes.
- Remove unused indexes on account balances for a nice bump in throughput and lower memory usage.
- Only zero out the parts of fresh storage buffers necessary for correctness. "Defense in Depth" without sacrificing performance!
- TigerBeetle's dev workbench now also tracks memory usage (RSS), throughput, and latency benchmarks over time!
- Simplify assertions and tests for VSR and Replica.
- .NET CI fixups.
- Spring Cleaning.
- Panic on checkpoint divergence. Previously, if a replica's state on disk diverged, we'd use state sync to bring it in line. Now, we don't allow any storage engine nondeterminism (mixed-version clusters are forbidden) and panic if we encounter any.
- Fix a liveness issue when starting a view across checkpoints in an idle cluster.
- Stop an isolated replica from locking a standby out of a cluster.
- Change `get_account_transfers` to use `timestamp_min` and `timestamp_max` to allow filtering by timestamp ranges.
- Allow setting `--addresses=0` when starting TigerBeetle to enable a mode helpful for integration tests:
  - A free port will be picked automatically.
  - The port, and only the port, will be printed to stdout, which will then be closed.
  - TigerBeetle will exit when its stdin is closed.
- TigerBeetle now has a dev workbench! Currently we track our build times and executable size over time.
- `tigerbeetle client ...` is now `tigerbeetle repl ...`.
- Deprecate support and testing for Node 16, which is EOL.
- #1477, #1469, #1475, #1457, #1452: Improve VOPR & VSR logging, docs, assertions, and tests.
- Improve integration tests around Node and `pending_transfer_expired` - thanks to our friends at Rafiki for reporting!
- Avoid an extra copy of data when encoding the superblock during checkpoint.
- Use more precise upper bounds for static memory allocation, reducing memory usage by about 200MiB.
- When reading data past the end of the file, defensively zero out the result buffer.
- Upgrade the C# client API to use `Span<T>`.
- Add an ID generation function to the Java client. TigerBeetle doesn't assign any meaning to IDs and can use anything as long as it is unique. However, for optimal performance it is best if these client-generated IDs are approximately monotonic. This can be achieved by, for example, using the client's current timestamp for the high-order bits of an ID. The new helper does just that.
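The scheme described above can be sketched in a few lines: a millisecond timestamp occupies the high bits and randomness fills the low bits, so IDs from one client are approximately sorted while remaining unique. A hedged Python sketch (the 48/80-bit split is an assumption for illustration, not necessarily the Java helper's exact layout):

```python
import os
import time

def id_approx_monotonic() -> int:
    """128-bit ID: 48-bit millisecond timestamp up high, 80 random bits below."""
    millis = int(time.time() * 1000) & ((1 << 48) - 1)
    random_bits = int.from_bytes(os.urandom(10), "big")  # 80 random bits
    return (millis << 80) | random_bits
```

Because the timestamp leads, IDs generated over time sort close to insertion order, which keeps index appends cheap; the random tail keeps IDs from colliding within the same millisecond.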
- Rewrite git history to remove large files accidentally added to the repository during the early quick prototyping phase. To make this durable, add CI checks for unwanted files. The original history is available at: https://github.com/tigerbeetle/tigerbeetle-history-archive
- New tips for the style guide:

Welcome to 2024!

- #1425, #1412, #1410, #1408, #1395: Run more fuzzers directly in CI as a part of the not rocket science package.
- Formalize some ad-hoc testing practices as proper integration tests (that is, tests that interact with a `tigerbeetle` binary through IPC).
- Add a lint check for unused Zig files.
- Improve cluster availability by including conservative information about the current view in ping-pong messages. In particular, prevent the cluster from getting stuck when all replicas become primaries for different views.
- Test both the latest and the oldest supported Java version in CI.
- Fix a data race on close in the Java client.
- Make binaries on Linux about six times smaller (12MiB -> 2MiB). Turns out `tigerbeetle` was accidentally including 10 megabytes' worth of debug info! Note that unfortunately stripping all debug info also prevents getting a nice stack trace in case of a crash. We are working on finding the minimum amount of debug information required to get just the stack traces.
- Clean up the error handling API for the Java client to never surface internal errors as checked exceptions.
- Add an example for setting up TigerBeetle as a systemd service.
- Drop support for .Net Standard 2.1.
- Don't exit the repl on the `help` command.
- Overhaul the documentation-testing infrastructure to reduce code duplication.
- Don't test the NodeJS client on platforms for which there are no simple upstream installation scripts.
- Use a histogram in the benchmark script to reduce memory usage.
"The exception confirms the rule in cases not excepted." ― Cicero.

Due to the significant commits we had this last week, we decided to make an exception in our release schedule and cut one more release in 2023!

Still, the TigerBeetle team wishes everyone happy holidays! 🎁

- Some CI-related stuff plus the `-Drelease` flag, which will bring back the joy of using the compiler from the command line 🤓.
- Added a value count to `TableInfo`, allowing future optimizations for paced compaction.
- The simulator found a failure when the WAL gets corrupted near a checkpoint boundary, leading us to also consider scenarios where corrupted blocks in the grid end up "intersecting" with corruption in the WAL, making the state unrecoverable where it should be recoverable. We fixed it by extending the durability of "prepares", evicting them from the WAL only when there's a quorum of checkpoints covering this "prepare".
- Fix a unit test that regressed after we changed an undesirable behavior that allowed `prefetch` to invoke its callback synchronously.
- Relaxed a simulator verification, allowing replicas of the core cluster to be missing some prepares, as long as they are from a past checkpoint.
- A highly anticipated feature lands on TigerBeetle: it's now possible to retrieve the transfers involved with a given account by using the new operation `get_account_transfers`. Note that this feature itself is an ad-hoc API intended to be replaced once we have a proper Querying API. The real improvement of this PR is the implementation of range queries, enabling us to land exciting new features in the next releases.
- Bump the client's maximum limit and the default value of `concurrency_max` to fully take advantage of the batching logic.

As the last release of the year 2023, the TigerBeetle team wishes everyone happy holidays! 🎁

- We've established a rotation between the team for handling releases. As the one writing these release notes, I am now quite aware.
- Fix a panic in the JVM unit test on Java 21. We test JNI functions even if they're not used by the Java client, and the semantics have changed a bit since Java 11.
- Move client sessions from the Superblock (database metadata) into the Grid (general storage). This simplifies control flow for various sub-components like Superblock checkpointing and Replica state sync.
- An optimization for removes on secondary indexes makes a return. Now tombstone values in the LSM can avoid being compacted all the way down to the lowest level if they can be cancelled out by inserts.
- Clients now automatically batch pending similar requests 🎉! If a TigerBeetle client submits a request, and one with the same operation is currently in-flight, they will be grouped and processed together where possible (currently, only for `CreateAccount` and `CreateTransfers`). This should greatly improve the performance of workloads which submit a single operation at a time.
- Defense in depth: add a checkpoint ID to prepare messages. The checkpoint ID is a hash that covers, via hash chaining, the entire state stored in the data file. Verifying that checkpoint IDs match provides a direct, strong cryptographic guarantee that the state is the same across replicas, on top of the existing guarantee that the sequence of events leading to the state is identical.
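Hash chaining means each checkpoint ID commits to both the previous checkpoint ID and the newly checkpointed state, so two replicas can only agree on the latest ID if their entire histories match. A conceptual Python sketch (SHA-256 stands in for whatever checksum the implementation actually uses):

```python
import hashlib

def checkpoint_id(parent_id: bytes, state: bytes) -> bytes:
    """Chain the new checkpoint ID over the parent ID and the state."""
    return hashlib.sha256(parent_id + state).digest()
```

Because the parent ID is folded into every successor, a divergence at any earlier checkpoint propagates into every later ID and is caught the next time IDs are compared.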
- Gate the main branch on more checks: unit tests for NodeJS and even more fuzzers.
- Code cleanups after removal of the storage size limit.
- Fix the free set index. The free set is a bitset of free blocks in the grid. To speed up block allocation, the free set also maintains an index: a coarser-grained bitset where a single bit corresponds to 1024 blocks. Maintaining consistency between a data structure and its index is hard, and thorough assertions are crucial. When moving the free set to the grid, we discovered that, in fact, we don't have enough assertions in this area and, as a result, even have a bug! Assertions added, bug removed!
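The shape of the structure described above: a fine-grained bitset of free blocks, plus a coarse index where one bit summarizes a shard of 1024 blocks, letting allocation skip fully-used shards with a single test. A hedged Python sketch of the invariant (illustrative only, not TigerBeetle's implementation):

```python
SHARD = 1024  # blocks summarized by one index bit

class FreeSet:
    """Bitset of free blocks plus a coarse index: one bit per shard of
    1024 blocks, meaning "this shard may contain a free block"."""

    def __init__(self, blocks: int) -> None:
        self.free = [True] * blocks
        self.index = [True] * ((blocks + SHARD - 1) // SHARD)

    def acquire(self):
        for shard, maybe_free in enumerate(self.index):
            if not maybe_free:
                continue  # skip 1024 blocks with a single bit test
            lo = shard * SHARD
            hi = min(lo + SHARD, len(self.free))
            for block in range(lo, hi):
                if self.free[block]:
                    self.free[block] = False
                    # Invariant: index bit is set iff the shard has a free block.
                    self.index[shard] = any(self.free[lo:hi])
                    return block
            raise AssertionError("index bit set, but shard had no free block")
        return None  # no free blocks anywhere

    def release(self, block: int) -> None:
        assert not self.free[block]
        self.free[block] = True
        self.index[block // SHARD] = True
```

The bug class the changelog alludes to is exactly the invariant in the comment: an index bit that disagrees with its shard silently skips free blocks or scans full ones, which is why the assertion is there.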
- The LSM tree fuzzer found a couple of bugs in its own code.
- Remove the format-time limit on the size of the data file. Before, the maximum size of the data file affected the layout of the superblock, and there wasn't any good way to increase this limit, short of recreating the cluster from scratch. Now, this limit only applies to the in-memory data structures: when a data file grows large, it is sufficient to just restart its replica with a larger amount of RAM.
- We finally have the "installation" page in our docs!
- Use Zig's new `if (@inComptime())` builtin to compute the checksum of an empty byte slice at compile time.
- Fix unit tests for the Go client and add them to the not rocket science set of checks.
- When validating our releases, use the `release` branch instead of `main` to ensure everything is in sync, and give the Java validation some retry logic to allow for delays in publishing to Central.
- Pad storage checksums from 128-bit to 256-bit. These are currently unused, but we're reserving the space for AEAD tags in future.
- Remove a trailing comma in our Java client sample code.
- Switch `bootstrap.sh` to use spaces only for indentation and ensure it's checked by our shellcheck lint.
- Update our `DESIGN.md` to better reflect storage fault probabilities and add a reference.
- Add `CHANGELOG.md` validation to our tidy lint script. We now check line length limits and trailing whitespace.
- In keeping with TigerStyle, rename `reserved_nonce` to `nonce_reserved`.
- Note in TigerStyle that callbacks go last in the list of parameters.
- Add an exception to the line length limit if there's a link in said line.
- Recursively check for padding in structs used for data serialization, ensuring that no uninitialized bytes can be stored or transmitted over the network. Previously, we checked only if the struct itself had no padding, but not its fields.
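The same recursive check can be illustrated with Python's ctypes, which pads structs using C rules: a struct is serialization-safe only if its own size equals the sum of its field sizes, and every nested struct field passes the same test. A hedged sketch (illustrative; TigerBeetle's actual check is comptime Zig):

```python
import ctypes

def assert_no_padding(struct_type: type) -> None:
    """Recursively verify that a ctypes struct has no padding bytes,
    so serializing its raw memory cannot leak uninitialized bytes."""
    fields_size = 0
    for _name, field_type in struct_type._fields_:
        if issubclass(field_type, ctypes.Structure):
            assert_no_padding(field_type)  # nested structs must be padding-free too
        fields_size += ctypes.sizeof(field_type)
    padding = ctypes.sizeof(struct_type) - fields_size
    assert padding == 0, f"{struct_type.__name__} has {padding} padding bytes"
```

For example, a struct holding a `u8` followed by a `u64` fails the check (seven alignment bytes of padding), while two `u64`s back to back pass.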
- Minor adjustments to the release process, making it easier to track updates in the documentation website when a new version is released, even if there are no changes in the documentation itself.
- Fix outdated documentation regarding 128-bit balances.
- Fix a bug discovered and reported during the Hackathon 2023, where the Node.js client's error messages were truncated due to an incorrect string concatenation adding a null byte `0x00` in the middle of the string.
- Update the Node.js samples instructions, guiding the user to install all dependencies before running the sample project.
- We've doubled the `Header`'s size to 256 bytes, paving the way for future improvements that will require extra space. Concurrently, this change also refactors a great deal of code. Some of the `Header`'s fields are shared by all messages; however, each `Command` also requires specific pieces of information that are only used by its kind of message, and it was necessary to repurpose and reinterpret fields so that the same header could hold different data depending on the context. Now, commands have their own specialized data type containing the fields that are only pertinent to the context, making the API much safer and its intent clearer.
- With larger headers (see #1295) we have enough room to make the cluster ID a 128-bit integer, allowing operators to generate random cluster IDs without the cost of having a centralized ID coordinator. Also updates the documentation and sample programs to reflect the new maximum batch size, which was reduced from 8191 to 8190 items after we doubled the header.
- Implement last-mile release artifact verification in CI.
- Bump the simulator's safety-phase max-ticks to avoid false positives from the liveness check.
- Fix a crash caused by a race between a commit and a repair acquiring a client-reply `Write`.
- Fix a crash caused by a race between state (table) sync and a move-table compaction. Both bugs didn't stand a chance in the Line of Fire of our deterministic simulator!
- Specify which CPU features are supported in builds.
- Improve `shell.zig`'s directory handling, to guard against mistakes with respect to the current working directory.
- Interpret a git hash as a VOPR seed, to enable reproducible simulator smoke tests in CI.
- Explicitly target glibc 2.7 when building client libraries, to make sure TigerBeetle clients are compatible with older distributions.
- Revive the TigerBeetle VOPRHub! Some previous changes left it on its Last Stand, but the bot is back in business finding liveness bugs: #1266
- Set the latest Docker image to track the latest release. Avoids language clients going out of sync with your default Docker replica installations.
- Move website doc generation for https://docs.tigerbeetle.com/ into the main repo.
- Address some release quirks with the .NET and Go client builds.
- Prove a tighter upper bound for the size of the manifest log. With this new bound, the manifest log is guaranteed to fit in allocated memory and is smaller. Additionally, manifest log compaction is paced depending on the current length of the log, balancing throughput and time-to-recovery.
- Recommend using ULID for event IDs. ULIDs are approximately sorted, which significantly improves common-case performance.
- Rewrite the Node.js client implementation to use the common C client underneath. While clients for other languages already use the underlying C library, the Node.js client duplicated some code for historical reasons, but now we can leave that duplication in the past. This Is A Photograph.
- Increase the block size to reduce latencies due to compaction work. Today, we use a simplistic schedule for compaction, which causes latency spikes at the end of the bar. While the future solution will implement smarter compaction pacing to distribute the work more evenly, we can get a quick win by tweaking the block and the bar size, which naturally evens out latency spikes.
- The new release process changed the names of the published artifacts (the version is no longer included in the name). This broke our quick start scripts, which we have fixed. Note that we are in the process of rolling out the new release process, so some unexpected breakage is expected.
- Speed up secondary index maintenance by statically distinguishing between insertions and updates. Faster than the speed of night!
- Include Docker images in the release.
- Simplify the superblock layout by using a linked list of blocks for the manifest log, so that the superblock needs to store only two block references. P.S. Note the PR number!

This is the start of the changelog. A lot happened before this point and is lost in the mist of git history, but any notable change from this point on shall be captured by this document.

- Remove bloom filters. TigerBeetle implements more targeted optimizations for both positive and negative lookups, making bloom filters a net loss.
- Increase the alignment of data blocks to 128KiB (from 512 bytes). Larger alignment gives operators better control over the physical layout of data on disk.
- Overhaul of CI and release infrastructure. CI and releases are now driven by Zig code. The main branch is gated on integration tests for all clients. This is done in preparation for the first TigerBeetle release.

For archeological inquiries, check out the state of the repository at the time of the first changelog: