Releases: streamingfast/firehose-solana
v1.1.2
v1.1.1
- Fix "unable to fetch block was skipped and should not have been requested" but when an endpoint is a load balancer pointing to different nodes -- this removes an optimization that reduces the number of RPC calls by assuming that no new block will appear below the 'latest slot' of your node. If you are indeed pointing to only single solana nodes, you can use the new
--optimize-single-target
flag to re-enable this optimization. - Fix startup always looking for 'first block' instead of 'cursor block', failing unnecessarily on non-archive nodes
- Add --network flag (default: mainnet) -- this flag is used only to enable a special fix around block 13334464, which you don't want to skip on testnet or devnet.
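A poller invocation combining these flags might look like the sketch below. The endpoint and start block are placeholders, and the exact flag placement relative to the positional arguments is an assumption; only the flag names come from the notes above:

```sh
# Hypothetical invocation: poll a single mainnet node over RPC,
# re-enabling the single-target optimization that is now off by default.
firesol fetch rpc https://your.solana.rpc/path 200000000 \
  --optimize-single-target \
  --network=mainnet
```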
v1.1.0
v1.0.5
v1.0.4
v1.0.3
v1.0.2
v1.0.1
- Fixed `tools check merged-blocks` default range when `-r <range>` is not provided to now be `[0, +∞]` (was previously `[HEAD, +∞]`).
- Fixed `tools check merged-blocks` to be able to run without a block range provided.
- Added API key based authentication to `tools firehose-client` and `tools firehose-single-block-client`; specify the value through the environment variable `FIREHOSE_API_KEY` (you can use the flag `--api-key-env-var` to change the variable's name to something other than `FIREHOSE_API_KEY`).
- Fixed `tools check merged-blocks` examples using block range (range should be specified as `[<start>]?:[<end>]`).
- Added `--substreams-tier2-max-concurrent-requests` to limit the number of concurrent requests to the tier2 Substreams service.
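As a sketch, the new API key authentication can be exercised as below. The endpoint and range are placeholders and the alternate variable name is hypothetical; only the environment variable and flag names come from the changelog:

```sh
# The key is read from FIREHOSE_API_KEY by default.
export FIREHOSE_API_KEY=my-secret-key
firesol tools firehose-client <endpoint> <range>

# Or point the tool at a differently named variable.
export MY_FIREHOSE_KEY=my-secret-key
firesol tools firehose-client --api-key-env-var=MY_FIREHOSE_KEY <endpoint> <range>
```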
v1.0.0
Operator notes
Important
- All firehose processes have been removed from this binary. You will need to run this program from the `firecore` binary
- Previous `firesol start ...` command becomes `firecore start ...`
- New Poller: firesol no longer gets blocks from a Bigtable instance: it fetches the blocks using RPC calls
- Run `firecore start reader` with `--reader-node-path=/path/to/firesol` and `--reader-node-arguments=fetch rpc https://your.solana.rpc/path <start-block>` (a worked sketch follows this list)
- New Block Format requires either fetching all the merged blocks again or converting them
  - Convert old blocks by running: `ACCEPT_SOLANA_LEGACY_BLOCK_FORMAT=true firesol upgrade-merged-blocks <source-store> <dest-store> <start-num:stop-num>`
- Upgrading your deployment will require a "stop the world" upgrade, where you start the new binaries, pointing to the new blocks, without any contact with the previous blocks or components.
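Put together, a minimal reader setup under the new layout might look like the following sketch. The binary path, RPC endpoint, and start block are placeholder values; the command shape itself comes from the notes above:

```sh
# Hypothetical setup: firecore supervises firesol, which polls blocks over RPC.
firecore start reader \
  --reader-node-path=/usr/local/bin/firesol \
  --reader-node-arguments="fetch rpc https://your.solana.rpc/path 250000000"
```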
Removed
- All the `firesol start ...` commands have been removed. Use the `firecore` binary to run the reader, merger, relayer, firehose and substreams services
- All the existing `firesol tools` commands
Added
- Added `fetch rpc <endpoint> <start_block>` command: fetches and prints the blocks in protobuf format, to be used by the `firecore start reader` command.
- Added `upgrade-merged-blocks` command to perform the upgrade on previous Solana merged-blocks (see the sketch after this list).
- Bumped firecore version to v1.2.0
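For instance, converting one 100k-block slice of legacy merged-blocks might look like this; the store URLs and block range are placeholders, while the command and environment variable come from the operator notes above:

```sh
# Hypothetical conversion run; repeat per range until all legacy blocks are upgraded.
ACCEPT_SOLANA_LEGACY_BLOCK_FORMAT=true firesol upgrade-merged-blocks \
  gs://my-legacy-merged-blocks gs://my-new-merged-blocks 100000000:100100000
```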
Fixed
- Fixed Substreams scheduler sometimes taking a long time to spawn more than a single worker.
v0.2.7
Operators
Important
We have had reports of older versions of this software creating corrupted merged-blocks files (with duplicate or extra out-of-bounds blocks).
This release adds additional validation of merged-blocks to prevent serving duplicate blocks from the firehose or substreams service.
This may cause a service outage if you have produced those blocks or downloaded them from another party who was affected by this bug.
- Find the affected files by running the following command (it can be run multiple times in parallel, over smaller ranges): `tools check merged-blocks-batch <merged-blocks-store> <start> <stop>`
- If you see any affected range, produce fixed merged-blocks files with the following command, on each range: `tools fix-bloated-merged-blocks <merged-blocks-store> <output-store> <start>:<stop>`
- Copy the merged-blocks files created in the output store over to your merged-blocks store, replacing the corrupted files. (A worked sketch of the whole pass follows this list.)
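A concrete repair pass might look like the sketch below. The store URLs and block ranges are placeholders, and the subcommands are shown run through the `firesol` binary as in this release:

```sh
# 1. Scan a slice of history for corrupted bundles (parallelize over smaller ranges).
firesol tools check merged-blocks-batch gs://my-merged-blocks 100000000 110000000

# 2. For each affected range reported, write fixed bundles to a separate store.
firesol tools fix-bloated-merged-blocks gs://my-merged-blocks gs://my-fixed-blocks 100200000:100300000

# 3. Copy the fixed files from gs://my-fixed-blocks over the corrupted
#    ones in gs://my-merged-blocks.
```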
Added
- Firehose logs now include auth information (userID, keyID, realIP) along with blocks + egress bytes sent.
- Added `tools check merged-blocks-batch` to simplify checking block continuity in batched mode, optionally writing results to a store
- Added the command `tools fix-bloated-merged-blocks` to try to fix merged-blocks that contain duplicates and blocks outside of their range.
- Command `tools print one-block` and `tools print merged-blocks` now support a new `--output-format jsonl` format. Bytes data can now be printed as hex or base58 strings instead of base64 strings (see the sketch after this list).
- Added a retry loop for the merger when walking one-block files, covering use-cases where the bundle reader was sending files too fast and the merger was not waiting to accumulate enough files to start bundling merged files
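For example, dumping a merged-blocks bundle as JSON lines might look like this; the store and block number are placeholders and the positional argument layout is assumed from the surrounding notes:

```sh
# Hypothetical: print a merged-blocks bundle as newline-delimited JSON.
firesol tools print merged-blocks gs://my-merged-blocks 100000000 --output-format jsonl
```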
Fixed
- Bumped `bstream`: the `filesource` will now refuse to read blocks from a merged-blocks file if they are not ordered or if there are any duplicates.
- The command `tools download-from-firehose` will now fail if it is being served blocks "out of order", to prevent any corrupted merged-blocks from being created.
- The command `tools print merged-blocks` did not print the whole merged-blocks file and its arguments were confusing: it will now parse `<start_block>` as a uint64.
- The command `tools unmerge-blocks` did not cover the whole given range; now fixed.
Removed
- Breaking: The `reader-node-log-to-zap` flag has been removed. This was a source of confusion for operators reporting bugs, because the node's logs were merged within the normal Firehose logs and it was not obvious. Now, logs from the node will be printed to `stdout` unformatted, exactly as presented by the chain. Filtering of such logs must now be delegated to the node's implementation and depends on the node's binary; refer to it to determine how you can tweak the logging verbosity emitted by the node (see the sketch after this list).
- Flag `substreams-rpc-endpoints` removed; it was present by mistake and actually unused.
- Flag `substreams-rpc-cache-store-url` removed; it was present by mistake and actually unused.
- Flag `substreams-rpc-cache-chunk-size` removed; it was present by mistake and actually unused.
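Since node logs now pass through unfiltered, one option is to filter them externally; a minimal sketch, where the component placeholder and the grep pattern are illustrative and not part of the release:

```sh
# Hypothetical: node logs arrive unformatted on stdout, so filter externally,
# e.g. keeping only warning/error lines.
firesol start <components...> 2>&1 | grep -E 'WARN|ERROR'
```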
(Note: release 0.2.6 was never actually released)