Index multiple blockfiles #87

Merged · 11 commits · Feb 1, 2022
README.md (13 changes: 12 additions, 1 deletion)
@@ -9,10 +9,21 @@ the NFT creator would create a message that assigned a new NFT Y to the satoshi
with ordinal X. The owner of the UTXO containing the satoshi with ordinal X
owns NFT Y, and can transfer that ownership to another person with a
transaction that sends ordinal Y to a UTXO that the new owner controls. The
current owner can sign a message proving that they own a given UTXO, wich also
current owner can sign a message proving that they own a given UTXO, which also
serves as proof of ownership of all the NFTs assigned to satoshis within that
UTXO.
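
Not part of this change, but as a rough sketch of the ownership model described above (all type and field names here are hypothetical, not code from this repository), the two relationships can be modeled as mappings keyed by ordinal:

```rust
use std::collections::HashMap;

// Hypothetical sketch of the model described above; none of these types
// exist in `ord` itself.
type Ordinal = u64; // ordinal number of a satoshi
type Nft = String;  // stand-in for an NFT assignment message
type Utxo = String; // stand-in for an outpoint, e.g. "txid:vout"

struct Ownership {
    // NFT Y assigned to the satoshi with ordinal X
    nft_by_ordinal: HashMap<Ordinal, Nft>,
    // the UTXO that currently contains the satoshi with ordinal X
    utxo_by_ordinal: HashMap<Ordinal, Utxo>,
}

impl Ownership {
    // Whoever can sign for the UTXO containing ordinal X owns NFT Y.
    fn owner_of(&self, ordinal: Ordinal) -> Option<(&Nft, &Utxo)> {
        Some((
            self.nft_by_ordinal.get(&ordinal)?,
            self.utxo_by_ordinal.get(&ordinal)?,
        ))
    }
}
```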

## Index and Caveats

The `ord` command builds an index from the contents of a local `bitcoind`'s
data directory; `bitcoind` must be halted while the index is built. Currently,
the index is rebuilt every time `ord` runs, but this is a temporary limitation.
Reorgs are also not yet handled properly.

The index is stored in `index.redb`, and should not be concurrently modified
while an instance of `ord` is running, or used by two `ord` instances
simultaneously.
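
For a sense of how `ord` finds the blockfiles it indexes, here is a minimal standalone sketch of the discovery loop this change adds to `Index::index_blockfile` (the helper function below is made up for illustration; the real code indexes each file as it goes rather than collecting them):

```rust
use std::{fs, io, path::Path};

// Minimal sketch of the blockfile discovery loop added in this PR; the
// function name is hypothetical.
fn read_blockfiles(blocksdir: &Path) -> io::Result<Vec<Vec<u8>>> {
    let mut blockfiles = Vec::new();
    for i in 0.. {
        // bitcoind names blockfiles blk00000.dat, blk00001.dat, ...
        match fs::read(blocksdir.join(format!("blk{:05}.dat", i))) {
            Ok(bytes) => blockfiles.push(bytes),
            // Stop at the first missing file.
            Err(err) if err.kind() == io::ErrorKind::NotFound => break,
            Err(err) => return Err(err),
        }
    }
    Ok(blockfiles)
}
```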

## Numbering

Satoshis are assigned ordinal numbers in the order in which they are mined.
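
Assuming new satoshis are created only by the block subsidy (50 BTC, halving every 210,000 blocks), a sketch of where a given block's newly mined satoshis would start, not taken from this repository, might look like:

```rust
// Illustrative sketch only: assumes the standard subsidy schedule and that
// satoshis are numbered in the order they are mined.
const COIN_VALUE: u64 = 100_000_000;

fn subsidy(height: u64) -> u64 {
    (50 * COIN_VALUE) >> (height / 210_000)
}

// Ordinal of the first satoshi mined in the block at `height`.
fn first_ordinal(height: u64) -> u64 {
    (0..height).map(subsidy).sum()
}

// Example: the genesis block's subsidy covers ordinals 0..5_000_000_000,
// so the block at height 1 begins at ordinal 5_000_000_000.
```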
src/index.rs (42 changes: 22 additions, 20 deletions)
@@ -6,9 +6,9 @@ pub(crate) struct Index {
}

impl Index {
const HASH_TO_BLOCK: &'static str = "HASH_TO_BLOCK";
const HASH_TO_CHILDREN: &'static str = "HASH_TO_CHILDREN";
const HASH_TO_HEIGHT: &'static str = "HASH_TO_HEIGHT";
const HASH_TO_OFFSET: &'static str = "HASH_TO_OFFSET";
const HEIGHT_TO_HASH: &'static str = "HEIGHT_TO_HASH";
const OUTPOINT_TO_ORDINAL_RANGES: &'static str = "OUTPOINT_TO_ORDINAL_RANGES";

@@ -138,15 +138,24 @@ impl Index {
}

fn index_blockfile(&self) -> Result {
{
for i in 0.. {
let blocks = match fs::read(self.blocksdir.join(format!("blk{:05}.dat", i))) {
Ok(blocks) => blocks,
Err(err) => {
if err.kind() == io::ErrorKind::NotFound {
break;
} else {
return Err(err.into());
}
}
};

let tx = self.database.begin_write()?;

let mut hash_to_children: MultimapTable<[u8], [u8]> =
tx.open_multimap_table(Self::HASH_TO_CHILDREN)?;

let mut hash_to_offset: Table<[u8], u64> = tx.open_table(Self::HASH_TO_OFFSET)?;

let blocks = fs::read(self.blocksdir.join("blk00000.dat"))?;
let mut hash_to_block: Table<[u8], [u8]> = tx.open_table(Self::HASH_TO_BLOCK)?;

let mut offset = 0;

@@ -163,7 +172,7 @@ impl Index {

hash_to_children.insert(&block.header.prev_blockhash, &block.block_hash())?;

hash_to_offset.insert(&block.block_hash(), &(offset as u64))?;
hash_to_block.insert(&block.block_hash(), &blocks[range.clone()])?;

offset = range.end;

@@ -224,15 +233,14 @@ impl Index {
Some(guard) => {
let hash = guard.to_value();

let hash_to_offset: ReadOnlyTable<[u8], u64> = tx.open_table(Self::HASH_TO_OFFSET)?;
let offset = hash_to_offset
.get(hash)?
.ok_or("Could not find offset to block in index")?
.to_value() as usize;

let blocks = fs::read(self.blocksdir.join("blk00000.dat"))?;
let hash_to_block: ReadOnlyTable<[u8], [u8]> = tx.open_table(Self::HASH_TO_BLOCK)?;

Ok(Some(Self::decode_block_at(&blocks, offset)?))
Ok(Some(Block::consensus_decode(
hash_to_block
.get(hash)?
.ok_or("Could not find block in index")?
.to_value(),
)?))
}
}
}
@@ -247,12 +255,6 @@ impl Index {
Ok(offset..offset + len)
}

fn decode_block_at(blocks: &[u8], offset: usize) -> Result<Block> {
Ok(Block::consensus_decode(
&blocks[Self::block_range_at(blocks, offset)?],
)?)
}

pub(crate) fn list(&self, outpoint: OutPoint) -> Result<Vec<(u64, u64)>> {
let rtx = self.database.begin_read()?;
let outpoint_to_ordinal_ranges: ReadOnlyTable<[u8], [u8]> =
src/main.rs (2 changes: 1 addition, 1 deletion)
@@ -16,7 +16,7 @@ use {
cmp::Ordering,
collections::VecDeque,
fmt::{self, Display, Formatter},
fs,
fs, io,
ops::{Add, AddAssign, Deref, Range, Sub},
path::{Path, PathBuf},
process,
tests/find.rs (14 changes: 13 additions, 1 deletion)
@@ -83,7 +83,19 @@ fn regression_empty_block_crash() -> Result {
Test::new()?
.command("find --blocksdir blocks 0 --slot --as-of-height 1")
.block()
.block_without_coinbase()
.block_with_coinbase(false)
.expected_stdout("0.0.0.0\n")
.run()
}

#[test]
fn index_multiple_blockfiles() -> Result {
Test::new()?
.command("find --blocksdir blocks 0 --as-of-height 1 --slot")
.expected_stdout("1.1.0.0\n")
.block()
.blockfile()
.block()
.transaction(&[(0, 0, 0)], 1)
.run()
}
tests/integration.rs (110 changes: 60 additions, 50 deletions)
@@ -11,6 +11,7 @@ use {
error::Error,
fs::{self, File},
io::{self, Write},
iter,
process::Command,
str,
},
@@ -30,24 +31,26 @@ type Result<T = ()> = std::result::Result<T, Box<dyn Error>>;

struct Test {
args: Vec<String>,
expected_stdout: String,
expected_stderr: String,
blockfiles: Vec<usize>,
blocks: Vec<Block>,
expected_status: i32,
expected_stderr: String,
expected_stdout: String,
ignore_stdout: bool,
tempdir: TempDir,
blocks: Vec<Block>,
}

impl Test {
fn new() -> Result<Self> {
Ok(Self {
args: Vec::new(),
expected_stdout: String::new(),
expected_stderr: String::new(),
blockfiles: Vec::new(),
blocks: Vec::new(),
expected_status: 0,
expected_stderr: String::new(),
expected_stdout: String::new(),
ignore_stdout: false,
tempdir: TempDir::new()?,
blocks: Vec::new(),
})
}

@@ -126,41 +129,11 @@ impl Test {
Ok(stdout.to_owned())
}

fn block(mut self) -> Self {
if self.blocks.is_empty() {
self.blocks.push(genesis_block(Network::Bitcoin));
} else {
self.blocks.push(Block {
header: BlockHeader {
version: 0,
prev_blockhash: self.blocks.last().unwrap().block_hash(),
merkle_root: Default::default(),
time: 0,
bits: 0,
nonce: 0,
},
txdata: vec![Transaction {
version: 0,
lock_time: 0,
input: vec![TxIn {
previous_output: OutPoint::null(),
script_sig: script::Builder::new()
.push_scriptint(self.blocks.len().try_into().unwrap())
.into_script(),
sequence: 0,
witness: vec![],
}],
output: vec![TxOut {
value: 50 * COIN_VALUE,
script_pubkey: script::Builder::new().into_script(),
}],
}],
});
}
self
fn block(self) -> Self {
self.block_with_coinbase(true)
}

fn block_without_coinbase(mut self) -> Self {
fn block_with_coinbase(mut self, coinbase: bool) -> Self {
if self.blocks.is_empty() {
self.blocks.push(genesis_block(Network::Bitcoin));
} else {
@@ -173,7 +146,26 @@
bits: 0,
nonce: 0,
},
txdata: Vec::new(),
txdata: if coinbase {
vec![Transaction {
version: 0,
lock_time: 0,
input: vec![TxIn {
previous_output: OutPoint::null(),
script_sig: script::Builder::new()
.push_scriptint(self.blocks.len().try_into().unwrap())
.into_script(),
sequence: 0,
witness: vec![],
}],
output: vec![TxOut {
value: 50 * COIN_VALUE,
script_pubkey: script::Builder::new().into_script(),
}],
}]
} else {
Vec::new()
},
});
}
self
@@ -214,20 +206,38 @@
self
}

fn blockfile(mut self) -> Self {
self.blockfiles.push(self.blocks.len());
self
}

fn populate_blocksdir(&self) -> io::Result<()> {
let blocksdir = self.tempdir.path().join("blocks");
fs::create_dir(&blocksdir)?;
let mut blockfile = File::create(blocksdir.join("blk00000.dat"))?;

for block in &self.blocks {
let mut encoded = Vec::new();
block.consensus_encode(&mut encoded)?;
blockfile.write_all(&[0xf9, 0xbe, 0xb4, 0xd9])?;
blockfile.write_all(&(encoded.len() as u32).to_le_bytes())?;
blockfile.write_all(&encoded)?;
for tx in &block.txdata {
eprintln!("{}", tx.txid());

let mut start = 0;

for (i, end) in self
.blockfiles
.iter()
.copied()
.chain(iter::once(self.blocks.len()))
.enumerate()
{
let mut blockfile = File::create(blocksdir.join(format!("blk{:05}.dat", i)))?;

for block in &self.blocks[start..end] {
let mut encoded = Vec::new();
block.consensus_encode(&mut encoded)?;
blockfile.write_all(&[0xf9, 0xbe, 0xb4, 0xd9])?;
blockfile.write_all(&(encoded.len() as u32).to_le_bytes())?;
blockfile.write_all(&encoded)?;
for tx in &block.txdata {
eprintln!("{}", tx.txid());
}
}

start = end;
}

Ok(())