electrs exits with code 135 #226

Closed
lukechilds opened this issue Mar 30, 2020 · 51 comments

Comments
@lukechilds
Contributor

I have the following Docker Compose setup:

version: "2"
services:

  bitcoind:
    image: lukechilds/bitcoind:v0.19.1
    stop_grace_period: 10m
    restart: always
    expose:
      - "8332"
    volumes:
      - ./data/bitcoind:/data/.bitcoin
    command: >-
      -rpcbind=0.0.0.0
      -rpcallowip=0.0.0.0/0
      -disablewallet=1

  electrs:
    image: electrs-app:latest
    stop_grace_period: 10m
    restart: always
    user: root
    ports:
      - "50001:50001"
    volumes:
      - ./data/bitcoind:/root/.bitcoin:ro
      - ./data/electrs:/root
    command: >-
      electrs
      -vvvv
      --timestamp
      --db-dir /root/db
      --daemon-rpc-addr bitcoind:8332

I changed it to run as the root user because lukechilds/bitcoind currently runs as root, so electrs-app needs to be root to read .bitcoin/.cookie. I know I should fix that, but it's fine for just playing around.

Anyway, I don't think it's root that's causing the issue. I'm sure (I think?) this exact config has worked before, but for some reason when I try to run it now I get:

pi@pinode:~ $ docker-compose up electrs
Starting pi_electrs_1 ... done
Attaching to pi_electrs_1
electrs_1   | Config { log: StdErrLog { verbosity: Trace, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: bitcoin, db_path: "/root/db/mainnet", daemon_dir: "/root/.bitcoin", daemon_rpc_addr: V4(172.18.0.2:8332), electrum_rpc_addr: V4(127.0.0.1:50001), monitoring_addr: V4(127.0.0.1:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 4, tx_cache_size: 10485760, txid_limit: 100, server_banner: "Welcome to electrs 0.8.3 (Electrum Rust Server)!", blocktxids_cache_size: 10485760 }
electrs_1   | 2020-03-30T18:49:48.987+00:00 - DEBUG - Server listening on 127.0.0.1:4224
electrs_1   | 2020-03-30T18:49:48.989+00:00 - DEBUG - Running accept thread
electrs_1   | 2020-03-30T18:49:48.996+00:00 - INFO - NetworkInfo { version: 190100, subversion: "/Satoshi:0.19.1/", relayfee: 0.00001 }
electrs_1   | 2020-03-30T18:49:48.997+00:00 - INFO - BlockchainInfo { chain: "main", blocks: 623645, headers: 623645, bestblockhash: "0000000000000000000f359f884320a688b7c9d29b2aa3d77e25b40d0c2d149a", pruned: false, initialblockdownload:
false }
electrs_1   | 2020-03-30T18:49:48.998+00:00 - DEBUG - opening DB at "/root/db/mainnet"
pi_electrs_1 exited with code 135

Is there any way to get more debug output to see why it's failing? Does code 135 mean something specific?

Thanks!

@lukechilds
Contributor Author

Oh and my Docker image is built from the included Dockerfile in this repo at the current tip of master (9ecf131) if that helps.

@lukechilds
Contributor Author

lukechilds commented Mar 31, 2020

Confirmed the issue definitely isn't root by setting correct file permissions and running electrs as non-root. I still get the same issue.

pi@pinode:~/electrs-tmp-data $ docker run --network pi_default --volume $HOME/data/bitcoind:/home/user/.bitcoin:ro --volume $PWD:/home/user --rm -i -t electrs-app electrs -vvvv --timestamp --db-dir /home/user/db --daemon-rpc-addr bitcoind:8332
Config { log: StdErrLog { verbosity: Trace, quiet: false, timestamp: Millisecond, modules: [], writer: "stderr", color_choice: Auto }, network_type: bitcoin, db_path: "/home/user/db/mainnet", daemon_dir: "/home/user/.bitcoin", daemon_rpc_addr: V4(172.18.0.2:8332), electrum_rpc_addr: V4(127.0.0.1:50001), monitoring_addr: V4(127.0.0.1:4224), jsonrpc_import: false, index_batch_size: 100, bulk_index_threads: 4, tx_cache_size: 10485760, txid_limit: 100, server_banner: "Welcome to electrs 0.8.3 (Electrum Rust Server)!", blocktxids_cache_size: 10485760 }
2020-03-31T05:12:28.559+00:00 - DEBUG - Server listening on 127.0.0.1:4224
2020-03-31T05:12:28.561+00:00 - DEBUG - Running accept thread
2020-03-31T05:12:28.564+00:00 - INFO - NetworkInfo { version: 190100, subversion: "/Satoshi:0.19.1/", relayfee: 0.00001 }
2020-03-31T05:12:28.565+00:00 - INFO - BlockchainInfo { chain: "main", blocks: 623713, headers: 623713, bestblockhash: "0000000000000000000525bb929a7eceb05ef1d496a312390c9c8edc2dc078be", pruned: false, initialblockdownload: false }
2020-03-31T05:12:28.567+00:00 - DEBUG - opening DB at "/home/user/db/mainnet"

I also noticed this core binary file appears in /home/user. Is this some kind of crash dump that could be used for debugging? (db/mainnet/LOG exists but is empty)

pi@pinode:~/electrs-tmp-data $ ls
core  db

@Kixunil
Contributor

Kixunil commented Apr 1, 2020

I believe 135 means the process was killed by a signal (I didn't find that code in the electrs source). I guess this comes from RocksDB, since it crashes right after the log line about opening the DB. I think there were some issues with RocksDB on the RPi, which you seem to be using.
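
For reference, Docker and POSIX shells report a process killed by signal N as exit status 128 + N, so 135 decodes to signal 7, which is SIGBUS on Linux. A minimal sketch of that decoding:

// Minimal sketch: decode an exit status of the form 128 + N into the fatal
// signal number N, the convention Docker and POSIX shells use.
fn main() {
    let status = 135;
    if status > 128 {
        // 135 - 128 = 7, and signal 7 on Linux is SIGBUS.
        println!("terminated by signal {}", status - 128);
    } else {
        println!("exited with code {}", status);
    }
}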

The core file is definitely a coredump. I don't have the appropriate RPi at hand, so I'd suggest running gdb electrs core (maybe you'll need to specify the paths and install gdb somehow; I have no clue what kind of magic Docker does around that), then typing the bt command and sending us the output.

@lukechilds
Contributor Author

Thanks @Kixunil! I'll try and get a backtrace for you.

@lukechilds
Contributor Author

Doesn't seem to be much there:

$ gdb eletrs core
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "arm-linux-gnueabihf".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
eletrs: No such file or directory.
[New LWP 1]
[New LWP 10]
[New LWP 9]
[New LWP 7]
[New LWP 12]
[New LWP 14]
[New LWP 15]
[New LWP 16]
[New LWP 17]
[New LWP 11]
[New LWP 8]
[New LWP 13]
Core was generated by `electrs -vvvv --timestamp --db-dir /home/user/db --daemon-rpc-addr bitcoind:833'.
Program terminated with signal SIGBUS, Bus error.
#0  0x006b25bc in ?? ()
[Current thread is 1 (LWP 1)]
(gdb) bt
#0  0x006b25bc in ?? ()
#1  0x007a5f94 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

@Kixunil
Contributor

Kixunil commented Apr 2, 2020

eletrs: No such file or directory. - I guess you need to use a full path. I think it's /home/user/.cargo/bin/electrs

Anyway, I just realized it's likely that it won't contain enough information for debugging because of release mode. You need to remove --release from cargo build and add --debug to cargo install in the Dockerfile and rebuild to switch to debug mode.

At least we have confirmation it's a signal. :)

@lukechilds
Contributor Author

eletrs: No such file or directory. - I guess you need to use a full path. I think it's /home/user/.cargo/bin/electrs

My bad, it was a typo; I missed the "c" in electrs. I tried again with the correct name and got the same output though.

Anyway, I just realized it's likely that it won't contain enough information for debugging because of release mode. You need to remove --release from cargo build and add --debug to cargo install in the Dockerfile and rebuild to switch to debug mode.

I'll rebuild the Docker image with those flag changes and try again. Do I need to create a new coredump with the new image, or can I just gdb electrs core with the original core dump?

@Kixunil
Contributor

Kixunil commented Apr 2, 2020

You will definitely need a new coredump.

@lukechilds
Contributor Author

Hmm, this is very odd.

With the following change:

diff --git a/Dockerfile b/Dockerfile
index 228b4f1..b3798be 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -10,8 +10,8 @@ USER user
 WORKDIR /home/user
 COPY ./ /home/user

-RUN cargo build --release
-RUN cargo install --path .
+RUN cargo build
+RUN cargo install --debug --path .

 # Electrum RPC
 EXPOSE 50001

I can build the Docker image and attempt to start the sync with the exact same command as before and it works without crashing.

I'll let it sync and then try rebuilding without those flag changes and see if it's still stable. 🤷‍♂️

@Kixunil
Contributor

Kixunil commented Apr 2, 2020

Might be a bug in the compiler, miscompiling with optimizations enabled. It would be really helpful to figure out which one (Rust or C) it is.

@lukechilds
Contributor Author

I'll post back with the results after the initial index has finished, it's at about 40% atm.

It would be really helpful to figure out which one (Rust or C) it is.

By this do you mean whether it's electrs or RocksDB?

@Kixunil
Contributor

Kixunil commented Apr 3, 2020

By this do you mean whether it's electrs or RocksDB?

Yes.

@lukechilds
Contributor Author

lukechilds commented Apr 6, 2020

I can confirm that the issue doesn't occur in debug mode.

The full sync has completed in debug mode.
I can start and stop the debug binary and it continues to work as normal.

Starting the release binary and pointing it at the db dir indexed by the debug binary doesn't work either; it fails with the exact same error I reported in the initial comment when trying to start a fresh sync.

What else can I do to help you figure out if the error is happening in Rust or C?

@Kixunil
Contributor

Kixunil commented Apr 7, 2020

Damn, that sucks. It's also possible to have a release build with debug symbols: you need to add debug = true to [profile.release] in electrs's Cargo.toml and then build it with --release. When it crashes, you should have the debug symbols available in gdb.

I also noticed there's lto = true, so it might be worth trying to disable it in a separate test - please try it with lto and debug symbols first, so we can understand the issue better.

Other ideas:

  • if lto = false doesn't crash, then try lto = "thin"
  • if lto = false crashes, try adding codegen-units = 1 or incremental = false

Maybe I should make my idea clear: if any of the options above help, it is probably a bug in rustc - file a bug against it. (But it may still be a result of UB, most likely in RocksDB, though it could be in electrs's dependencies too.) If gdb points to RocksDB, file a bug report against it. If it's in Rust code, inspect the unsafe code in the dependencies, and file a bug against a dependency, or against rustc if none is found.

Note that this is all an optimistic heuristic: unfortunately, if there's UB anywhere, it could appear as an entirely different problem somewhere else and could be a nightmare to find. (I have experienced such an issue in some C code a few years ago.)

@lukechilds
Contributor Author

Ok, testing debug = true first:

diff --git a/Cargo.toml b/Cargo.toml
index a5f2ca5..c18ea1c 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -17,6 +17,7 @@ spec = "config_spec.toml"

 [profile.release]
 lto = true
+debug = true

 [features]
 latest_rust = []  # use latest Rust features (otherwise, support Rust 1.34)

@Kixunil
Contributor

Kixunil commented Apr 7, 2020

If all else fails, maybe we could try sanitizers.

@lukechilds
Contributor Author

lukechilds commented Apr 7, 2020

I can get some useful info out of gdb now with debug = true on the --release build.

$ gdb electrs core
GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "arm-linux-gnueabihf".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from electrs...done.
[New LWP 1]
[New LWP 8]
[New LWP 13]
[New LWP 7]
[New LWP 11]
[New LWP 15]
[New LWP 12]
[New LWP 16]
[New LWP 14]
[New LWP 9]
[New LWP 6]
[New LWP 10]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
Core was generated by `electrs -vvvv --timestamp --db-dir /home/user/db --daemon-rpc-addr bitcoind:833'.
Program terminated with signal SIGBUS, Bus error.
#0  rocksdb::DBImpl::DBImpl (this=0x1f44548, options=..., dbname=..., seq_per_batch=<optimized out>, batch_per_txn=<error reading variable: access outside bounds of object referenced via synthetic pointer>) at rocksdb/db/db_impl.cc:154
154           logfile_number_(0),
[Current thread is 1 (Thread 0xb6f35220 (LWP 1))]
warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts
of file /usr/local/cargo/bin/electrs.
Use `info auto-load python-scripts [REGEXP]' to list them.
(gdb) bt
#0  rocksdb::DBImpl::DBImpl (this=0x1f44548, options=..., dbname=..., seq_per_batch=<optimized out>, batch_per_txn=<error reading variable: access outside bounds of object referenced via synthetic pointer>) at rocksdb/db/db_impl.cc:154
#1  0x0078c5ec in rocksdb::DBImpl::Open (db_options=..., dbname=..., column_families=..., handles=<optimized out>, dbptr=<optimized out>,
    seq_per_batch=<error reading variable: access outside bounds of object referenced via synthetic pointer>, batch_per_txn=<optimized out>) at rocksdb/db/db_impl_open.cc:1112
Python Exception <class 'gdb.error'> There is no member named _M_dataplus.:
#2  0x0078bbb0 in rocksdb::DB::Open (db_options=..., dbname=, column_families=std::vector of length 0, capacity 0, handles=<optimized out>, dbptr=0xbedb7318) at rocksdb/db/db_impl_open.cc:1085
#3  rocksdb::DB::Open (options=..., dbname=..., dbptr=<optimized out>) at rocksdb/db/db_impl_open.cc:1070
#4  0x00746b74 in rocksdb_open (options=0x1f41e80, name=0x1f43d20 "/home/user/db/mainnet", errptr=0xbedb7600) at rocksdb/db/c.cc:498
#5  0x007313a8 in rocksdb::db::_$LT$impl$u20$rocksdb..DB$GT$::open_cf_descriptors::h29d9618ffc53b8d8 (path=<optimized out>, cfs=..., opts=<optimized out>)
    at /usr/local/cargo/registry/src/gh.neting.cc-1ecc6299db9ec823/rocksdb-0.12.2/src/ffi_util.rs:39
#6  rocksdb::db::_$LT$impl$u20$rocksdb..DB$GT$::open_cf::h0263f574c73f9acf (path=0x1f42a00, cfs=..., opts=<optimized out>) at /usr/local/cargo/registry/src/gh.neting.cc-1ecc6299db9ec823/rocksdb-0.12.2/src/db.rs:696
#7  rocksdb::db::_$LT$impl$u20$rocksdb..DB$GT$::open::h7989e28aa64f9f89 (path=0x1f42a00, opts=<optimized out>) at /usr/local/cargo/registry/src/gh.neting.cc-1ecc6299db9ec823/rocksdb-0.12.2/src/db.rs:680
#8  electrs::store::DBStore::open_opts::h63281bd60a3c96e1 (opts=...) at src/store.rs:60
#9  electrs::store::DBStore::open::h0024b1a9fbaff635 (path=<optimized out>, low_memory=<optimized out>) at src/store.rs:67
#10 electrs::run_server::hd98dbb48d585005f (config=<optimized out>) at src/bin/electrs.rs:43
#11 electrs::main::ha318391e5c9a2f93 () at src/bin/electrs.rs:85
#12 0x007396b4 in std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::hc1a8bfa50b148e76 () at /rustc/91856ed52c58aa5ba66a015354d1cc69e9779bdf/src/libstd/rt.rs:64
#13 0x0072d508 in main () at /rustc/91856ed52c58aa5ba66a015354d1cc69e9779bdf/src/libcore/iter/traits/iterator.rs:1571

@lukechilds
Contributor Author

Testing again with:

diff --git a/Cargo.toml b/Cargo.toml
index a5f2ca5..f570efb 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -16,7 +16,8 @@ build = "build.rs"
 spec = "config_spec.toml"

 [profile.release]
-lto = true
+lto = false
+debug = true

 [features]
 latest_rust = []  # use latest Rust features (otherwise, support Rust 1.34)

to see if lto = false stops the crash.

@lukechilds
Contributor Author

lto = false still crashes too.

if lto = false crashes, try adding codegen-units = 1 or incremental = false

Should I add these options in addition to or instead of lto = false?

@Kixunil
Contributor

Kixunil commented Apr 8, 2020

Which version of Raspberry Pi do you use? If it's RPi 3, then I think I know what's happening:

The line it crashes on is attempting to initialize a 64-bit integer, which on ARM has to sit at an address that's a multiple of 8 bytes (8B == 64b), but the allocator returns pointers aligned to 4B (32b). An access with invalid alignment produces SIGBUS. However, there are slower instructions that don't require aligned access, and LLVM uses those in debug mode instead, which is why it doesn't crash in debug builds.
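
To make that concrete, here is a minimal Rust sketch of the same failure mode (the real crash is in RocksDB's C++ code; the offset pointer below just stands in for an under-aligned allocation):

use std::mem::align_of;

fn main() {
    // On arm-linux-gnueabihf a u64 must be 8-byte aligned.
    println!("align_of::<u64>() = {}", align_of::<u64>());

    let buf = [0u8; 16];
    // A pointer 4 bytes into the buffer is only guaranteed 4-byte alignment,
    // similar to what a 32-bit-aligned allocator can hand back.
    let p = unsafe { buf.as_ptr().add(4) } as *const u64;

    // A plain `*p` would be undefined behaviour when `p` is not 8-byte
    // aligned; an optimized ARM build may emit an LDRD that traps with
    // SIGBUS, while an unoptimized build uses byte-wise loads and "works".
    // let v = unsafe { *p };

    // read_unaligned never assumes alignment, so it is always well-defined:
    let v = unsafe { p.read_unaligned() };
    println!("v = {}", v);
}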

In an ideal world the allocator would be capable of returning 64-bit-aligned pointers even on 32-bit, but I don't know if we can do anything about that. So the next best thing seems to be adding -fmax-type-align=32 to the compile options when compiling for the RPi 3.

I think the next approach should be to try patching the cc crate that librocksdb-sys uses for compiling and see if it helps. If it does, I'd make a PR. The reason for patching cc instead of librocksdb-sys is that it should help all other projects compiling C(++) on the RPi 3.

I need to do some other stuff right now; I will look at patching later. In the meantime, please let me know if it's really RPi 3 (or lower; RPi 4 would be really strange) and if you have the time, familiarize yourself with how to replace cargo dependencies (Google it, I know it's possible, I just don't remember how exactly), because you will need to do it in order to test my patch of cc. Of course, if you are able to patch it, you may try it yourself. :)

@lukechilds
Contributor Author

In the meantime, please let me know if it's really RPi 3 (or lower; RPi 4 would be really strange)

It's a Pi 4 🤔

and if you have the time, familiarize yourself with how to replace cargo dependencies (Google it, I know it's possible, I just don't remember how exactly), because you will need to do it in order to test my patch of cc. Of course, if you are able to patch it, you may try it yourself. :)

I'm only really experienced with high-level languages. I wish I could help more with this, but I really wouldn't know where to start digging into Rust/C internals to fix low-level compilation/architecture bugs.

I'm sure I can manage to swap out the Cargo dep though.

@Kixunil
Contributor

Kixunil commented Apr 8, 2020

Ah, according to a Hackaday article, Raspbian is 32-bit even on the Pi 4. That makes even more sense; a 32-bit CPU requiring 64-bit-aligned access sounded a bit strange to me.

So let's proceed with a patch to cc. I think the logic should be: if the CPU is 64-bit but the OS is 32-bit, then fix up the alignment. (I wouldn't be surprised if there's more to fix than alignment, but I don't know what else might be needed.)

This should be a very high-level patch - just adding a command-line option in the above-mentioned scenario. BTW, I think Rust is pretty nice even for high-level people (modern features like sum types, iterators, async/await), if you ever want to give it a try. :)
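
A rough sketch of what that kind of logic could look like from a Cargo build script using the cc crate (the shim.c file and the exact flag are illustrative; the actual patch lives in cc itself, see the branch below):

// build.rs (illustrative sketch only, not the actual cc patch)
fn main() {
    let target = std::env::var("TARGET").unwrap_or_default();
    let mut build = cc::Build::new();
    build.file("src/shim.c"); // hypothetical C file compiled by this crate

    // armv7-unknown-linux-gnueabihf is a 32-bit userland that may be running
    // on a 64-bit CPU, e.g. Raspbian on a Pi 4.
    if target.starts_with("armv7-unknown-linux") {
        // flag_if_supported probes the compiler first, so an unrecognized
        // flag is skipped instead of failing the whole build.
        build.flag_if_supported("-fmax-type-align=32");
    }

    build.compile("shim");
}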

Kixunil added a commit to Kixunil/cc-rs that referenced this issue Apr 8, 2020
There's an issue with compiling C(++) code on 64-bit CPUs running 32-bit
OSes, which have 32-bit-aligned allocators. The resulting binary will
use aligned accesses in release mode.

See romanz/electrs#226 for an example of such
issue.

This change adds default flags setting the alignment to whatever the pointer
alignment is. I realize the limitations of this change, such as:

* Works in Cargo build scripts only (the major use case for the `cc` crate)
* Is unnecessary if the allocator actually supports 64-bit alignment
* Is incorrect if the allocator supports a smaller alignment than the
  pointer width.

However, resolving this issue requires either matching on all possible
triples, which I don't have the time to implement, or somehow fetching
the configuration for a given triple, which I don't know how to do. This
is also an experiment to see if it even helps at all.
@Kixunil
Contributor

Kixunil commented Apr 8, 2020

So I wrote the patch; please note the fix-alignment branch. Try swapping the cc dependency for the changed code and compiling in release mode.

Also, it's maybe a good idea to run rustc --print cfg and paste the output here; I'd like to check whether the output is what I'd expect.
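
For context, rustc --print cfg prints the target configuration as key="value" lines; the ones most relevant here are target_arch and target_pointer_width, which on Raspbian should read "arm" and "32" even on a Pi 4. A small illustrative sketch that pulls out just those lines:

// Illustrative: run `rustc --print cfg` and show the keys relevant to this
// issue (the architecture and pointer width of the default target).
use std::process::Command;

fn main() {
    let out = Command::new("rustc")
        .arg("--print")
        .arg("cfg")
        .output()
        .expect("failed to run rustc");
    for line in String::from_utf8_lossy(&out.stdout).lines() {
        if line.starts_with("target_arch") || line.starts_with("target_pointer_width") {
            println!("{}", line);
        }
    }
}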

@lukechilds
Contributor Author

@Kixunil trying to get your branch working now.

Btw, seems like #193 is related...?

@Kixunil
Contributor

Kixunil commented Apr 11, 2020

I don't think it is. This is SIGBUS; that one is SIGILL. But we don't have a backtrace from that instance, so it's hard to say. Actually, the stack trace is in the other issue, and it's significantly different.

@lukechilds
Contributor Author

Interestingly, I just quickly tried updating the base Rust image and that seems to be working.

diff --git a/Dockerfile b/Dockerfile
index 228b4f1..3518a80 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,4 +1,4 @@
-FROM rust:1.34.0-slim
+FROM rust:1.42.0-slim

 RUN apt-get update
 RUN apt-get install -y clang cmake

Building the Docker image with just that change applied to master works without crashes.

@Kixunil
Contributor

Kixunil commented Apr 11, 2020

🤔 That might mean my analysis was incorrect, or that the new version of Rust somehow affects the allocator, or something. I'll have to think about it.

Would you still be willing to try my patch, so we can understand the situation better?

@lukechilds
Contributor Author

Yeah, of course!

Would it make more sense to check out the rust-rocksdb repo, and then I can just (I think) point its Cargo.toml at your repo/branch for cc?

I tried just checking out rust-rocksdb in a fresh Rust Docker image and running cargo test, but that fails with some libclang-related stuff:

   Compiling toml v0.5.6
   Compiling serde_json v1.0.51
   Compiling librocksdb-sys v6.7.3 (/rust-rocksdb/librocksdb-sys)
   Compiling trybuild v1.0.25
error: failed to run custom build command for `librocksdb-sys v6.7.3 (/rust-rocksdb/librocksdb-sys)`

Caused by:
  process didn't exit successfully: `/rust-rocksdb/target/debug/build/librocksdb-sys-4eb2e00cfdb9f9f4/build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-changed=build.rs
cargo:rerun-if-changed=rocksdb/
cargo:rerun-if-changed=snappy/
cargo:rerun-if-changed=lz4/
cargo:rerun-if-changed=zstd/
cargo:rerun-if-changed=zlib/
cargo:rerun-if-changed=bzip2/
cargo:warning=couldn't execute `llvm-config --prefix` (error: No such file or directory (os error 2))
cargo:warning=set the LLVM_CONFIG_PATH environment variable to the full path to a valid `llvm-config` executable (including the executable itself)

--- stderr
thread 'main' panicked at 'Unable to find libclang: "couldn\'t find any valid shared libraries matching: [\'libclang.so\', \'libclang-*.so\', \'libclang.so.*\', \'libclang-*.so.*\'], set the `LIBCLANG_PATH` environment variable to a path
where one of these files can be found (invalid: [])"', /usr/local/cargo/registry/src/gh.neting.cc-1ecc6299db9ec823/bindgen-0.53.2/src/lib.rs:1956:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

warning: build failed, waiting for other jobs to finish...
error: build failed

I ran apt-get install libclang-dev but then it still seemed to have trouble finding some stuff:

   Compiling librocksdb-sys v5.18.3 (/rust-rocksdb/librocksdb-sys)
error: failed to run custom build command for `librocksdb-sys v5.18.3 (/rust-rocksdb/librocksdb-sys)`

Caused by:
  process didn't exit successfully: `/rust-rocksdb/target/debug/build/librocksdb-sys-3bd9be25e3624f67/build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-changed=build.rs
cargo:rerun-if-changed=rocksdb/
cargo:rerun-if-changed=snappy/
cargo:rerun-if-changed=lz4/
cargo:rerun-if-changed=zstd/
cargo:rerun-if-changed=zlib/
cargo:rerun-if-changed=bzip2/

--- stderr
rocksdb/include/rocksdb/c.h:65:10: fatal error: 'stdarg.h' file not found
rocksdb/include/rocksdb/c.h:65:10: fatal error: 'stdarg.h' file not found, err: true
thread 'main' panicked at 'unable to generate rocksdb bindings: ()', librocksdb-sys/build.rs:34:20
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Note that I haven't even pointed it at your cc patch yet; this is just with the official master branch.

Is this expected, or is this something that needs fixing before proceeding to test your cc patch?

@Kixunil
Contributor

Kixunil commented Apr 11, 2020

You also need cmake and libsnappy-dev

I understand now: you didn't just upgrade Rust, you also upgraded the whole underlying OS from stretch to buster, which probably fixed the bug somewhere in the C++ toolchain. Fixing the bug in cc might still be valuable for people running older systems.

@lukechilds
Contributor Author

you also upgraded the whole underlying OS from stretch to buster

Ahh, you're right!

$ docker run -it --entrypoint bash rust:1.34.0-slim -c 'cat /etc/os-release | grep PRETTY'
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
$ docker run -it --entrypoint bash rust:1.42.0-slim -c 'cat /etc/os-release | grep PRETTY'
PRETTY_NAME="Debian GNU/Linux 10 (buster)"

That's very confusing. They offer both stretch and buster variants on Docker Hub, but if you don't specify one, the default Debian version differs depending on which Rust version you pick.

If I submit a PR to update the base Rust Docker image, which seems to resolve this issue, do you want me to pin the Debian version too so this stays fixed going forward?

We could pin the Docker image to 1.42.0-slim-buster; that might make it easier to troubleshoot any future issues.

Of course I'm still happy to continue testing cc changes on stretch too.

@Kixunil
Contributor

Kixunil commented Apr 11, 2020

AFAIK @romanz had a good reason to keep Rust 1.34: it's the latest supported Rust version in Debian Buster. But maybe if we could change the underlying OS to Buster while keeping 1.34, that might work.

@lukechilds
Contributor Author

I installed those other packages:

apt-get install git libclang-dev cmake libsnappy-dev

But still got the same error as before:

   Compiling toml v0.5.6
   Compiling serde_json v1.0.51
   Compiling librocksdb-sys v6.7.3 (/rust-rocksdb/librocksdb-sys)
   Compiling trybuild v1.0.25
error: failed to run custom build command for `librocksdb-sys v6.7.3 (/rust-rocksdb/librocksdb-sys)`

Caused by:
  process didn't exit successfully: `/rust-rocksdb/target/debug/build/librocksdb-sys-4eb2e00cfdb9f9f4/build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-changed=build.rs
cargo:rerun-if-changed=rocksdb/
cargo:rerun-if-changed=snappy/
cargo:rerun-if-changed=lz4/
cargo:rerun-if-changed=zstd/
cargo:rerun-if-changed=zlib/
cargo:rerun-if-changed=bzip2/
cargo:warning=couldn't execute `llvm-config --prefix` (error: No such file or directory (os error 2))
cargo:warning=set the LLVM_CONFIG_PATH environment variable to the full path to a valid `llvm-config` executable (including the executable itself)

--- stderr
rocksdb/include/rocksdb/c.h:65:10: fatal error: 'stdarg.h' file not found
rocksdb/include/rocksdb/c.h:65:10: fatal error: 'stdarg.h' file not found, err: true
thread 'main' panicked at 'unable to generate rocksdb bindings: ()', librocksdb-sys/build.rs:30:20
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

warning: build failed, waiting for other jobs to finish...
error: build failed

Strange because stdarg.h does exist:

$ find / -name stdarg.h
/usr/lib/gcc/arm-linux-gnueabihf/8/include/stdarg.h
/usr/lib/llvm-7/lib/clang/7.0.1/include/stdarg.h
/usr/include/c++/8/tr1/stdarg.h

@Kixunil
Contributor

Kixunil commented Apr 11, 2020

Damn, I swear I saw this problem before and it was something trivial like a missing package.

@Kixunil
Contributor

Kixunil commented Apr 11, 2020

Ah, I think it was llvm-dev.

@lukechilds
Contributor Author

Oh also:

But maybe if we could change the underlying OS to Buster while keeping 1.34, that might work.

There is no official Buster image for 1.34, only stretch.

https://hub.docker.com/_/rust?tab=tags&page=1&name=1.34

So if we need to stick to Rust 1.34 then we'll need to fix the cc issue anyway.

@lukechilds
Contributor Author

Thanks, it's late here now but I'll try llvm-dev in the morning!

@Kixunil
Contributor

Kixunil commented Apr 11, 2020

There is no official Buster image for 1.34, only stretch.

WTF. Could we perhaps depend on pure Debian buster (without Rust) and just do apt install cargo? 1.34 is in the repository. I think there's a way to install a specific version, so we avoid accidentally upgrading Rust if a new version finally lands.

@lukechilds
Contributor Author

lukechilds commented Apr 12, 2020

llvm-dev didn't seem to help either, same issue:

$ docker run -it --entrypoint bash rust:1.42.0-slim
root@b00becd1e206:/# apt update && apt install -y git libclang-dev cmake libsnappy-dev llvm-dev &&  git clone https://github.com/rust-rocksdb/rust-rocksdb && cd rust-rocksdb && git submodule update --init --recursive && cargo test
...
   Compiling librocksdb-sys v6.7.3 (/rust-rocksdb/librocksdb-sys)
error: failed to run custom build command for `librocksdb-sys v6.7.3 (/rust-rocksdb/librocksdb-sys)`

Caused by:
  process didn't exit successfully: `/rust-rocksdb/target/debug/build/librocksdb-sys-4eb2e00cfdb9f9f4/build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-changed=build.rs
cargo:rerun-if-changed=rocksdb/
cargo:rerun-if-changed=snappy/
cargo:rerun-if-changed=lz4/
cargo:rerun-if-changed=zstd/
cargo:rerun-if-changed=zlib/
cargo:rerun-if-changed=bzip2/

--- stderr
rocksdb/include/rocksdb/c.h:65:10: fatal error: 'stdarg.h' file not found
rocksdb/include/rocksdb/c.h:65:10: fatal error: 'stdarg.h' file not found, err: true
thread 'main' panicked at 'unable to generate rocksdb bindings: ()', librocksdb-sys/build.rs:30:20
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

@lukechilds
Contributor Author

Could we perhaps depend on pure Debian buster (without Rust) and just do apt install cargo?

Yeah definitely I'll see if that works.

@lukechilds
Contributor Author

Manually adding export C_INCLUDE_PATH=/usr/lib/llvm-7/lib/clang/7.0.1/include/ gets rid of the 'stdarg.h' file not found error, but then it just fails a bit later on. Shouldn't this already be set or automatically detected...?

@lukechilds
Contributor Author

lukechilds commented Apr 12, 2020

Ah, OK, I got it! Just apt install llvm clang is all that's required to build.

This successfully builds and runs the tests against a fresh Rust Docker image:

apt update && apt install -y git llvm clang &&  git clone https://github.com/rust-rocksdb/rust-rocksdb && cd rust-rocksdb && git submodule update --init --recursive && cargo test

I moved this over to my workstation because it was taking forever, but I'll try it on the Pi now and see if it fails, and whether dropping in your cc patch resolves it.

@Kixunil
Contributor

Kixunil commented Apr 12, 2020

I believe you only need to add:

[patch.crates-io]
cc = { git = "https://github.com/Kixunil/cc-rs", branch = "fix-alignment" }

to Cargo.toml of electrs to test it

@lukechilds
Contributor Author

lukechilds commented Apr 12, 2020

When updating the lockfile I was getting:

root@b867c2f4023f:/data# cargo update -p cc
    Updating crates.io index
    Updating git repository `https://github.com/Kixunil/cc-rs`
warning: Patch `cc v1.0.50 (https://github.com/Kixunil/cc-rs?branch=fix-alignment#f6e079e8)` was not used in the crate graph.
Check that the patched package version and available features are compatible
with the dependency requirements. If the patch has a different version from
what is locked in the Cargo.lock file, run `cargo update` to use the new
version. This may also occur with an optional dependency that is not enabled.

So I had to change the version number in the patch branch to 1.0.41 so that the rust-rocksdb dep would resolve to it.

https://github.com/lukechilds/cc-rs/tree/fix-alignment

That successfully patches the cc dep; however, building fails with unrecognized command line option errors:

   Compiling env_logger v0.6.2
   Compiling chrono v0.4.9
   Compiling synstructure v0.12.1
error: failed to run custom build command for `backtrace-sys v0.1.32`
process didn't exit successfully: `/home/user/target/release/build/backtrace-sys-6227e9ef15992805/build-script-build` (exit code: 1)
--- stdout
cargo:rustc-cfg=rbt
TARGET = Some("armv7-unknown-linux-gnueabihf")
OPT_LEVEL = Some("3")
HOST = Some("armv7-unknown-linux-gnueabihf")
CC_armv7-unknown-linux-gnueabihf = None
CC_armv7_unknown_linux_gnueabihf = None
HOST_CC = None
CC = None
CFLAGS_armv7-unknown-linux-gnueabihf = None
CFLAGS_armv7_unknown_linux_gnueabihf = None
HOST_CFLAGS = None
CFLAGS = None
CRATE_CC_NO_DEFAULTS = None
CARGO_CFG_TARGET_FEATURE = None
running: "cc" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-march=armv7-a" "-mstack-align=32" "-mdata-align=32" "-mconst-align=32" "-I" "src/libbacktrace" "-I" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out"
 "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=32" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-Dbacktrace_full=__rbt_backtrace_full" "-Dbacktrace_dwarf_add=__rbt_backtrace_dwarf_add" "-Dbacktrace_initialize=__rbt_backtrace_initialize" "-Dbacktrace_pcinfo=__rbt_backtrace_pcinfo" "-Dbacktrace_syminfo=__rbt_backtrace_syminfo" "-Dbacktrace_get_view=__rbt_backtrace_get_view" "-Dbacktrace_release_view=__rbt_backtrace_release_view" "-Dbacktrace_alloc=__rbt_backtrace_alloc" "-Dbacktrace_free=__rbt_backtrace_free" "-Dbacktrace_vector_finish=__rbt_backtrace_vector_finish" "-Dbacktrace_vector_grow=__rbt_backtrace_vector_grow" "-Dbacktrace_vector_release=__rbt_backtrace_vector_release" "-Dbacktrace_close=__rbt_backtrace_close" "-Dbacktrace_open=__rbt_backtrace_open" "-Dbacktrace_print=__rbt_backtrace_print" "-Dbacktrace_simple=__rbt_backtrace_simple" "-Dbacktrace_qsort=__rbt_backtrace_qsort" "-Dbacktrace_create_state=__rbt_backtrace_create_state" "-Dbacktrace_uncompress_zdebug=__rbt_backtrace_uncompress_zdebug" "-Dmacho_get_view=__rbt_macho_get_view" "-Dmacho_symbol_type_relevant=__rbt_macho_symbol_type_relevant" "-Dmacho_get_commands=__rbt_macho_get_commands" "-Dmacho_try_dsym=__rbt_macho_try_dsym" "-Dmacho_try_dwarf=__rbt_macho_try_dwarf" "-Dmacho_get_addr_range=__rbt_macho_get_addr_range" "-Dmacho_get_uuid=__rbt_macho_get_uuid" "-Dmacho_add=__rbt_macho_add" "-Dmacho_add_symtab=__rbt_macho_add_symtab" "-Dmacho_file_to_host_u64=__rbt_macho_file_to_host_u64" "-Dmacho_file_to_host_u32=__rbt_macho_file_to_host_u32" "-
Dmacho_file_to_host_u16=__rbt_macho_file_to_host_u16" "-o" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out/src/libbacktrace/alloc.o" "-c" "src/libbacktrace/alloc.c"
cargo:warning=cc: error: unrecognized command line option '-mstack-align=32'; did you mean '-Wstack-usage='?
cargo:warning=cc: error: unrecognized command line option '-mdata-align=32'; did you mean '-Wcast-align'?
cargo:warning=cc: error: unrecognized command line option '-mconst-align=32'; did you mean '-Wcast-align'?
exit code: 1
running: "cc" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-march=armv7-a" "-mstack-align=32" "-mdata-align=32" "-mconst-align=32" "-I" "src/libbacktrace" "-I" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out"
 "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=32" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-Dbacktrace_full=__rbt_backtrace_full" "-Dbacktrace_dwarf_add=__rbt_backtrace_dwarf_add" "-Dbacktrace_initialize=__rbt_backtrace_initialize" "-Dbacktrace_pcinfo=__rbt_backtrace_pcinfo" "-Dbacktrace_syminfo=__rbt_backtrace_syminfo" "-Dbacktrace_get_view=__rbt_backtrace_get_view" "-Dbacktrace_release_view=__rbt_backtrace_release_view" "-Dbacktrace_alloc=__rbt_backtrace_alloc" "-Dbacktrace_free=__rbt_backtrace_free" "-Dbacktrace_vector_finish=__rbt_backtrace_vector_finish" "-Dbacktrace_vector_grow=__rbt_backtrace_vector_grow" "-Dbacktrace_vector_release=__rbt_backtrace_vector_release" "-Dbacktrace_close=__rbt_backtrace_close" "-Dbacktrace_open=__rbt_backtrace_open" "-Dbacktrace_print=__rbt_backtrace_print" "-Dbacktrace_simple=__rbt_backtrace_simple" "-Dbacktrace_qsort=__rbt_backtrace_qsort" "-Dbacktrace_create_state=__rbt_backtrace_create_state" "-Dbacktrace_uncompress_zdebug=__rbt_backtrace_uncompress_zdebug" "-Dmacho_get_view=__rbt_macho_get_view" "-Dmacho_symbol_type_relevant=__rbt_macho_symbol_type_relevant" "-Dmacho_get_commands=__rbt_macho_get_commands" "-Dmacho_try_dsym=__rbt_macho_try_dsym" "-Dmacho_try_dwarf=__rbt_macho_try_dwarf" "-Dmacho_get_addr_range=__rbt_macho_get_addr_range" "-Dmacho_get_uuid=__rbt_macho_get_uuid" "-Dmacho_add=__rbt_macho_add" "-Dmacho_add_symtab=__rbt_macho_add_symtab" "-Dmacho_file_to_host_u64=__rbt_macho_file_to_host_u64" "-Dmacho_file_to_host_u32=__rbt_macho_file_to_host_u32" "-
Dmacho_file_to_host_u16=__rbt_macho_file_to_host_u16" "-o" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out/src/libbacktrace/dwarf.o" "-c" "src/libbacktrace/dwarf.c"
cargo:warning=cc: error: unrecognized command line option '-mstack-align=32'; did you mean '-Wstack-usage='?
cargo:warning=cc: error: unrecognized command line option '-mdata-align=32'; did you mean '-Wcast-align'?
cargo:warning=cc: error: unrecognized command line option '-mconst-align=32'; did you mean '-Wcast-align'?
exit code: 1

--- stderr


error occurred: Command "cc" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-march=armv7-a" "-mstack-align=32" "-mdata-align=32" "-mconst-align=32" "-I" "src/libbacktrace" "-I" "/home/user/target/release/build/backtrace-sys-b2a6bb
045adf9ded/out" "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=32" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-Dbacktrace_full=__rbt_backtrace_full" "-Dbacktrace_dwarf_add=__rbt_backtrace_dwarf_add" "-Dbacktrace_initialize=__rbt_backtrace_initialize" "-Dbacktrace_pcinfo=__rbt_backtrace_pcinfo" "-Dbacktrace_syminfo=__rbt_backtrace_syminfo" "-Dbacktrace_get_view=__rbt_backtrace_get_view" "-Dbacktrace_release_view=__rbt_backtrace_release_view" "-Dbacktrace_alloc=__rbt_backtrace_alloc" "-Dbacktrace_free=__rbt_backtrace_free" "-Dbacktrace_vector_finish=__rbt_backtrace_vector_finish" "-Dbacktrace_vector_grow=__rbt_backtrace_vector_grow" "-Dbacktrace_vector_release=__rbt_backtrace_vector_release" "-Dbacktrace_close=__rbt_backtrace_close" "-Dbacktrace_open=__rbt_backtrace_open" "-Dbacktrace_print=__rbt_backtrace_print" "-Dbacktrace_simple=__rbt_backtrace_simple" "-Dbacktrace_qsort=__rbt_backtrace_qsort" "-Dbacktrace_create_state=__rbt_backtrace_create_state" "-Dbacktrace_uncompress_zdebug=__rbt_backtrace_uncompress_zdebug" "-Dmacho_get_view=__rbt_macho_get_view" "-Dmacho_symbol_type_relevant=__rbt_macho_symbol_type_relevant" "-Dmacho_get_commands=__rbt_macho_get_commands" "-Dmacho_try_dsym=__rbt_macho_try_dsym" "-Dmacho_try_dwarf=__rbt_macho_try_dwarf" "-Dmacho_get_addr_range=__rbt_macho_get_addr_range" "-Dmacho_get_uuid=__rbt_macho_get_uuid" "-Dmacho_add=__rbt_macho_add" "-Dmacho_add_symtab=__rbt_macho_add_symtab" "-Dmacho_file_to_host_u64=__rbt_macho_file_to_host_u64" "-Dmacho_file_to_host_u32=__rbt_macho_file_to_host_u32" "-Dmacho_file_to_host_u16=__rbt_macho_file_to_host_u16" "-o" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out/src/libbacktrace/alloc.o" "-c" "src/libbacktrace/alloc.c" with args "cc" did not execute success
fully (status code exit code: 1).



warning: build failed, waiting for other jobs to finish...
error: build failed
The command '/bin/sh -c cargo build --release' returned a non-zero code: 101

@Kixunil
Contributor

Kixunil commented Apr 12, 2020

🤔 Going to check what the problem with gcc is. In the meantime, could you try setting CC=clang and building again?

@lukechilds
Contributor Author

With CC=clang:

   Compiling backtrace-sys v0.1.32                                                                                                                                                                                                   [29/1765]
   Compiling libloading v0.5.2
   Compiling secp256k1 v0.15.5
   Compiling url v1.7.2
   Compiling chrono v0.4.9
   Compiling env_logger v0.6.2
error: failed to run custom build command for `libloading v0.5.2`
process didn't exit successfully: `/home/user/target/release/build/libloading-08979a0c985223d9/build-script-build` (exit code: 1)
--- stdout
cargo:rustc-link-lib=dl
TARGET = Some("armv7-unknown-linux-gnueabihf")
OPT_LEVEL = Some("3")
HOST = Some("armv7-unknown-linux-gnueabihf")
CC_armv7-unknown-linux-gnueabihf = None
CC_armv7_unknown_linux_gnueabihf = None
HOST_CC = None
CC = Some("clang")
CFLAGS_armv7-unknown-linux-gnueabihf = None
CFLAGS_armv7_unknown_linux_gnueabihf = None
HOST_CFLAGS = None
CFLAGS = None
CRATE_CC_NO_DEFAULTS = None
DEBUG = Some("false")
running: "clang" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "--target=armv7-unknown-linux-gnueabihf" "-fnew-alignment=32" "-fmax-type-align=32" "-Wall" "-Wextra" "-o" "/home/user/target/release/build/libloading-1a1ddd315c814bf5
/out/src/os/unix/global_static.o" "-c" "src/os/unix/global_static.c"
cargo:warning=clang: error: unknown argument: '-fnew-alignment=32'
exit code: 1

--- stderr


error occurred: Command "clang" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "--target=armv7-unknown-linux-gnueabihf" "-fnew-alignment=32" "-fmax-type-align=32" "-Wall" "-Wextra" "-o" "/home/user/target/release/build/libloading-1
a1ddd315c814bf5/out/src/os/unix/global_static.o" "-c" "src/os/unix/global_static.c" with args "clang" did not execute successfully (status code exit code: 1).



warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `backtrace-sys v0.1.32`
process didn't exit successfully: `/home/user/target/release/build/backtrace-sys-6227e9ef15992805/build-script-build` (exit code: 1)
--- stdout
cargo:rustc-cfg=rbt
TARGET = Some("armv7-unknown-linux-gnueabihf")
OPT_LEVEL = Some("3")
HOST = Some("armv7-unknown-linux-gnueabihf")
CC_armv7-unknown-linux-gnueabihf = None
CC_armv7_unknown_linux_gnueabihf = None
HOST_CC = None
CC = Some("clang")
CFLAGS_armv7-unknown-linux-gnueabihf = None
CFLAGS_armv7_unknown_linux_gnueabihf = None
HOST_CFLAGS = None
CFLAGS = None
CRATE_CC_NO_DEFAULTS = None
running: "clang" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "--target=armv7-unknown-linux-gnueabihf" "-fnew-alignment=32" "-fmax-type-align=32" "-I" "src/libbacktrace" "-I" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out" "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=32" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-Dbacktrace_full=__rbt_backtrace_full" "-Dbacktrace_dwarf_add=__rbt_backtrace_dwarf_add" "-Dbacktrace_initialize=__rbt_backtrace_initialize" "-Dbacktrace_pcinfo=__rbt_backtrace_pcinfo" "-Dbacktrace_syminfo=__rbt_backtrace_syminfo" "-Dbacktrace_get_view=__rbt_backtrace_get_view" "-Dbacktrace_release_view=__rbt_backtrace_release_view" "-Dbacktrace_alloc=__rbt_backtrace_alloc" "-Dbacktrace_free=__rbt_backtrace_free" "-Dbacktrace_vector_finish=__rbt_backtrace_vector_finish" "-Dbacktrace_vector_grow=__rbt_backtrace_vector_grow" "-Dbacktrace_vector_release=__rbt_backtrace_vector_release" "-Dbacktrace_close=__rbt_backtrace_close" "-Dbacktrace_open=__rbt_backtrace_open" "-Dbacktrace_print=__rbt_backtrace_print" "-Dbacktrace_simple=__rbt_backtrace_simple" "-Dbacktrace_qsort=__rbt_backtrace_qsort" "-Dbacktrace_create_state=__rbt_backtrace_create_state" "-Dbacktrace_uncompress_zdebug=__rbt_backtrace_uncompress_zdebug" "-Dmacho_get_view=__rbt_macho_get_view" "-Dmacho_symbol_type_relevant=__rbt_macho_symbol_type_relevant" "-Dmacho_get_commands=__rbt_macho_get_commands" "-Dmacho_try_dsym=__rbt_macho_try_dsym" "-Dmacho_try_dwarf=__rbt_macho_try_dwarf" "-Dmacho_get_addr_range=__rbt_macho_get_addr_range" "-Dmacho_get_uuid=__rbt_macho_get_uuid" "-Dmacho_add=__rbt_macho_add" "-Dmacho_add_symtab=__rbt_macho_add_symtab" "-Dmacho_file_to_host_u64=__rbt_macho_file_to_host_u64" "-Dmacho_file_to_host_u32=__rbt_macho_file_to_host_u32" "-Dmacho_file_to_host_u16=__rbt_macho_file_to_host_u16" "-o" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out/src/libbacktrace/alloc.o" "-c" "src/libbacktrace/alloc.c"
cargo:warning=clang: error: unknown argument: '-fnew-alignment=32'
exit code: 1
running: "clang" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "--target=armv7-unknown-linux-gnueabihf" "-fnew-alignment=32" "-fmax-type-align=32" "-I" "src/libbacktrace" "-I" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out" "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=32" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-Dbacktrace_full=__rbt_backtrace_full" "-Dbacktrace_dwarf_add=__rbt_backtrace_dwarf_add" "-Dbacktrace_initialize=__rbt_backtrace_initialize" "-Dbacktrace_pcinfo=__rbt_backtrace_pcinfo" "-Dbacktrace_syminfo=__rbt_backtrace_syminfo" "-Dbacktrace_get_view=__rbt_backtrace_get_view" "-Dbacktrace_release_view=__rbt_backtrace_release_view" "-Dbacktrace_alloc=__rbt_backtrace_alloc" "-Dbacktrace_free=__rbt_backtrace_free" "-Dbacktrace_vector_finish=__rbt_backtrace_vector_finish" "-Dbacktrace_vector_grow=__rbt_backtrace_vector_grow" "-Dbacktrace_vector_release=__rbt_backtrace_vector_release" "-Dbacktrace_close=__rbt_backtrace_close" "-Dbacktrace_open=__rbt_backtrace_open" "-Dbacktrace_print=__rbt_backtrace_print" "-Dbacktrace_simple=__rbt_backtrace_simple" "-Dbacktrace_qsort=__rbt_backtrace_qsort" "-Dbacktrace_create_state=__rbt_backtrace_create_state" "-Dbacktrace_uncompress_zdebug=__rbt_backtrace_uncompress_zdebug" "-Dmacho_get_view=__rbt_macho_get_view" "-Dmacho_symbol_type_relevant=__rbt_macho_symbol_type_relevant" "-Dmacho_get_commands=__rbt_macho_get_commands" "-Dmacho_try_dsym=__rbt_macho_try_dsym" "-Dmacho_try_dwarf=__rbt_macho_try_dwarf" "-Dmacho_get_addr_range=__rbt_macho_get_addr_range" "-Dmacho_get_uuid=__rbt_macho_get_uuid" "-Dmacho_add=__rbt_macho_add" "-Dmacho_add_symtab=__rbt_macho_add_symtab" "-Dmacho_file_to_host_u64=__rbt_macho_file_to_host_u64" "-Dmacho_file_to_host_u32=__rbt_macho_file_to_host_u32" "-Dmacho_file_to_host_u16=__rbt_macho_file_to_host_u16" "-o" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out/src/libbacktrace/dwarf.o" "-c" "src/libbacktrace/dwarf.c"
cargo:warning=clang: error: unknown argument: '-fnew-alignment=32'
exit code: 1

--- stderr


error occurred: Command "clang" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "--target=armv7-unknown-linux-gnueabihf" "-fnew-alignment=32" "-fmax-type-align=32" "-I" "src/libbacktrace" "-I" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out" "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=32" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-Dbacktrace_full=__rbt_backtrace_full" "-Dbacktrace_dwarf_add=__rbt_backtrace_dwarf_add" "-Dbacktrace_initialize=__rbt_backtrace_initialize" "-Dbacktrace_pcinfo=__rbt_backtrace_pcinfo" "-Dbacktrace_syminfo=__rbt_backtrace_syminfo" "-Dbacktrace_get_view=__rbt_backtrace_get_view" "-Dbacktrace_release_view=__rbt_backtrace_release_view" "-Dbacktrace_alloc=__rbt_backtrace_alloc" "-Dbacktrace_free=__rbt_backtrace_free" "-Dbacktrace_vector_finish=__rbt_backtrace_vector_finish" "-Dbacktrace_vector_grow=__rbt_backtrace_vector_grow" "-Dbacktrace_vector_release=__rbt_backtrace_vector_release" "-Dbacktrace_close=__rbt_backtrace_close" "-Dbacktrace_open=__rbt_backtrace_open" "-Dbacktrace_print=__rbt_backtrace_print" "-Dbacktrace_simple=__rbt_backtrace_simple" "-Dbacktrace_qsort=__rbt_backtrace_qsort" "-Dbacktrace_create_state=__rbt_backtrace_create_state" "-Dbacktrace_uncompress_zdebug=__rbt_backtrace_uncompress_zdebug" "-Dmacho_get_view=__rbt_macho_get_view" "-Dmacho_symbol_type_relevant=__rbt_macho_symbol_type_relevant" "-Dmacho_get_commands=__rbt_macho_get_commands" "-Dmacho_try_dsym=__rbt_macho_try_dsym" "-Dmacho_try_dwarf=__rbt_macho_try_dwarf" "-Dmacho_get_addr_range=__rbt_macho_get_addr_range" "-Dmacho_get_uuid=__rbt_macho_get_uuid" "-Dmacho_add=__rbt_macho_add" "-Dmacho_add_symtab=__rbt_macho_add_symtab" "-Dmacho_file_to_host_u64=__rbt_macho_file_to_host_u64" "-Dmacho_file_to_host_u32=__rbt_macho_file_to_host_u32" "-Dmacho_file_to_host_u16=__rbt_macho_file_to_host_u16" "-o" "/home/user/target/release/build/backtrace-sys-b2a6bb045adf9ded/out/src/libbacktrace/alloc.o" "-c" "src/libbacktrace/alloc.c" with args "clang" did not execute successfully (status code exit code: 1).



warning: build failed, waiting for other jobs to finish...
error: build failed
The command '/bin/sh -c cargo build --release' returned a non-zero code: 101

@Kixunil
Contributor

Kixunil commented Apr 12, 2020

🤦 this is terrible. The compilers document those arguments, yet don't support them. Maybe because they are too old? You could check the versions...

But honestly, I feel quite demotivated right now. Looks like the most robust solution is to upgrade the OS anyway.

I'd suggest:

  • Updating the Dockerfile (I don't know Docker well enough to try it; you may do the PR, or @romanz can)
  • Documenting that electrs doesn't work on old RPi OSes because they (or their toolchains) are broken. (Yes, I'm pretty confident that it's their fault, not electrs's, for misallocating/miscompiling.)

If you're interested in knowing more precisely who to blame, you might try CC=clang without my patch - if it works, it's the fault of GCC; if it doesn't, it's probably the fault of the allocator. The information might have some documentation value for people who are stuck on old versions for some reason, but I'm not really sure they'd be able to do anything about it.

Thanks a lot for your willingness to go beyond the necessary steps and help me understand the situation better! I suggest closing this after the Dockerfile and/or documentation are updated.

@romanz
Owner

romanz commented Apr 12, 2020

Many thanks for reporting and debugging this issue!
Updated Dockerfile to use rust:1.42.0-slim at 7d23da5.

@lukechilds
Contributor Author

lukechilds commented Apr 13, 2020

@romanz oops, I just saw you'd already updated this, after opening my PR and seeing there's a merge conflict. I made the commit late last night and didn't have time to open a PR.

@Kixunil indicated there was a reason you wanted to keep Rust 1.34.0:

AFAIK @romanz had a good reason to keep Rust 1.34: it's the latest supported Rust version in Debian Buster. But maybe if we could change the underlying OS to Buster while keeping 1.34, that might work.
#226 (comment)

Is that the case?

If so, I made a Dockerfile that is based on Debian Buster and then installs Rust 1.34.0, with an easy way to change the Rust version: #234

If you'd rather update to 1.42.0 then obviously just ignore that PR. However, it might be worth locking to a specific Debian version, e.g. rust:1.42.0-slim-buster instead of rust:1.42.0-slim, to prevent any future toolchain changes from affecting electrs builds: #235

@romanz
Owner

romanz commented Apr 14, 2020

Resolved via #235 - thanks for the help :)

@Kixunil
Contributor

Kixunil commented Apr 14, 2020

This issue should be closed, right?

@lukechilds
Contributor Author

Yeah.
